Nov 12 22:42:02.882876 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 21:10:03 -00 2024
Nov 12 22:42:02.882904 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 12 22:42:02.882915 kernel: BIOS-provided physical RAM map:
Nov 12 22:42:02.882921 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 12 22:42:02.882927 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 12 22:42:02.882934 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 12 22:42:02.882941 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 12 22:42:02.882947 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 12 22:42:02.882953 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 12 22:42:02.882961 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 12 22:42:02.882968 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 12 22:42:02.882974 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 12 22:42:02.882980 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 12 22:42:02.882986 kernel: NX (Execute Disable) protection: active
Nov 12 22:42:02.882994 kernel: APIC: Static calls initialized
Nov 12 22:42:02.883003 kernel: SMBIOS 2.8 present.
Nov 12 22:42:02.883010 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 12 22:42:02.883016 kernel: Hypervisor detected: KVM
Nov 12 22:42:02.883023 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 22:42:02.883030 kernel: kvm-clock: using sched offset of 3229355628 cycles
Nov 12 22:42:02.883036 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 22:42:02.883044 kernel: tsc: Detected 2794.748 MHz processor
Nov 12 22:42:02.883051 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 22:42:02.883058 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 22:42:02.883065 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 12 22:42:02.883074 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 12 22:42:02.883081 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 22:42:02.883088 kernel: Using GB pages for direct mapping
Nov 12 22:42:02.883094 kernel: ACPI: Early table checksum verification disabled
Nov 12 22:42:02.883101 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 12 22:42:02.883108 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:42:02.883115 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:42:02.883122 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:42:02.883131 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 12 22:42:02.883138 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:42:02.883145 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:42:02.883152 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:42:02.883158 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:42:02.883165 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Nov 12 22:42:02.883172 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Nov 12 22:42:02.883183 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 12 22:42:02.883192 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Nov 12 22:42:02.883199 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Nov 12 22:42:02.883206 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Nov 12 22:42:02.883213 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Nov 12 22:42:02.883220 kernel: No NUMA configuration found
Nov 12 22:42:02.883227 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 12 22:42:02.883234 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Nov 12 22:42:02.883244 kernel: Zone ranges:
Nov 12 22:42:02.883251 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 22:42:02.883258 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 12 22:42:02.883265 kernel: Normal empty
Nov 12 22:42:02.883272 kernel: Movable zone start for each node
Nov 12 22:42:02.883279 kernel: Early memory node ranges
Nov 12 22:42:02.883286 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 12 22:42:02.883293 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 12 22:42:02.883300 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 12 22:42:02.883309 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 22:42:02.883316 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 12 22:42:02.883324 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 12 22:42:02.883331 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 22:42:02.883338 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 22:42:02.883345 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 22:42:02.883352 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 22:42:02.883359 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 22:42:02.883366 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 22:42:02.883376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 22:42:02.883383 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 22:42:02.883390 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 22:42:02.883397 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 22:42:02.883404 kernel: TSC deadline timer available
Nov 12 22:42:02.883411 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 12 22:42:02.883418 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 22:42:02.883425 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 12 22:42:02.883432 kernel: kvm-guest: setup PV sched yield
Nov 12 22:42:02.883442 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 12 22:42:02.883449 kernel: Booting paravirtualized kernel on KVM
Nov 12 22:42:02.883456 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 22:42:02.883463 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 12 22:42:02.883471 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Nov 12 22:42:02.883478 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Nov 12 22:42:02.883485 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 12 22:42:02.883492 kernel: kvm-guest: PV spinlocks enabled
Nov 12 22:42:02.883499 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 22:42:02.883517 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 12 22:42:02.883528 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 22:42:02.883535 kernel: random: crng init done
Nov 12 22:42:02.883542 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 22:42:02.883549 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 22:42:02.883580 kernel: Fallback order for Node 0: 0
Nov 12 22:42:02.883587 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Nov 12 22:42:02.883594 kernel: Policy zone: DMA32
Nov 12 22:42:02.883602 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 22:42:02.883612 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2305K rwdata, 22736K rodata, 42968K init, 2220K bss, 136900K reserved, 0K cma-reserved)
Nov 12 22:42:02.883619 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 22:42:02.883627 kernel: ftrace: allocating 37801 entries in 148 pages
Nov 12 22:42:02.883634 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 22:42:02.883641 kernel: Dynamic Preempt: voluntary
Nov 12 22:42:02.883648 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 22:42:02.883656 kernel: rcu: RCU event tracing is enabled.
Nov 12 22:42:02.883663 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 22:42:02.883670 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 22:42:02.883680 kernel: Rude variant of Tasks RCU enabled.
Nov 12 22:42:02.883688 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 22:42:02.883695 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 22:42:02.883702 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 22:42:02.883709 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 12 22:42:02.883716 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 22:42:02.883723 kernel: Console: colour VGA+ 80x25
Nov 12 22:42:02.883730 kernel: printk: console [ttyS0] enabled
Nov 12 22:42:02.883737 kernel: ACPI: Core revision 20230628
Nov 12 22:42:02.883747 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 22:42:02.883754 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 22:42:02.883761 kernel: x2apic enabled
Nov 12 22:42:02.883769 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 22:42:02.883776 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 12 22:42:02.883783 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 12 22:42:02.883790 kernel: kvm-guest: setup PV IPIs
Nov 12 22:42:02.883807 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 22:42:02.883815 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 12 22:42:02.883822 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 12 22:42:02.883830 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 12 22:42:02.883838 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 12 22:42:02.883849 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 12 22:42:02.883857 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 22:42:02.883877 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 22:42:02.883886 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 22:42:02.883907 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 22:42:02.883923 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 12 22:42:02.883939 kernel: RETBleed: Mitigation: untrained return thunk
Nov 12 22:42:02.883955 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 22:42:02.883971 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 22:42:02.883993 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 12 22:42:02.884010 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 12 22:42:02.884032 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 12 22:42:02.884040 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 22:42:02.884066 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 22:42:02.884074 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 22:42:02.884081 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 22:42:02.884089 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 12 22:42:02.884097 kernel: Freeing SMP alternatives memory: 32K
Nov 12 22:42:02.884104 kernel: pid_max: default: 32768 minimum: 301
Nov 12 22:42:02.884112 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 22:42:02.884119 kernel: landlock: Up and running.
Nov 12 22:42:02.884126 kernel: SELinux: Initializing.
Nov 12 22:42:02.884137 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 22:42:02.884144 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 22:42:02.884152 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 12 22:42:02.884159 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 22:42:02.884167 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 22:42:02.884175 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 22:42:02.884182 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 12 22:42:02.884190 kernel: ... version: 0
Nov 12 22:42:02.884200 kernel: ... bit width: 48
Nov 12 22:42:02.884207 kernel: ... generic registers: 6
Nov 12 22:42:02.884215 kernel: ... value mask: 0000ffffffffffff
Nov 12 22:42:02.884222 kernel: ... max period: 00007fffffffffff
Nov 12 22:42:02.884230 kernel: ... fixed-purpose events: 0
Nov 12 22:42:02.884237 kernel: ... event mask: 000000000000003f
Nov 12 22:42:02.884245 kernel: signal: max sigframe size: 1776
Nov 12 22:42:02.884252 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 22:42:02.884260 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 22:42:02.884267 kernel: smp: Bringing up secondary CPUs ...
Nov 12 22:42:02.884278 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 22:42:02.884285 kernel: .... node #0, CPUs: #1 #2 #3
Nov 12 22:42:02.884292 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 22:42:02.884300 kernel: smpboot: Max logical packages: 1
Nov 12 22:42:02.884307 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 12 22:42:02.884315 kernel: devtmpfs: initialized
Nov 12 22:42:02.884322 kernel: x86/mm: Memory block size: 128MB
Nov 12 22:42:02.884330 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 22:42:02.884338 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 22:42:02.884347 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 22:42:02.884355 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 22:42:02.884362 kernel: audit: initializing netlink subsys (disabled)
Nov 12 22:42:02.884370 kernel: audit: type=2000 audit(1731451322.487:1): state=initialized audit_enabled=0 res=1
Nov 12 22:42:02.884377 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 22:42:02.884385 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 22:42:02.884392 kernel: cpuidle: using governor menu
Nov 12 22:42:02.884400 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 22:42:02.884407 kernel: dca service started, version 1.12.1
Nov 12 22:42:02.884417 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 12 22:42:02.884425 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 12 22:42:02.884432 kernel: PCI: Using configuration type 1 for base access
Nov 12 22:42:02.884440 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 22:42:02.884448 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 22:42:02.884455 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 22:42:02.884463 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 22:42:02.884470 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 22:42:02.884478 kernel: ACPI: Added _OSI(Module Device)
Nov 12 22:42:02.884488 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 22:42:02.884495 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 22:42:02.884511 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 22:42:02.884521 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 22:42:02.884528 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 22:42:02.884536 kernel: ACPI: Interpreter enabled
Nov 12 22:42:02.884543 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 12 22:42:02.884551 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 22:42:02.884571 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 22:42:02.884581 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 22:42:02.884589 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 12 22:42:02.884596 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 22:42:02.884803 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 22:42:02.884932 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 12 22:42:02.885055 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 12 22:42:02.885066 kernel: PCI host bridge to bus 0000:00
Nov 12 22:42:02.885208 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 22:42:02.885327 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 22:42:02.885444 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 22:42:02.885580 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 12 22:42:02.885702 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 12 22:42:02.885885 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 12 22:42:02.886000 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 22:42:02.886193 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 12 22:42:02.886338 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 12 22:42:02.886461 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 12 22:42:02.886624 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 12 22:42:02.886751 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 12 22:42:02.886877 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 22:42:02.887029 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 22:42:02.887153 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 12 22:42:02.887275 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 12 22:42:02.887422 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 12 22:42:02.887658 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 12 22:42:02.887786 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Nov 12 22:42:02.887910 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 12 22:42:02.888037 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 12 22:42:02.888176 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 22:42:02.888300 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Nov 12 22:42:02.888457 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 12 22:42:02.888609 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 12 22:42:02.888733 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 12 22:42:02.888876 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 12 22:42:02.889004 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 12 22:42:02.889140 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 12 22:42:02.889261 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Nov 12 22:42:02.889383 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Nov 12 22:42:02.889525 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 12 22:42:02.889663 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 12 22:42:02.889674 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 22:42:02.889686 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 22:42:02.889695 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 22:42:02.889702 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 22:42:02.889710 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 12 22:42:02.889718 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 12 22:42:02.889725 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 12 22:42:02.889733 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 12 22:42:02.889740 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 12 22:42:02.889750 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 12 22:42:02.889758 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 12 22:42:02.889765 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 12 22:42:02.889773 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 12 22:42:02.889780 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 12 22:42:02.889788 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 12 22:42:02.889795 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 12 22:42:02.889803 kernel: iommu: Default domain type: Translated
Nov 12 22:42:02.889810 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 22:42:02.889817 kernel: PCI: Using ACPI for IRQ routing
Nov 12 22:42:02.889827 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 22:42:02.889835 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 12 22:42:02.889843 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 12 22:42:02.889964 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 12 22:42:02.890085 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 12 22:42:02.890206 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 22:42:02.890215 kernel: vgaarb: loaded
Nov 12 22:42:02.890223 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 22:42:02.890235 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 22:42:02.890242 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 22:42:02.890250 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 22:42:02.890257 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 22:42:02.890265 kernel: pnp: PnP ACPI init
Nov 12 22:42:02.890412 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 12 22:42:02.890423 kernel: pnp: PnP ACPI: found 6 devices
Nov 12 22:42:02.890431 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 22:42:02.890443 kernel: NET: Registered PF_INET protocol family
Nov 12 22:42:02.890450 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 22:42:02.890458 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 22:42:02.890466 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 22:42:02.890473 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 22:42:02.890481 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 22:42:02.890489 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 22:42:02.890496 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 22:42:02.890514 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 22:42:02.890525 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 22:42:02.890532 kernel: NET: Registered PF_XDP protocol family
Nov 12 22:42:02.890683 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 22:42:02.890797 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 22:42:02.890909 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 22:42:02.891020 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 12 22:42:02.891131 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 12 22:42:02.891241 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 12 22:42:02.891255 kernel: PCI: CLS 0 bytes, default 64
Nov 12 22:42:02.891263 kernel: Initialise system trusted keyrings
Nov 12 22:42:02.891271 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 22:42:02.891279 kernel: Key type asymmetric registered
Nov 12 22:42:02.891286 kernel: Asymmetric key parser 'x509' registered
Nov 12 22:42:02.891293 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 22:42:02.891301 kernel: io scheduler mq-deadline registered
Nov 12 22:42:02.891309 kernel: io scheduler kyber registered
Nov 12 22:42:02.891316 kernel: io scheduler bfq registered
Nov 12 22:42:02.891326 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 22:42:02.891334 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 12 22:42:02.891342 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 12 22:42:02.891350 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 12 22:42:02.891357 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 22:42:02.891365 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 22:42:02.891373 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 22:42:02.891383 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 22:42:02.891397 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 22:42:02.891609 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 12 22:42:02.891623 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 22:42:02.891740 kernel: rtc_cmos 00:04: registered as rtc0
Nov 12 22:42:02.891855 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T22:42:02 UTC (1731451322)
Nov 12 22:42:02.891968 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 12 22:42:02.891978 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 12 22:42:02.891986 kernel: NET: Registered PF_INET6 protocol family
Nov 12 22:42:02.891993 kernel: Segment Routing with IPv6
Nov 12 22:42:02.892005 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 22:42:02.892013 kernel: NET: Registered PF_PACKET protocol family
Nov 12 22:42:02.892020 kernel: Key type dns_resolver registered
Nov 12 22:42:02.892027 kernel: IPI shorthand broadcast: enabled
Nov 12 22:42:02.892035 kernel: sched_clock: Marking stable (703002550, 109989489)->(839488470, -26496431)
Nov 12 22:42:02.892043 kernel: registered taskstats version 1
Nov 12 22:42:02.892050 kernel: Loading compiled-in X.509 certificates
Nov 12 22:42:02.892058 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: d04cb2ddbd5c3ca82936c51f5645ef0dcbdcd3b4'
Nov 12 22:42:02.892065 kernel: Key type .fscrypt registered
Nov 12 22:42:02.892075 kernel: Key type fscrypt-provisioning registered
Nov 12 22:42:02.892083 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 22:42:02.892090 kernel: ima: Allocated hash algorithm: sha1
Nov 12 22:42:02.892098 kernel: ima: No architecture policies found
Nov 12 22:42:02.892105 kernel: clk: Disabling unused clocks
Nov 12 22:42:02.892113 kernel: Freeing unused kernel image (initmem) memory: 42968K
Nov 12 22:42:02.892120 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 22:42:02.892128 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Nov 12 22:42:02.892135 kernel: Run /init as init process
Nov 12 22:42:02.892145 kernel: with arguments:
Nov 12 22:42:02.892153 kernel: /init
Nov 12 22:42:02.892160 kernel: with environment:
Nov 12 22:42:02.892167 kernel: HOME=/
Nov 12 22:42:02.892175 kernel: TERM=linux
Nov 12 22:42:02.892182 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 22:42:02.892191 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 22:42:02.892201 systemd[1]: Detected virtualization kvm.
Nov 12 22:42:02.892212 systemd[1]: Detected architecture x86-64.
Nov 12 22:42:02.892219 systemd[1]: Running in initrd.
Nov 12 22:42:02.892227 systemd[1]: No hostname configured, using default hostname.
Nov 12 22:42:02.892235 systemd[1]: Hostname set to .
Nov 12 22:42:02.892243 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 22:42:02.892251 systemd[1]: Queued start job for default target initrd.target.
Nov 12 22:42:02.892259 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 22:42:02.892267 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 22:42:02.892279 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 22:42:02.892299 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 22:42:02.892310 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 22:42:02.892319 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 22:42:02.892328 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 22:42:02.892339 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 22:42:02.892347 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 22:42:02.892356 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 22:42:02.892364 systemd[1]: Reached target paths.target - Path Units.
Nov 12 22:42:02.892372 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 22:42:02.892380 systemd[1]: Reached target swap.target - Swaps.
Nov 12 22:42:02.892389 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 22:42:02.892397 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 22:42:02.892408 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 22:42:02.892416 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 22:42:02.892425 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 22:42:02.892433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 22:42:02.892441 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 22:42:02.892450 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 22:42:02.892458 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 22:42:02.892466 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 22:42:02.892475 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 22:42:02.892486 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 22:42:02.892496 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 22:42:02.892515 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 22:42:02.892523 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 22:42:02.892531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 22:42:02.892540 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 22:42:02.892548 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 22:42:02.892576 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 22:42:02.892607 systemd-journald[194]: Collecting audit messages is disabled.
Nov 12 22:42:02.892629 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 22:42:02.892640 systemd-journald[194]: Journal started
Nov 12 22:42:02.892661 systemd-journald[194]: Runtime Journal (/run/log/journal/c3901cae5719474fab4d42e88ec6a31d) is 6.0M, max 48.4M, 42.3M free.
Nov 12 22:42:02.886238 systemd-modules-load[195]: Inserted module 'overlay'
Nov 12 22:42:02.921581 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 22:42:02.921610 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 22:42:02.921624 kernel: Bridge firewalling registered
Nov 12 22:42:02.913496 systemd-modules-load[195]: Inserted module 'br_netfilter'
Nov 12 22:42:02.921625 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 22:42:02.922234 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 22:42:02.931756 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 22:42:02.937846 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 22:42:02.938683 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 22:42:02.939646 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 22:42:02.944284 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 22:42:02.956166 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 22:42:02.957289 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 22:42:02.958343 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 22:42:02.961179 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 22:42:02.972479 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 22:42:02.982705 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 22:42:02.996403 dracut-cmdline[230]: dracut-dracut-053
Nov 12 22:42:03.000463 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 12 22:42:03.001351 systemd-resolved[224]: Positive Trust Anchors:
Nov 12 22:42:03.001361 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 22:42:03.001403 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 22:42:03.004525 systemd-resolved[224]: Defaulting to hostname 'linux'.
Nov 12 22:42:03.005830 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 22:42:03.007255 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 22:42:03.099618 kernel: SCSI subsystem initialized
Nov 12 22:42:03.111605 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 22:42:03.125625 kernel: iscsi: registered transport (tcp)
Nov 12 22:42:03.148614 kernel: iscsi: registered transport (qla4xxx)
Nov 12 22:42:03.148709 kernel: QLogic iSCSI HBA Driver
Nov 12 22:42:03.213895 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 22:42:03.231885 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 22:42:03.258780 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 22:42:03.258862 kernel: device-mapper: uevent: version 1.0.3
Nov 12 22:42:03.259991 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 22:42:03.308638 kernel: raid6: avx2x4 gen() 24238 MB/s
Nov 12 22:42:03.325614 kernel: raid6: avx2x2 gen() 21531 MB/s
Nov 12 22:42:03.342729 kernel: raid6: avx2x1 gen() 22371 MB/s
Nov 12 22:42:03.342828 kernel: raid6: using algorithm avx2x4 gen() 24238 MB/s
Nov 12 22:42:03.360767 kernel: raid6: .... xor() 7134 MB/s, rmw enabled
Nov 12 22:42:03.360848 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 22:42:03.382604 kernel: xor: automatically using best checksumming function avx
Nov 12 22:42:03.558613 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 22:42:03.570156 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 22:42:03.589727 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 22:42:03.604571 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Nov 12 22:42:03.610458 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 22:42:03.616807 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 22:42:03.630574 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Nov 12 22:42:03.667450 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 22:42:03.681766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 22:42:03.751452 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 22:42:03.764747 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 22:42:03.779217 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 22:42:03.781460 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 22:42:03.784820 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 22:42:03.788828 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 22:42:03.792239 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 22:42:03.795605 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 12 22:42:03.828654 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 22:42:03.828843 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 22:42:03.828860 kernel: AES CTR mode by8 optimization enabled
Nov 12 22:42:03.828882 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 22:42:03.828897 kernel: libata version 3.00 loaded.
Nov 12 22:42:03.828912 kernel: GPT:9289727 != 19775487
Nov 12 22:42:03.828926 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 22:42:03.828940 kernel: GPT:9289727 != 19775487
Nov 12 22:42:03.828954 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 22:42:03.828967 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 22:42:03.797977 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 22:42:03.817602 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 22:42:03.827752 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 22:42:03.842535 kernel: ahci 0000:00:1f.2: version 3.0
Nov 12 22:42:03.874571 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 12 22:42:03.874589 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 12 22:42:03.874743 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 12 22:42:03.874881 kernel: BTRFS: device fsid d498af32-b44b-4318-a942-3a646ccb9d0a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (461)
Nov 12 22:42:03.874901 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (476)
Nov 12 22:42:03.874914 kernel: scsi host0: ahci
Nov 12 22:42:03.875123 kernel: scsi host1: ahci
Nov 12 22:42:03.875315 kernel: scsi host2: ahci
Nov 12 22:42:03.875511 kernel: scsi host3: ahci
Nov 12 22:42:03.875713 kernel: scsi host4: ahci
Nov 12 22:42:03.875915 kernel: scsi host5: ahci
Nov 12 22:42:03.876099 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Nov 12 22:42:03.876116 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Nov 12 22:42:03.876131 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Nov 12 22:42:03.876150 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Nov 12 22:42:03.876165 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Nov 12 22:42:03.876179 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Nov 12 22:42:03.827883 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 22:42:03.830641 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 22:42:03.832918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 22:42:03.833081 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 22:42:03.837734 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 22:42:03.845854 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 22:42:03.873491 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 22:42:03.900488 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 22:42:03.927295 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 22:42:03.929951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 22:42:03.936485 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 22:42:03.939033 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 22:42:04.052599 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 22:42:04.055312 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 22:42:04.065932 disk-uuid[570]: Primary Header is updated.
Nov 12 22:42:04.065932 disk-uuid[570]: Secondary Entries is updated.
Nov 12 22:42:04.065932 disk-uuid[570]: Secondary Header is updated.
Nov 12 22:42:04.069606 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 22:42:04.081124 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 22:42:04.186609 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 12 22:42:04.186702 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 12 22:42:04.187586 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 12 22:42:04.188603 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 12 22:42:04.189590 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 12 22:42:04.190591 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 12 22:42:04.191596 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 12 22:42:04.191621 kernel: ata3.00: applying bridge limits
Nov 12 22:42:04.192614 kernel: ata3.00: configured for UDMA/100
Nov 12 22:42:04.194591 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 12 22:42:04.234626 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 12 22:42:04.249801 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 22:42:04.249827 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 12 22:42:05.078164 disk-uuid[572]: The operation has completed successfully.
Nov 12 22:42:05.079510 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 22:42:05.111815 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 22:42:05.111971 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 22:42:05.141969 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 22:42:05.146435 sh[596]: Success
Nov 12 22:42:05.161591 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 12 22:42:05.202624 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 22:42:05.221689 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 22:42:05.224494 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 22:42:05.238601 kernel: BTRFS info (device dm-0): first mount of filesystem d498af32-b44b-4318-a942-3a646ccb9d0a
Nov 12 22:42:05.238668 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 22:42:05.238680 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 22:42:05.238690 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 22:42:05.239920 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 22:42:05.244326 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 22:42:05.247071 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 22:42:05.260932 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 22:42:05.264252 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 22:42:05.273068 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 12 22:42:05.273110 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 22:42:05.273124 kernel: BTRFS info (device vda6): using free space tree
Nov 12 22:42:05.275922 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 22:42:05.287735 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 22:42:05.290576 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 12 22:42:05.303672 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 22:42:05.310781 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 22:42:05.443275 ignition[692]: Ignition 2.20.0
Nov 12 22:42:05.443733 ignition[692]: Stage: fetch-offline
Nov 12 22:42:05.443777 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Nov 12 22:42:05.443786 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:42:05.443905 ignition[692]: parsed url from cmdline: ""
Nov 12 22:42:05.443909 ignition[692]: no config URL provided
Nov 12 22:42:05.443915 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 22:42:05.443924 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Nov 12 22:42:05.443957 ignition[692]: op(1): [started] loading QEMU firmware config module
Nov 12 22:42:05.443963 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 22:42:05.453780 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 22:42:05.459163 ignition[692]: op(1): [finished] loading QEMU firmware config module
Nov 12 22:42:05.469836 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 22:42:05.491781 systemd-networkd[784]: lo: Link UP
Nov 12 22:42:05.491794 systemd-networkd[784]: lo: Gained carrier
Nov 12 22:42:05.493425 systemd-networkd[784]: Enumeration completed
Nov 12 22:42:05.494002 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 22:42:05.494006 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 22:42:05.495010 systemd-networkd[784]: eth0: Link UP
Nov 12 22:42:05.495014 systemd-networkd[784]: eth0: Gained carrier
Nov 12 22:42:05.495021 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 22:42:05.495623 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 22:42:05.497373 systemd[1]: Reached target network.target - Network.
Nov 12 22:42:05.513884 ignition[692]: parsing config with SHA512: 347ce6673a1a94a54e1df2dbbf641b38770e58ef11103df5fb7d0d5b4db8f054fb31129bbfa42ab9b8e3e49780a666a1552a7e6de0ab3c596678052631e84947
Nov 12 22:42:05.521661 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 22:42:05.523227 unknown[692]: fetched base config from "system"
Nov 12 22:42:05.523244 unknown[692]: fetched user config from "qemu"
Nov 12 22:42:05.529368 ignition[692]: fetch-offline: fetch-offline passed
Nov 12 22:42:05.529584 ignition[692]: Ignition finished successfully
Nov 12 22:42:05.533916 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 22:42:05.534330 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 22:42:05.538916 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 22:42:05.816519 ignition[788]: Ignition 2.20.0
Nov 12 22:42:05.816535 ignition[788]: Stage: kargs
Nov 12 22:42:05.816869 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Nov 12 22:42:05.816886 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:42:05.821765 ignition[788]: kargs: kargs passed
Nov 12 22:42:05.822587 ignition[788]: Ignition finished successfully
Nov 12 22:42:05.827517 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 22:42:05.841058 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 22:42:05.871374 ignition[797]: Ignition 2.20.0
Nov 12 22:42:05.871389 ignition[797]: Stage: disks
Nov 12 22:42:05.871719 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Nov 12 22:42:05.871734 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:42:05.877190 ignition[797]: disks: disks passed
Nov 12 22:42:05.877309 ignition[797]: Ignition finished successfully
Nov 12 22:42:05.881802 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 22:42:05.884637 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 22:42:05.888378 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 22:42:05.891931 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 22:42:05.894845 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 22:42:05.897549 systemd[1]: Reached target basic.target - Basic System.
Nov 12 22:42:05.911068 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 22:42:05.932739 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 22:42:05.943673 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 22:42:05.955752 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 22:42:06.057994 kernel: EXT4-fs (vda9): mounted filesystem 62325592-ead9-4e81-b706-99baa0cf9fff r/w with ordered data mode. Quota mode: none.
Nov 12 22:42:06.059205 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 22:42:06.060279 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 22:42:06.075874 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 22:42:06.078196 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 22:42:06.080672 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 22:42:06.080740 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 22:42:06.090623 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815)
Nov 12 22:42:06.090652 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 12 22:42:06.090667 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 22:42:06.090681 kernel: BTRFS info (device vda6): using free space tree
Nov 12 22:42:06.080778 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 22:42:06.094678 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 22:42:06.096978 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 22:42:06.113882 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 22:42:06.116254 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 22:42:06.158821 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 22:42:06.164333 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Nov 12 22:42:06.169974 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 22:42:06.173809 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 22:42:06.259695 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 22:42:06.267645 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 22:42:06.269356 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 22:42:06.276647 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 22:42:06.278217 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 12 22:42:06.295383 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 22:42:06.316589 ignition[928]: INFO : Ignition 2.20.0
Nov 12 22:42:06.316589 ignition[928]: INFO : Stage: mount
Nov 12 22:42:06.318479 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 22:42:06.318479 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:42:06.318479 ignition[928]: INFO : mount: mount passed
Nov 12 22:42:06.318479 ignition[928]: INFO : Ignition finished successfully
Nov 12 22:42:06.320187 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 22:42:06.332751 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 22:42:06.340425 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 22:42:06.352585 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
Nov 12 22:42:06.354865 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 12 22:42:06.354889 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 22:42:06.354915 kernel: BTRFS info (device vda6): using free space tree
Nov 12 22:42:06.358593 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 22:42:06.361096 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 22:42:06.390913 ignition[959]: INFO : Ignition 2.20.0
Nov 12 22:42:06.390913 ignition[959]: INFO : Stage: files
Nov 12 22:42:06.393302 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 22:42:06.393302 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:42:06.393302 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 22:42:06.393302 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 22:42:06.393302 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 22:42:06.400673 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 22:42:06.400673 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 22:42:06.400673 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 22:42:06.400673 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 22:42:06.400673 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 22:42:06.395639 unknown[959]: wrote ssh authorized keys file for user: core
Nov 12 22:42:06.439119 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 22:42:06.535197 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 22:42:06.535197 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 12 22:42:06.540078 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 12 22:42:06.810854 systemd-networkd[784]: eth0: Gained IPv6LL
Nov 12 22:42:07.032948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 12 22:42:07.245077 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 12 22:42:07.245077 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 22:42:07.249326 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 22:42:07.251334 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 22:42:07.253667 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 22:42:07.255712 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 22:42:07.257877 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 22:42:07.259956 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 22:42:07.262141 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 22:42:07.264545 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 22:42:07.266804 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 22:42:07.268841 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 22:42:07.272081 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 22:42:07.275033 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 22:42:07.277762 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Nov 12 22:42:07.703463 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 12 22:42:08.603105 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 22:42:08.603105 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 12 22:42:08.607421 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 22:42:08.609727 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 22:42:08.609727 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 12 22:42:08.609727 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 12 22:42:08.609727 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 22:42:08.609727 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 22:42:08.609727 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 12 22:42:08.609727 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 12 22:42:08.653204 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 22:42:08.658546 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 22:42:08.660444 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 12 22:42:08.660444 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 22:42:08.663713 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 22:42:08.665438 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 22:42:08.667548
ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:42:08.669495 ignition[959]: INFO : files: files passed Nov 12 22:42:08.670375 ignition[959]: INFO : Ignition finished successfully Nov 12 22:42:08.674386 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 22:42:08.683751 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 22:42:08.685731 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 22:42:08.692916 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 22:42:08.694365 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 22:42:08.696383 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 22:42:08.699413 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:42:08.699413 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:42:08.704306 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:42:08.702789 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:42:08.704510 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 22:42:08.715714 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 22:42:08.750109 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 22:42:08.750272 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 22:42:08.752953 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 22:42:08.755187 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 22:42:08.757539 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 22:42:08.767812 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 22:42:08.784299 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:42:08.794781 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 22:42:08.807366 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:42:08.810328 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:42:08.813348 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 22:42:08.815645 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 22:42:08.816912 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:42:08.820114 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 22:42:08.822698 systemd[1]: Stopped target basic.target - Basic System. Nov 12 22:42:08.825019 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 22:42:08.827632 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 22:42:08.830547 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 22:42:08.833373 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
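The Ignition files stage above logs each file it writes and each remote source it fetches in a fixed wording. A small sketch (standard-library Python; the regexes are keyed to the exact message format shown in this journal) that summarises those operations from a saved log:

import re
import sys

WRITE_RE = re.compile(r'\[finished\] writing file "([^"]+)"')
GET_RE = re.compile(r'GET (\S+): attempt #\d+')

def summarize_ignition_files(log_text: str) -> None:
    # Collect the paths Ignition finished writing and the URLs it fetched.
    print("files written:")
    for path in WRITE_RE.findall(log_text):
        print("  " + path)
    print("remote sources fetched:")
    for url in GET_RE.findall(log_text):
        print("  " + url)

if __name__ == "__main__":
    summarize_ignition_files(sys.stdin.read())

For orientation only: operations like those logged above (fetching archives, linking a kubernetes sysext into /etc/extensions, enabling prepare-helm.service and disabling coreos-metadata.service) would come from an Ignition config roughly shaped like the fragment below. The field names follow the Ignition v3 schema as best recalled and should be checked against the Ignition documentation before use; the paths and URL are the ones appearing in this log.

import json

config = {
    "ignition": {"version": "3.3.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
                "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw"},
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
            },
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True},
            {"name": "coreos-metadata.service", "enabled": False},
        ],
    },
}

print(json.dumps(config, indent=2))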
Nov 12 22:42:08.836008 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 22:42:08.839136 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 22:42:08.841638 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 22:42:08.844171 systemd[1]: Stopped target swap.target - Swaps. Nov 12 22:42:08.846306 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 22:42:08.847610 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 22:42:08.850447 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:42:08.853170 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:42:08.856102 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 22:42:08.857389 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:42:08.860474 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 22:42:08.861775 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 22:42:08.864692 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 22:42:08.865967 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 22:42:08.868887 systemd[1]: Stopped target paths.target - Path Units. Nov 12 22:42:08.871018 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 22:42:08.875656 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:42:08.879014 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 22:42:08.881210 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 22:42:08.883606 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 22:42:08.884760 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 22:42:08.887187 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 22:42:08.888285 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 22:42:08.890887 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 22:42:08.892400 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:42:08.895634 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 22:42:08.896920 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 22:42:08.910944 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 22:42:08.913382 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 22:42:08.914731 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:42:08.918822 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 22:42:08.921057 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 22:42:08.922475 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:42:08.926949 ignition[1014]: INFO : Ignition 2.20.0 Nov 12 22:42:08.926949 ignition[1014]: INFO : Stage: umount Nov 12 22:42:08.927569 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:42:08.927569 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:42:08.926967 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Nov 12 22:42:08.927225 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 22:42:08.931266 ignition[1014]: INFO : umount: umount passed Nov 12 22:42:08.931266 ignition[1014]: INFO : Ignition finished successfully Nov 12 22:42:08.939929 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 22:42:08.941113 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 22:42:08.945790 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 22:42:08.946984 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 22:42:08.951088 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 22:42:08.953072 systemd[1]: Stopped target network.target - Network. Nov 12 22:42:08.955033 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 22:42:08.955996 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 22:42:08.958662 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 22:42:08.958733 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 22:42:08.961993 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 22:42:08.962055 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 22:42:08.965186 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 22:42:08.966241 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 22:42:08.968649 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 22:42:08.970909 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 22:42:08.975653 systemd-networkd[784]: eth0: DHCPv6 lease lost Nov 12 22:42:08.978403 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 22:42:08.978611 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 22:42:08.979949 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 22:42:08.980002 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:42:08.990765 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 22:42:08.991815 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 22:42:08.991909 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 22:42:08.994796 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:42:09.001088 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 22:42:09.001237 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 22:42:09.023457 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 22:42:09.023784 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:42:09.040531 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 22:42:09.040677 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 22:42:09.043116 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 22:42:09.043191 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 22:42:09.044671 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 22:42:09.044722 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:42:09.046750 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Nov 12 22:42:09.046803 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 22:42:09.048948 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 22:42:09.049004 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 22:42:09.050960 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 22:42:09.051020 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:42:09.054021 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 22:42:09.055176 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 22:42:09.055282 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:42:09.057452 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 22:42:09.057502 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 22:42:09.059460 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 22:42:09.059510 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:42:09.061771 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 22:42:09.061825 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:42:09.065101 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:42:09.065153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:42:09.067011 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 22:42:09.067116 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 22:42:09.143423 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 22:42:09.143611 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 22:42:09.145925 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 22:42:09.146607 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 22:42:09.146679 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 22:42:09.161851 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 22:42:09.169913 systemd[1]: Switching root. Nov 12 22:42:09.205814 systemd-journald[194]: Journal stopped Nov 12 22:42:10.808450 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Nov 12 22:42:10.808517 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 22:42:10.808535 kernel: SELinux: policy capability open_perms=1 Nov 12 22:42:10.808547 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 22:42:10.808577 kernel: SELinux: policy capability always_check_network=0 Nov 12 22:42:10.808593 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 22:42:10.808605 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 22:42:10.808622 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 22:42:10.808633 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 22:42:10.808645 kernel: audit: type=1403 audit(1731451329.688:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 22:42:10.808658 systemd[1]: Successfully loaded SELinux policy in 74.552ms. Nov 12 22:42:10.808680 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.452ms. 
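The journal timestamps make it easy to measure phases of the boot. A small sketch using two stamps copied from this log (the first Ignition kargs message and the "Switching root." message) to show the initrd phase took about 3.35 seconds; the prefix carries no year, so both stamps parse into the same placeholder year.

from datetime import datetime

FMT = "%b %d %H:%M:%S.%f"  # journal prefix format used throughout this log

def elapsed_seconds(start: str, end: str) -> float:
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()

# First Ignition (kargs) entry vs. "Switching root." in this log:
print(elapsed_seconds("Nov 12 22:42:05.816519", "Nov 12 22:42:09.169913"))  # ~3.35 s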
Nov 12 22:42:10.808694 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 22:42:10.808707 systemd[1]: Detected virtualization kvm. Nov 12 22:42:10.808722 systemd[1]: Detected architecture x86-64. Nov 12 22:42:10.808734 systemd[1]: Detected first boot. Nov 12 22:42:10.808746 systemd[1]: Initializing machine ID from VM UUID. Nov 12 22:42:10.808758 zram_generator::config[1058]: No configuration found. Nov 12 22:42:10.808771 systemd[1]: Populated /etc with preset unit settings. Nov 12 22:42:10.808783 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 22:42:10.808795 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 12 22:42:10.808807 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 22:42:10.808822 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 22:42:10.808835 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 22:42:10.808847 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 22:42:10.808859 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 22:42:10.808876 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 22:42:10.808888 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 22:42:10.808901 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 22:42:10.808913 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 22:42:10.808927 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:42:10.808940 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:42:10.808952 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 22:42:10.808965 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 22:42:10.808977 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 22:42:10.808990 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 22:42:10.809002 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 22:42:10.809014 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:42:10.809026 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 22:42:10.809042 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 22:42:10.809054 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 22:42:10.809066 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 22:42:10.809079 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:42:10.809092 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 22:42:10.809104 systemd[1]: Reached target slices.target - Slice Units. Nov 12 22:42:10.809117 systemd[1]: Reached target swap.target - Swaps. 
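The systemd banner above encodes compile-time options as +FEATURE / -FEATURE tokens. A throwaway sketch that splits the string (copied verbatim from this log) into enabled and disabled sets; the parsing itself is an illustration, not a systemd interface.

FEATURES = (
    "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
    "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
    "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
    "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT "
    "default-hierarchy=unified"
)

tokens = FEATURES.split()
enabled = [t[1:] for t in tokens if t.startswith("+")]
disabled = [t[1:] for t in tokens if t.startswith("-")]
other = [t for t in tokens if t[0] not in "+-"]
print(len(enabled), "enabled;", len(disabled), "disabled;", "other:", other)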
Nov 12 22:42:10.809129 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 22:42:10.809143 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 22:42:10.809155 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:42:10.809168 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 22:42:10.809180 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:42:10.809192 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 22:42:10.809204 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 22:42:10.809216 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 22:42:10.809229 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 22:42:10.809241 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:42:10.809256 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 22:42:10.809268 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 22:42:10.809280 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 22:42:10.809301 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 22:42:10.809313 systemd[1]: Reached target machines.target - Containers. Nov 12 22:42:10.809325 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 22:42:10.809338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:42:10.809351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 22:42:10.809364 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 22:42:10.809379 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:42:10.809391 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 22:42:10.809404 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:42:10.809416 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 22:42:10.809428 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:42:10.809441 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 22:42:10.809454 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 22:42:10.809466 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 22:42:10.809481 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 22:42:10.809493 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 22:42:10.809505 kernel: fuse: init (API version 7.39) Nov 12 22:42:10.809517 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 22:42:10.809529 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 22:42:10.809541 kernel: loop: module loaded Nov 12 22:42:10.809553 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
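The modprobe@*.service instances being started above load configfs, dm_mod, drm, efi_pstore, fuse and loop. A small sketch (an illustration, not part of the boot) that checks which of those names appear in /proc/modules on a running system; modules built into the kernel are not listed there, so absence does not prove the feature is missing.

WANTED = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}

def loaded_modules(path: str = "/proc/modules") -> set:
    # Each line of /proc/modules starts with the module name.
    with open(path) as f:
        return {line.split()[0] for line in f if line.strip()}

if __name__ == "__main__":
    mods = loaded_modules()
    print("listed in /proc/modules:", sorted(WANTED & mods))
    print("not listed (possibly built in):", sorted(WANTED - mods))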
Nov 12 22:42:10.809575 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 22:42:10.809588 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 22:42:10.809603 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 22:42:10.809615 systemd[1]: Stopped verity-setup.service. Nov 12 22:42:10.809629 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:42:10.809642 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 22:42:10.809654 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 22:42:10.809666 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 22:42:10.809679 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 22:42:10.809692 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 22:42:10.809707 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 22:42:10.809719 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 22:42:10.809731 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:42:10.809762 systemd-journald[1127]: Collecting audit messages is disabled. Nov 12 22:42:10.809786 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 22:42:10.809799 systemd-journald[1127]: Journal started Nov 12 22:42:10.809820 systemd-journald[1127]: Runtime Journal (/run/log/journal/c3901cae5719474fab4d42e88ec6a31d) is 6.0M, max 48.4M, 42.3M free. Nov 12 22:42:10.523958 systemd[1]: Queued start job for default target multi-user.target. Nov 12 22:42:10.548053 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 22:42:10.548616 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 22:42:10.811009 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 22:42:10.815577 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 22:42:10.817015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:42:10.817232 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:42:10.818981 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:42:10.819191 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:42:10.820922 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 22:42:10.821149 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 22:42:10.824305 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:42:10.824513 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:42:10.824574 kernel: ACPI: bus type drm_connector registered Nov 12 22:42:10.826306 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 22:42:10.826510 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 22:42:10.828435 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 22:42:10.830066 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 22:42:10.831918 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 22:42:10.847757 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Nov 12 22:42:10.862672 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 22:42:10.865017 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 22:42:10.866392 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 22:42:10.866424 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 22:42:10.868622 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 22:42:10.871111 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 22:42:10.875575 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 22:42:10.876888 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:42:10.879343 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 22:42:10.882500 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 22:42:10.883892 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 22:42:10.885637 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 22:42:10.888687 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 22:42:10.893019 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:42:10.896980 systemd-journald[1127]: Time spent on flushing to /var/log/journal/c3901cae5719474fab4d42e88ec6a31d is 20.387ms for 951 entries. Nov 12 22:42:10.896980 systemd-journald[1127]: System Journal (/var/log/journal/c3901cae5719474fab4d42e88ec6a31d) is 8.0M, max 195.6M, 187.6M free. Nov 12 22:42:10.958612 systemd-journald[1127]: Received client request to flush runtime journal. Nov 12 22:42:10.897716 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 22:42:10.901188 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 22:42:10.905423 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 22:42:10.909820 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 22:42:10.911822 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 22:42:10.913778 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 22:42:10.947289 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 22:42:10.963010 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 22:42:10.965644 kernel: loop0: detected capacity change from 0 to 140992 Nov 12 22:42:10.966450 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:42:10.977275 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 22:42:10.980534 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 22:42:10.993585 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 22:42:10.996738 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
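The journald line above reports 20.387 ms spent flushing 951 entries to the persistent journal, i.e. on the order of 21 microseconds per entry. A quick check with the numbers taken straight from the log:

flush_ms = 20.387   # from the systemd-journald flush message above
entries = 951
print(f"{flush_ms / entries * 1000:.1f} microseconds per entry")  # ~21.4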
Nov 12 22:42:11.001347 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 22:42:11.003217 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:42:11.014241 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 22:42:11.015949 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 22:42:11.026499 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Nov 12 22:42:11.027654 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Nov 12 22:42:11.028921 kernel: loop1: detected capacity change from 0 to 211296 Nov 12 22:42:11.029321 udevadm[1190]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 12 22:42:11.039170 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:42:11.082122 kernel: loop2: detected capacity change from 0 to 138184 Nov 12 22:42:11.143245 kernel: loop3: detected capacity change from 0 to 140992 Nov 12 22:42:11.202757 kernel: loop4: detected capacity change from 0 to 211296 Nov 12 22:42:11.248637 kernel: loop5: detected capacity change from 0 to 138184 Nov 12 22:42:11.309612 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 22:42:11.390914 (sd-merge)[1197]: Merged extensions into '/usr'. Nov 12 22:42:11.404386 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 22:42:11.404723 systemd[1]: Reloading... Nov 12 22:42:11.586983 zram_generator::config[1223]: No configuration found. Nov 12 22:42:11.821644 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:42:11.855206 ldconfig[1167]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 22:42:11.890837 systemd[1]: Reloading finished in 485 ms. Nov 12 22:42:11.952801 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 22:42:11.959700 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 22:42:11.989025 systemd[1]: Starting ensure-sysext.service... Nov 12 22:42:12.005713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 22:42:12.019803 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Nov 12 22:42:12.019970 systemd[1]: Reloading... Nov 12 22:42:12.081474 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 22:42:12.082000 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 22:42:12.083384 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 22:42:12.089187 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Nov 12 22:42:12.089393 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Nov 12 22:42:12.094526 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 12 22:42:12.094756 systemd-tmpfiles[1261]: Skipping /boot Nov 12 22:42:12.126905 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 22:42:12.127154 systemd-tmpfiles[1261]: Skipping /boot Nov 12 22:42:12.190600 zram_generator::config[1291]: No configuration found. Nov 12 22:42:12.421952 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:42:12.493081 systemd[1]: Reloading finished in 472 ms. Nov 12 22:42:12.533814 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 22:42:12.557578 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:42:12.602329 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 12 22:42:12.611685 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 22:42:12.616330 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 22:42:12.624155 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 22:42:12.634152 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:42:12.653257 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 22:42:12.660416 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:42:12.660667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:42:12.672798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:42:12.693870 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:42:12.724824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:42:12.731115 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:42:12.743276 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 22:42:12.749963 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:42:12.751545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:42:12.751827 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:42:12.753004 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Nov 12 22:42:12.770177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:42:12.770453 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:42:12.786388 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:42:12.786732 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:42:12.809011 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 22:42:12.812968 augenrules[1356]: No rules Nov 12 22:42:12.818518 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:42:12.818889 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
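A little earlier, systemd-sysext reports merging the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extensions into /usr. One way to see which extension images a particular host provides is to look at a directory such as /etc/extensions, where Ignition placed the kubernetes.raw symlink earlier in this log; the directory choice is an assumption based on this boot, and systemd-sysext itself (if available) remains the authoritative view.

import os

def list_extension_images(directory: str = "/etc/extensions") -> None:
    # Print each entry and, for symlinks, the image it points at.
    if not os.path.isdir(directory):
        print(directory, "does not exist on this host")
        return
    for name in sorted(os.listdir(directory)):
        full = os.path.join(directory, name)
        target = os.readlink(full) if os.path.islink(full) else full
        print(name, "->", target)

if __name__ == "__main__":
    list_extension_images()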
Nov 12 22:42:12.822856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:42:12.890110 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 22:42:12.905082 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:42:12.938727 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 12 22:42:12.942158 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:42:12.949833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:42:12.973594 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1364) Nov 12 22:42:12.963368 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 22:42:12.984462 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:42:13.007997 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:42:13.015670 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:42:13.029323 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 22:42:13.068790 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 22:42:13.070304 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:42:13.072166 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:42:13.072508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:42:13.135378 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 22:42:13.137769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:42:13.138055 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:42:13.144997 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:42:13.145801 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:42:13.179407 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 22:42:13.185626 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1363) Nov 12 22:42:13.187846 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 22:42:13.188483 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 22:42:13.190354 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1363) Nov 12 22:42:13.200854 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 22:42:13.208515 augenrules[1381]: /sbin/augenrules: No change Nov 12 22:42:13.238511 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 12 22:42:13.259537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 22:42:13.263314 augenrules[1428]: No rules Nov 12 22:42:13.266674 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:42:13.267032 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 12 22:42:13.274088 systemd[1]: Finished ensure-sysext.service. 
Nov 12 22:42:13.309070 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 22:42:13.314738 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 22:42:13.314893 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 22:42:13.319672 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 22:42:13.321784 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 22:42:13.328658 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 12 22:42:13.341628 kernel: ACPI: button: Power Button [PWRF] Nov 12 22:42:13.372608 systemd-resolved[1331]: Positive Trust Anchors: Nov 12 22:42:13.372634 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 22:42:13.372671 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 22:42:13.379702 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 22:42:13.389760 systemd-resolved[1331]: Defaulting to hostname 'linux'. Nov 12 22:42:13.393261 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 22:42:13.395717 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:42:13.404541 systemd-networkd[1397]: lo: Link UP Nov 12 22:42:13.404596 systemd-networkd[1397]: lo: Gained carrier Nov 12 22:42:13.413152 systemd-networkd[1397]: Enumeration completed Nov 12 22:42:13.413800 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 22:42:13.421470 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 12 22:42:13.422143 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 12 22:42:13.422547 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 12 22:42:13.415922 systemd[1]: Reached target network.target - Network. Nov 12 22:42:13.416316 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:42:13.416322 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:42:13.421544 systemd-networkd[1397]: eth0: Link UP Nov 12 22:42:13.421570 systemd-networkd[1397]: eth0: Gained carrier Nov 12 22:42:13.421614 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:42:13.436082 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
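The positive trust anchor that systemd-resolved logs above is a root-zone DS record. A tiny sketch pulling out its fields (key tag, algorithm number, digest type, digest), with the record copied verbatim from the log:

ANCHOR = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ANCHOR.split()
print({
    "owner": owner,
    "key_tag": int(key_tag),
    "algorithm": int(algorithm),      # 8 = RSA/SHA-256
    "digest_type": int(digest_type),  # 2 = SHA-256
    "digest_bits": len(digest) * 4,   # 256-bit digest
})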
Nov 12 22:42:13.456786 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 12 22:42:13.462603 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 22:42:13.486488 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 22:42:14.183079 systemd-timesyncd[1437]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 22:42:14.183153 systemd-timesyncd[1437]: Initial clock synchronization to Tue 2024-11-12 22:42:14.182886 UTC. Nov 12 22:42:14.183788 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 22:42:14.185004 systemd-resolved[1331]: Clock change detected. Flushing caches. Nov 12 22:42:14.200118 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 22:42:14.233797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:42:14.333474 kernel: kvm_amd: TSC scaling supported Nov 12 22:42:14.333588 kernel: kvm_amd: Nested Virtualization enabled Nov 12 22:42:14.333607 kernel: kvm_amd: Nested Paging enabled Nov 12 22:42:14.334789 kernel: kvm_amd: LBR virtualization supported Nov 12 22:42:14.334858 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 12 22:42:14.335375 kernel: kvm_amd: Virtual GIF supported Nov 12 22:42:14.355080 kernel: EDAC MC: Ver: 3.0.0 Nov 12 22:42:14.386163 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 22:42:14.407255 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 22:42:14.408927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:42:14.415454 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:42:14.451704 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 22:42:14.453364 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:42:14.454500 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 22:42:14.455710 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 22:42:14.457017 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 22:42:14.458596 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 22:42:14.459889 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 22:42:14.461162 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 22:42:14.462435 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 22:42:14.462474 systemd[1]: Reached target paths.target - Path Units. Nov 12 22:42:14.463457 systemd[1]: Reached target timers.target - Timer Units. Nov 12 22:42:14.465667 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 22:42:14.468752 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 22:42:14.481430 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 22:42:14.483996 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 22:42:14.485574 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
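Two details worth noting in the block above: the journal timestamps appear to step forward by roughly 0.7 s once systemd-timesyncd synchronises the clock (resolved then logs "Clock change detected"), and eth0 receives 10.0.0.46/16 with gateway 10.0.0.1 from DHCP. A quick standard-library check of what that prefix covers:

import ipaddress

iface = ipaddress.ip_interface("10.0.0.46/16")   # address from the DHCP lease above
print(iface.network)                              # 10.0.0.0/16
print(iface.network.num_addresses)                # 65536 addresses in the prefix
print(ipaddress.ip_address("10.0.0.1") in iface.network)  # gateway is on-link: True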
Nov 12 22:42:14.486736 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 22:42:14.487704 systemd[1]: Reached target basic.target - Basic System. Nov 12 22:42:14.488661 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 22:42:14.488705 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 22:42:14.489729 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 22:42:14.492192 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 22:42:14.496148 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:42:14.496219 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 22:42:14.500328 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 22:42:14.502969 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 22:42:14.504793 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 22:42:14.509102 jq[1460]: false Nov 12 22:42:14.517166 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 22:42:14.519919 dbus-daemon[1459]: [system] SELinux support is enabled Nov 12 22:42:14.519640 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 22:42:14.524227 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 22:42:14.533186 extend-filesystems[1461]: Found loop3 Nov 12 22:42:14.533186 extend-filesystems[1461]: Found loop4 Nov 12 22:42:14.533186 extend-filesystems[1461]: Found loop5 Nov 12 22:42:14.533186 extend-filesystems[1461]: Found sr0 Nov 12 22:42:14.533186 extend-filesystems[1461]: Found vda Nov 12 22:42:14.533186 extend-filesystems[1461]: Found vda1 Nov 12 22:42:14.533186 extend-filesystems[1461]: Found vda2 Nov 12 22:42:14.533186 extend-filesystems[1461]: Found vda3 Nov 12 22:42:14.533186 extend-filesystems[1461]: Found usr Nov 12 22:42:14.533186 extend-filesystems[1461]: Found vda4 Nov 12 22:42:14.533186 extend-filesystems[1461]: Found vda6 Nov 12 22:42:14.533186 extend-filesystems[1461]: Found vda7 Nov 12 22:42:14.533186 extend-filesystems[1461]: Found vda9 Nov 12 22:42:14.533186 extend-filesystems[1461]: Checking size of /dev/vda9 Nov 12 22:42:14.533542 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 22:42:14.535434 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 22:42:14.536124 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 22:42:14.549262 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 22:42:14.552218 extend-filesystems[1461]: Resized partition /dev/vda9 Nov 12 22:42:14.557184 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 22:42:14.559479 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 22:42:14.560999 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024) Nov 12 22:42:14.565376 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Nov 12 22:42:14.572500 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 22:42:14.575776 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 22:42:14.581184 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1374) Nov 12 22:42:14.581235 update_engine[1474]: I20241112 22:42:14.576397 1474 main.cc:92] Flatcar Update Engine starting Nov 12 22:42:14.576217 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 22:42:14.587079 update_engine[1474]: I20241112 22:42:14.585176 1474 update_check_scheduler.cc:74] Next update check in 9m20s Nov 12 22:42:14.576689 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 22:42:14.576959 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 22:42:14.589426 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 22:42:14.589727 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 22:42:14.596066 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 22:42:14.620171 jq[1482]: true Nov 12 22:42:14.613640 (ntainerd)[1486]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 22:42:14.621450 extend-filesystems[1481]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 22:42:14.621450 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 22:42:14.621450 extend-filesystems[1481]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 22:42:14.628631 extend-filesystems[1461]: Resized filesystem in /dev/vda9 Nov 12 22:42:14.631243 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 22:42:14.631477 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 22:42:14.631851 jq[1490]: true Nov 12 22:42:14.644876 systemd-logind[1469]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 22:42:14.644905 systemd-logind[1469]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 22:42:14.647443 systemd-logind[1469]: New seat seat0. Nov 12 22:42:14.648705 tar[1485]: linux-amd64/helm Nov 12 22:42:14.651976 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 22:42:14.658640 systemd[1]: Started update-engine.service - Update Engine. Nov 12 22:42:14.661756 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 22:42:14.661997 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 22:42:14.664288 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 22:42:14.664448 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 22:42:14.677717 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
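The online resize above grows the root filesystem from 553472 to 1864699 blocks of 4 KiB, an expansion from about 2.1 GiB to about 7.1 GiB. The arithmetic, using the numbers from the log:

BLOCK = 4096  # EXT4 block size reported by resize2fs above ("(4k) blocks")
for blocks in (553472, 1864699):
    print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")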
Nov 12 22:42:14.713175 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 22:42:14.716926 bash[1515]: Updated "/home/core/.ssh/authorized_keys" Nov 12 22:42:14.720346 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 22:42:14.726759 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 22:42:14.734552 locksmithd[1505]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 22:42:14.741973 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 22:42:14.752306 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 22:42:14.760407 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 22:42:14.760681 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 22:42:14.769363 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 22:42:14.786555 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 22:42:14.795321 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 22:42:14.798986 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 22:42:14.800567 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 22:42:14.855136 containerd[1486]: time="2024-11-12T22:42:14.854954639Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 12 22:42:14.881069 containerd[1486]: time="2024-11-12T22:42:14.878788134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:42:14.881069 containerd[1486]: time="2024-11-12T22:42:14.880705280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:42:14.881069 containerd[1486]: time="2024-11-12T22:42:14.880727061Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 22:42:14.881069 containerd[1486]: time="2024-11-12T22:42:14.880742720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 22:42:14.881069 containerd[1486]: time="2024-11-12T22:42:14.880916056Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 22:42:14.881069 containerd[1486]: time="2024-11-12T22:42:14.880936935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 22:42:14.881069 containerd[1486]: time="2024-11-12T22:42:14.881011735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:42:14.881069 containerd[1486]: time="2024-11-12T22:42:14.881023026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:42:14.881300 containerd[1486]: time="2024-11-12T22:42:14.881237729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:42:14.881300 containerd[1486]: time="2024-11-12T22:42:14.881252767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 22:42:14.881300 containerd[1486]: time="2024-11-12T22:42:14.881264770Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:42:14.881300 containerd[1486]: time="2024-11-12T22:42:14.881273476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 22:42:14.881372 containerd[1486]: time="2024-11-12T22:42:14.881363956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:42:14.881619 containerd[1486]: time="2024-11-12T22:42:14.881584069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:42:14.881768 containerd[1486]: time="2024-11-12T22:42:14.881704464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:42:14.881768 containerd[1486]: time="2024-11-12T22:42:14.881715946Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 22:42:14.881830 containerd[1486]: time="2024-11-12T22:42:14.881818649Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 22:42:14.881901 containerd[1486]: time="2024-11-12T22:42:14.881873241Z" level=info msg="metadata content store policy set" policy=shared Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887171279Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887222885Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887237773Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887253483Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887266848Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887401080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887655437Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887807261Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887824173Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887845653Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887863146Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887880128Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887894585Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 22:42:14.890003 containerd[1486]: time="2024-11-12T22:42:14.887910736Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.887930412Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.887944940Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.887958976Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.887972802Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.887994312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.888008559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.888021593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.888035900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.888102074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.888120960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.888144584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.888165523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.888181403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Nov 12 22:42:14.890302 containerd[1486]: time="2024-11-12T22:42:14.888197934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.888211610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.888226758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.888242348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.888258508Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.888279728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.888295046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.888308963Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.888988156Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.889008665Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.889020577Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.889032119Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.889041106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.889065331Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 22:42:14.890559 containerd[1486]: time="2024-11-12T22:42:14.889075801Z" level=info msg="NRI interface is disabled by configuration." Nov 12 22:42:14.890820 containerd[1486]: time="2024-11-12T22:42:14.889085559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 12 22:42:14.890841 containerd[1486]: time="2024-11-12T22:42:14.889338263Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 22:42:14.890841 containerd[1486]: time="2024-11-12T22:42:14.889590216Z" level=info msg="Connect containerd service" Nov 12 22:42:14.890841 containerd[1486]: time="2024-11-12T22:42:14.890074985Z" level=info msg="using legacy CRI server" Nov 12 22:42:14.890841 containerd[1486]: time="2024-11-12T22:42:14.890087669Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 22:42:14.890841 containerd[1486]: time="2024-11-12T22:42:14.890188789Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 22:42:14.891064 containerd[1486]: time="2024-11-12T22:42:14.890848175Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 22:42:14.891164 
containerd[1486]: time="2024-11-12T22:42:14.891114405Z" level=info msg="Start subscribing containerd event" Nov 12 22:42:14.891188 containerd[1486]: time="2024-11-12T22:42:14.891167645Z" level=info msg="Start recovering state" Nov 12 22:42:14.891244 containerd[1486]: time="2024-11-12T22:42:14.891175610Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 22:42:14.891318 containerd[1486]: time="2024-11-12T22:42:14.891273193Z" level=info msg="Start event monitor" Nov 12 22:42:14.891345 containerd[1486]: time="2024-11-12T22:42:14.891316173Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 22:42:14.891345 containerd[1486]: time="2024-11-12T22:42:14.891332724Z" level=info msg="Start snapshots syncer" Nov 12 22:42:14.891388 containerd[1486]: time="2024-11-12T22:42:14.891345498Z" level=info msg="Start cni network conf syncer for default" Nov 12 22:42:14.891388 containerd[1486]: time="2024-11-12T22:42:14.891355467Z" level=info msg="Start streaming server" Nov 12 22:42:14.891426 containerd[1486]: time="2024-11-12T22:42:14.891417153Z" level=info msg="containerd successfully booted in 0.038498s" Nov 12 22:42:14.891721 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 22:42:15.096599 tar[1485]: linux-amd64/LICENSE Nov 12 22:42:15.096599 tar[1485]: linux-amd64/README.md Nov 12 22:42:15.116427 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 22:42:15.249179 systemd-networkd[1397]: eth0: Gained IPv6LL Nov 12 22:42:15.252350 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 22:42:15.254355 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 22:42:15.266300 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 22:42:15.269040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:42:15.271534 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 22:42:15.290365 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 22:42:15.290641 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 22:42:15.292615 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 22:42:15.296369 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 22:42:16.035946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:42:16.037923 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 22:42:16.039514 systemd[1]: Startup finished in 837ms (kernel) + 6.959s (initrd) + 5.724s (userspace) = 13.521s. Nov 12 22:42:16.053552 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:42:16.659247 kubelet[1572]: E1112 22:42:16.659149 1572 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:42:16.663843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:42:16.664040 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 22:42:16.664422 systemd[1]: kubelet.service: Consumed 1.238s CPU time. 
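The kubelet exit above (status=1, missing /var/lib/kubelet/config.yaml) is expected this early in the boot: the unit keeps restarting until kubeadm has written the node configuration, even though containerd is already serving. A hedged Python sketch of that same precondition check, using only paths that appear in the log; the helper itself is illustrative and not part of any Flatcar or kubeadm tooling:

import os

# Paths taken from the log above; the check is illustrative only.
KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"       # absent until kubeadm writes it
CONTAINERD_SOCK = "/run/containerd/containerd.sock"   # containerd reported serving here

def kubelet_should_start() -> bool:
    """Mirror the failure mode logged above: kubelet exits immediately when its
    config file is missing, even though the container runtime is already up."""
    return os.path.isfile(KUBELET_CONFIG) and os.path.exists(CONTAINERD_SOCK)

if __name__ == "__main__":
    print("kubelet prerequisites met:", kubelet_should_start())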
Nov 12 22:42:19.127504 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 22:42:19.128860 systemd[1]: Started sshd@0-10.0.0.46:22-10.0.0.1:50690.service - OpenSSH per-connection server daemon (10.0.0.1:50690). Nov 12 22:42:19.185576 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 50690 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:42:19.187853 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:19.196333 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 22:42:19.211249 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 22:42:19.213263 systemd-logind[1469]: New session 1 of user core. Nov 12 22:42:19.223626 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 22:42:19.227663 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 22:42:19.235301 (systemd)[1590]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 22:42:19.337862 systemd[1590]: Queued start job for default target default.target. Nov 12 22:42:19.346323 systemd[1590]: Created slice app.slice - User Application Slice. Nov 12 22:42:19.346348 systemd[1590]: Reached target paths.target - Paths. Nov 12 22:42:19.346361 systemd[1590]: Reached target timers.target - Timers. Nov 12 22:42:19.347832 systemd[1590]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 22:42:19.359271 systemd[1590]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 22:42:19.359388 systemd[1590]: Reached target sockets.target - Sockets. Nov 12 22:42:19.359407 systemd[1590]: Reached target basic.target - Basic System. Nov 12 22:42:19.359448 systemd[1590]: Reached target default.target - Main User Target. Nov 12 22:42:19.359482 systemd[1590]: Startup finished in 117ms. Nov 12 22:42:19.360075 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 22:42:19.374192 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 22:42:19.439849 systemd[1]: Started sshd@1-10.0.0.46:22-10.0.0.1:50702.service - OpenSSH per-connection server daemon (10.0.0.1:50702). Nov 12 22:42:19.485225 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 50702 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:42:19.487077 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:19.491880 systemd-logind[1469]: New session 2 of user core. Nov 12 22:42:19.505251 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 22:42:19.560228 sshd[1603]: Connection closed by 10.0.0.1 port 50702 Nov 12 22:42:19.560869 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:19.575444 systemd[1]: sshd@1-10.0.0.46:22-10.0.0.1:50702.service: Deactivated successfully. Nov 12 22:42:19.577746 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 22:42:19.579828 systemd-logind[1469]: Session 2 logged out. Waiting for processes to exit. Nov 12 22:42:19.581357 systemd[1]: Started sshd@2-10.0.0.46:22-10.0.0.1:50708.service - OpenSSH per-connection server daemon (10.0.0.1:50708). Nov 12 22:42:19.582415 systemd-logind[1469]: Removed session 2. 
Nov 12 22:42:19.624694 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 50708 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:42:19.626674 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:19.631753 systemd-logind[1469]: New session 3 of user core. Nov 12 22:42:19.641196 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 22:42:19.692648 sshd[1610]: Connection closed by 10.0.0.1 port 50708 Nov 12 22:42:19.693507 sshd-session[1608]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:19.709482 systemd[1]: sshd@2-10.0.0.46:22-10.0.0.1:50708.service: Deactivated successfully. Nov 12 22:42:19.711396 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 22:42:19.713191 systemd-logind[1469]: Session 3 logged out. Waiting for processes to exit. Nov 12 22:42:19.722325 systemd[1]: Started sshd@3-10.0.0.46:22-10.0.0.1:50712.service - OpenSSH per-connection server daemon (10.0.0.1:50712). Nov 12 22:42:19.723338 systemd-logind[1469]: Removed session 3. Nov 12 22:42:19.763149 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 50712 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:42:19.764888 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:19.773436 systemd-logind[1469]: New session 4 of user core. Nov 12 22:42:19.779224 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 22:42:19.833863 sshd[1617]: Connection closed by 10.0.0.1 port 50712 Nov 12 22:42:19.834277 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:19.846096 systemd[1]: sshd@3-10.0.0.46:22-10.0.0.1:50712.service: Deactivated successfully. Nov 12 22:42:19.848010 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 22:42:19.849686 systemd-logind[1469]: Session 4 logged out. Waiting for processes to exit. Nov 12 22:42:19.850968 systemd[1]: Started sshd@4-10.0.0.46:22-10.0.0.1:50718.service - OpenSSH per-connection server daemon (10.0.0.1:50718). Nov 12 22:42:19.851804 systemd-logind[1469]: Removed session 4. Nov 12 22:42:19.892954 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 50718 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:42:19.894796 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:19.899299 systemd-logind[1469]: New session 5 of user core. Nov 12 22:42:19.909295 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 22:42:19.971037 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 22:42:19.971412 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:42:19.987505 sudo[1625]: pam_unix(sudo:session): session closed for user root Nov 12 22:42:19.989509 sshd[1624]: Connection closed by 10.0.0.1 port 50718 Nov 12 22:42:19.989936 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:20.007130 systemd[1]: sshd@4-10.0.0.46:22-10.0.0.1:50718.service: Deactivated successfully. Nov 12 22:42:20.009088 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 22:42:20.010959 systemd-logind[1469]: Session 5 logged out. Waiting for processes to exit. Nov 12 22:42:20.012372 systemd[1]: Started sshd@5-10.0.0.46:22-10.0.0.1:50720.service - OpenSSH per-connection server daemon (10.0.0.1:50720). 
Nov 12 22:42:20.013103 systemd-logind[1469]: Removed session 5. Nov 12 22:42:20.053399 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 50720 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:42:20.054827 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:20.058769 systemd-logind[1469]: New session 6 of user core. Nov 12 22:42:20.067184 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 22:42:20.121433 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 22:42:20.121773 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:42:20.125652 sudo[1634]: pam_unix(sudo:session): session closed for user root Nov 12 22:42:20.131527 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 12 22:42:20.131848 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:42:20.155432 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 12 22:42:20.187924 augenrules[1656]: No rules Nov 12 22:42:20.190167 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:42:20.190480 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 12 22:42:20.191989 sudo[1633]: pam_unix(sudo:session): session closed for user root Nov 12 22:42:20.193838 sshd[1632]: Connection closed by 10.0.0.1 port 50720 Nov 12 22:42:20.194257 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:20.202441 systemd[1]: sshd@5-10.0.0.46:22-10.0.0.1:50720.service: Deactivated successfully. Nov 12 22:42:20.204427 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 22:42:20.206209 systemd-logind[1469]: Session 6 logged out. Waiting for processes to exit. Nov 12 22:42:20.211357 systemd[1]: Started sshd@6-10.0.0.46:22-10.0.0.1:50728.service - OpenSSH per-connection server daemon (10.0.0.1:50728). Nov 12 22:42:20.212203 systemd-logind[1469]: Removed session 6. Nov 12 22:42:20.255030 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 50728 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:42:20.256753 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:20.260745 systemd-logind[1469]: New session 7 of user core. Nov 12 22:42:20.270172 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 22:42:20.322457 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 22:42:20.322787 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:42:20.754689 (dockerd)[1687]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 22:42:20.754726 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 22:42:21.176699 dockerd[1687]: time="2024-11-12T22:42:21.176522193Z" level=info msg="Starting up" Nov 12 22:42:21.662266 dockerd[1687]: time="2024-11-12T22:42:21.662081714Z" level=info msg="Loading containers: start." 
Nov 12 22:42:21.859085 kernel: Initializing XFRM netlink socket Nov 12 22:42:21.949383 systemd-networkd[1397]: docker0: Link UP Nov 12 22:42:22.011227 dockerd[1687]: time="2024-11-12T22:42:22.010822412Z" level=info msg="Loading containers: done." Nov 12 22:42:22.065081 dockerd[1687]: time="2024-11-12T22:42:22.064940322Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 22:42:22.065367 dockerd[1687]: time="2024-11-12T22:42:22.065110691Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Nov 12 22:42:22.065367 dockerd[1687]: time="2024-11-12T22:42:22.065290178Z" level=info msg="Daemon has completed initialization" Nov 12 22:42:22.209967 dockerd[1687]: time="2024-11-12T22:42:22.209382043Z" level=info msg="API listen on /run/docker.sock" Nov 12 22:42:22.210016 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 22:42:23.783635 containerd[1486]: time="2024-11-12T22:42:23.783506691Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 22:42:24.651906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492957267.mount: Deactivated successfully. Nov 12 22:42:26.061565 containerd[1486]: time="2024-11-12T22:42:26.061483875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:26.062454 containerd[1486]: time="2024-11-12T22:42:26.062403570Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799" Nov 12 22:42:26.063745 containerd[1486]: time="2024-11-12T22:42:26.063704190Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:26.068058 containerd[1486]: time="2024-11-12T22:42:26.067962908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:26.069472 containerd[1486]: time="2024-11-12T22:42:26.069177115Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 2.285593981s" Nov 12 22:42:26.069472 containerd[1486]: time="2024-11-12T22:42:26.069227329Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 22:42:26.111688 containerd[1486]: time="2024-11-12T22:42:26.110384028Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 22:42:26.914332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 22:42:26.928212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:42:27.075860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
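dockerd reports "API listen on /run/docker.sock" above. A standard-library sketch (illustrative; the socket path comes from that log line, and GET /_ping is the Docker Engine health-check endpoint) that confirms the daemon is answering on the socket:

import socket

DOCKER_SOCK = "/run/docker.sock"  # path from the dockerd record above

def docker_ping() -> str:
    """Send a raw HTTP GET /_ping over the Unix socket and return the response."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(DOCKER_SOCK)
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    print(docker_ping())  # expect HTTP 200 with body "OK" when dockerd is healthy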
Nov 12 22:42:27.080546 (kubelet)[1960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:42:27.195625 kubelet[1960]: E1112 22:42:27.195460 1960 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:42:27.203583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:42:27.203825 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 22:42:28.897407 containerd[1486]: time="2024-11-12T22:42:28.897331965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:28.898344 containerd[1486]: time="2024-11-12T22:42:28.898245319Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299" Nov 12 22:42:28.899907 containerd[1486]: time="2024-11-12T22:42:28.899863925Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:28.903460 containerd[1486]: time="2024-11-12T22:42:28.903415316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:28.907075 containerd[1486]: time="2024-11-12T22:42:28.904519207Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 2.794078502s" Nov 12 22:42:28.907075 containerd[1486]: time="2024-11-12T22:42:28.904912855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 22:42:28.958411 containerd[1486]: time="2024-11-12T22:42:28.958348806Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 22:42:30.681588 containerd[1486]: time="2024-11-12T22:42:30.681508030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:30.682215 containerd[1486]: time="2024-11-12T22:42:30.682160433Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660" Nov 12 22:42:30.683557 containerd[1486]: time="2024-11-12T22:42:30.683525404Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:30.686437 containerd[1486]: time="2024-11-12T22:42:30.686385779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Nov 12 22:42:30.688471 containerd[1486]: time="2024-11-12T22:42:30.688427889Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.730034069s" Nov 12 22:42:30.688471 containerd[1486]: time="2024-11-12T22:42:30.688466111Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 12 22:42:30.713065 containerd[1486]: time="2024-11-12T22:42:30.713020949Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 22:42:32.544689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838401705.mount: Deactivated successfully. Nov 12 22:42:33.447284 containerd[1486]: time="2024-11-12T22:42:33.447194747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:33.448131 containerd[1486]: time="2024-11-12T22:42:33.448077784Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816" Nov 12 22:42:33.449427 containerd[1486]: time="2024-11-12T22:42:33.449377973Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:33.451552 containerd[1486]: time="2024-11-12T22:42:33.451485325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:33.452279 containerd[1486]: time="2024-11-12T22:42:33.452221206Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 2.739131788s" Nov 12 22:42:33.452279 containerd[1486]: time="2024-11-12T22:42:33.452253667Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 12 22:42:33.483733 containerd[1486]: time="2024-11-12T22:42:33.483666859Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 22:42:34.096912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267896938.mount: Deactivated successfully. 
Nov 12 22:42:34.842671 containerd[1486]: time="2024-11-12T22:42:34.842561537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:34.844343 containerd[1486]: time="2024-11-12T22:42:34.844300018Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 22:42:34.845524 containerd[1486]: time="2024-11-12T22:42:34.845479500Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:34.851748 containerd[1486]: time="2024-11-12T22:42:34.851679439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:34.852674 containerd[1486]: time="2024-11-12T22:42:34.852632166Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.368926355s" Nov 12 22:42:34.852674 containerd[1486]: time="2024-11-12T22:42:34.852667933Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 22:42:34.880019 containerd[1486]: time="2024-11-12T22:42:34.879962079Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 22:42:35.394193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800772818.mount: Deactivated successfully. 
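Each "Pulled image ... in <duration>" record pairs an image size in bytes with the wall-clock pull time, so effective pull bandwidth can be read straight off the log. A small sketch using the coredns numbers from the record above (size and duration copied verbatim; the throughput figure is just that division):

# Size (bytes) and duration (seconds) copied from the coredns pull record above.
size_bytes = 18_182_961
duration_s = 1.368926355

mib_per_s = size_bytes / duration_s / 2**20
print(f"effective pull rate: {mib_per_s:.1f} MiB/s")  # ~12.7 MiB/s for this pull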
Nov 12 22:42:35.403014 containerd[1486]: time="2024-11-12T22:42:35.402927263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:35.403781 containerd[1486]: time="2024-11-12T22:42:35.403731441Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 12 22:42:35.405102 containerd[1486]: time="2024-11-12T22:42:35.405040236Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:35.407604 containerd[1486]: time="2024-11-12T22:42:35.407574480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:35.408567 containerd[1486]: time="2024-11-12T22:42:35.408529531Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 528.513721ms" Nov 12 22:42:35.408655 containerd[1486]: time="2024-11-12T22:42:35.408571900Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 22:42:35.436088 containerd[1486]: time="2024-11-12T22:42:35.435747294Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 22:42:36.027410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1681684291.mount: Deactivated successfully. Nov 12 22:42:37.455741 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 22:42:37.466356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:42:38.272794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:42:38.277371 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:42:38.317838 kubelet[2120]: E1112 22:42:38.317756 2120 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:42:38.322752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:42:38.322964 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 22:42:38.445964 containerd[1486]: time="2024-11-12T22:42:38.445901382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:38.446831 containerd[1486]: time="2024-11-12T22:42:38.446742129Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Nov 12 22:42:38.448073 containerd[1486]: time="2024-11-12T22:42:38.448016139Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:38.451318 containerd[1486]: time="2024-11-12T22:42:38.451279991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:42:38.452380 containerd[1486]: time="2024-11-12T22:42:38.452351180Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.016552841s" Nov 12 22:42:38.452435 containerd[1486]: time="2024-11-12T22:42:38.452381427Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 22:42:40.879479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:42:40.890283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:42:40.912148 systemd[1]: Reloading requested from client PID 2211 ('systemctl') (unit session-7.scope)... Nov 12 22:42:40.912163 systemd[1]: Reloading... Nov 12 22:42:40.999099 zram_generator::config[2250]: No configuration found. Nov 12 22:42:41.215369 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:42:41.311208 systemd[1]: Reloading finished in 398 ms. Nov 12 22:42:41.385286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:42:41.387863 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 22:42:41.388221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:42:41.390379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:42:41.548567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:42:41.555235 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:42:41.607905 kubelet[2300]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:42:41.607905 kubelet[2300]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Nov 12 22:42:41.607905 kubelet[2300]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:42:41.608514 kubelet[2300]: I1112 22:42:41.607963 2300 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 22:42:41.778113 kubelet[2300]: I1112 22:42:41.778067 2300 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 22:42:41.778113 kubelet[2300]: I1112 22:42:41.778101 2300 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 22:42:41.778321 kubelet[2300]: I1112 22:42:41.778315 2300 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 22:42:41.796128 kubelet[2300]: E1112 22:42:41.796078 2300 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:41.797026 kubelet[2300]: I1112 22:42:41.796976 2300 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:42:41.808927 kubelet[2300]: I1112 22:42:41.808788 2300 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 22:42:41.809144 kubelet[2300]: I1112 22:42:41.809119 2300 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 22:42:41.809347 kubelet[2300]: I1112 22:42:41.809320 2300 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 22:42:41.809514 kubelet[2300]: I1112 22:42:41.809355 2300 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 22:42:41.809514 kubelet[2300]: I1112 22:42:41.809368 2300 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 
22:42:41.809514 kubelet[2300]: I1112 22:42:41.809505 2300 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:42:41.809635 kubelet[2300]: I1112 22:42:41.809622 2300 kubelet.go:396] "Attempting to sync node with API server" Nov 12 22:42:41.809672 kubelet[2300]: I1112 22:42:41.809639 2300 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 22:42:41.809708 kubelet[2300]: I1112 22:42:41.809674 2300 kubelet.go:312] "Adding apiserver pod source" Nov 12 22:42:41.809708 kubelet[2300]: I1112 22:42:41.809693 2300 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 22:42:41.811013 kubelet[2300]: W1112 22:42:41.810923 2300 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:41.811013 kubelet[2300]: E1112 22:42:41.810987 2300 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:41.811593 kubelet[2300]: I1112 22:42:41.811560 2300 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 12 22:42:41.811798 kubelet[2300]: W1112 22:42:41.811731 2300 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:41.811798 kubelet[2300]: E1112 22:42:41.811771 2300 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:41.814765 kubelet[2300]: I1112 22:42:41.814741 2300 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 22:42:41.815883 kubelet[2300]: W1112 22:42:41.815854 2300 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
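The reflector errors above ("dial tcp 10.0.0.46:6443: connect: connection refused") mean only that nothing is listening on the API server port yet; the kubelet keeps retrying until the static kube-apiserver pod it is about to admit comes up. A hedged sketch of the same reachability test, with the endpoint taken from the log and no Kubernetes client library involved:

import socket

API_HOST, API_PORT = "10.0.0.46", 6443  # endpoint from the reflector errors above

def apiserver_reachable(timeout: float = 2.0) -> bool:
    """TCP-level check only: reproduces the 'connection refused' the kubelet logs
    before the control plane is listening; says nothing about TLS or auth."""
    try:
        with socket.create_connection((API_HOST, API_PORT), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("apiserver reachable:", apiserver_reachable())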
Nov 12 22:42:41.818821 kubelet[2300]: I1112 22:42:41.816610 2300 server.go:1256] "Started kubelet" Nov 12 22:42:41.818821 kubelet[2300]: I1112 22:42:41.816662 2300 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 22:42:41.818821 kubelet[2300]: I1112 22:42:41.816683 2300 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 22:42:41.818821 kubelet[2300]: I1112 22:42:41.817360 2300 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 22:42:41.818821 kubelet[2300]: I1112 22:42:41.817807 2300 server.go:461] "Adding debug handlers to kubelet server" Nov 12 22:42:41.818821 kubelet[2300]: I1112 22:42:41.818594 2300 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 22:42:41.818821 kubelet[2300]: I1112 22:42:41.818700 2300 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 22:42:41.823404 kubelet[2300]: I1112 22:42:41.822484 2300 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 22:42:41.823404 kubelet[2300]: I1112 22:42:41.822599 2300 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 22:42:41.823404 kubelet[2300]: W1112 22:42:41.822713 2300 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:41.823404 kubelet[2300]: E1112 22:42:41.822748 2300 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 22:42:41.823404 kubelet[2300]: E1112 22:42:41.822763 2300 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:41.823404 kubelet[2300]: E1112 22:42:41.823411 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="200ms" Nov 12 22:42:41.825486 kubelet[2300]: E1112 22:42:41.825449 2300 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.46:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.46:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.180759d706f1b64d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:42:41.816589901 +0000 UTC m=+0.256609988,LastTimestamp:2024-11-12 22:42:41.816589901 +0000 UTC m=+0.256609988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 22:42:41.828150 kubelet[2300]: I1112 22:42:41.828116 2300 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 22:42:41.829735 kubelet[2300]: E1112 22:42:41.829030 2300 
kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 22:42:41.829735 kubelet[2300]: I1112 22:42:41.829119 2300 factory.go:221] Registration of the containerd container factory successfully Nov 12 22:42:41.829735 kubelet[2300]: I1112 22:42:41.829132 2300 factory.go:221] Registration of the systemd container factory successfully Nov 12 22:42:41.863850 kubelet[2300]: I1112 22:42:41.863820 2300 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 22:42:41.865111 kubelet[2300]: I1112 22:42:41.864779 2300 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 22:42:41.865111 kubelet[2300]: I1112 22:42:41.864804 2300 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 22:42:41.865111 kubelet[2300]: I1112 22:42:41.864818 2300 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:42:41.865852 kubelet[2300]: I1112 22:42:41.865823 2300 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 22:42:41.865905 kubelet[2300]: I1112 22:42:41.865890 2300 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:42:41.865962 kubelet[2300]: I1112 22:42:41.865933 2300 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 22:42:41.866034 kubelet[2300]: E1112 22:42:41.866016 2300 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:42:41.866780 kubelet[2300]: W1112 22:42:41.866734 2300 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:41.866780 kubelet[2300]: E1112 22:42:41.866781 2300 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:41.924622 kubelet[2300]: I1112 22:42:41.924580 2300 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:42:41.925131 kubelet[2300]: E1112 22:42:41.925104 2300 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Nov 12 22:42:41.966403 kubelet[2300]: E1112 22:42:41.966340 2300 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 22:42:42.024537 kubelet[2300]: E1112 22:42:42.024485 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="400ms" Nov 12 22:42:42.127536 kubelet[2300]: I1112 22:42:42.127396 2300 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:42:42.127771 kubelet[2300]: E1112 22:42:42.127742 2300 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Nov 12 22:42:42.167020 kubelet[2300]: E1112 
22:42:42.166905 2300 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 22:42:42.378588 kubelet[2300]: I1112 22:42:42.378415 2300 policy_none.go:49] "None policy: Start" Nov 12 22:42:42.380082 kubelet[2300]: I1112 22:42:42.379622 2300 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:42:42.380082 kubelet[2300]: I1112 22:42:42.379687 2300 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:42:42.389811 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 22:42:42.400556 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 22:42:42.403817 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 12 22:42:42.420347 kubelet[2300]: I1112 22:42:42.420300 2300 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 22:42:42.420750 kubelet[2300]: I1112 22:42:42.420634 2300 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 22:42:42.421891 kubelet[2300]: E1112 22:42:42.421862 2300 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 22:42:42.424995 kubelet[2300]: E1112 22:42:42.424970 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="800ms" Nov 12 22:42:42.530103 kubelet[2300]: I1112 22:42:42.530065 2300 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:42:42.530471 kubelet[2300]: E1112 22:42:42.530451 2300 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Nov 12 22:42:42.567721 kubelet[2300]: I1112 22:42:42.567659 2300 topology_manager.go:215] "Topology Admit Handler" podUID="454e88c4446f313dc42db245cb2802b2" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 22:42:42.570945 kubelet[2300]: I1112 22:42:42.570909 2300 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 22:42:42.572115 kubelet[2300]: I1112 22:42:42.572094 2300 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 22:42:42.578609 systemd[1]: Created slice kubepods-burstable-pod454e88c4446f313dc42db245cb2802b2.slice - libcontainer container kubepods-burstable-pod454e88c4446f313dc42db245cb2802b2.slice. Nov 12 22:42:42.591614 systemd[1]: Created slice kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice - libcontainer container kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice. Nov 12 22:42:42.608566 systemd[1]: Created slice kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice - libcontainer container kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice. 
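The "Failed to ensure lease exists, will retry" entries back off by doubling the retry interval on each consecutive failure (200ms, 400ms, 800ms above, then 1.6s and 3.2s further down). A small sketch of that doubling; the kubelet's actual cap is not visible in this log and none is assumed here.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Retry interval observed in the lease-controller entries: it doubles on each
	// consecutive failure (200ms, 400ms, 800ms, 1.6s, 3.2s, ...).
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
		interval *= 2
	}
}
```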
Nov 12 22:42:42.628088 kubelet[2300]: I1112 22:42:42.628014 2300 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/454e88c4446f313dc42db245cb2802b2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"454e88c4446f313dc42db245cb2802b2\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:42:42.628088 kubelet[2300]: I1112 22:42:42.628088 2300 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/454e88c4446f313dc42db245cb2802b2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"454e88c4446f313dc42db245cb2802b2\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:42:42.628527 kubelet[2300]: I1112 22:42:42.628136 2300 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:42:42.628527 kubelet[2300]: I1112 22:42:42.628172 2300 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:42:42.628527 kubelet[2300]: I1112 22:42:42.628194 2300 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 22:42:42.628527 kubelet[2300]: I1112 22:42:42.628215 2300 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/454e88c4446f313dc42db245cb2802b2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"454e88c4446f313dc42db245cb2802b2\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:42:42.628527 kubelet[2300]: I1112 22:42:42.628260 2300 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:42:42.628712 kubelet[2300]: I1112 22:42:42.628306 2300 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:42:42.628712 kubelet[2300]: I1112 22:42:42.628336 2300 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 12 22:42:42.669643 kubelet[2300]: W1112 22:42:42.669567 2300 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:42.669643 kubelet[2300]: E1112 22:42:42.669636 2300 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:42.734015 kubelet[2300]: W1112 22:42:42.733910 2300 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:42.734015 kubelet[2300]: E1112 22:42:42.734010 2300 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:42.889758 kubelet[2300]: E1112 22:42:42.889590 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:42.890744 containerd[1486]: time="2024-11-12T22:42:42.890368673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:454e88c4446f313dc42db245cb2802b2,Namespace:kube-system,Attempt:0,}" Nov 12 22:42:42.907977 kubelet[2300]: E1112 22:42:42.907901 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:42.908832 containerd[1486]: time="2024-11-12T22:42:42.908530439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}" Nov 12 22:42:42.911948 kubelet[2300]: E1112 22:42:42.911901 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:42.912412 containerd[1486]: time="2024-11-12T22:42:42.912378086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}" Nov 12 22:42:43.076688 kubelet[2300]: W1112 22:42:43.076575 2300 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:43.076688 kubelet[2300]: E1112 22:42:43.076669 2300 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:43.226367 kubelet[2300]: E1112 22:42:43.226202 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="1.6s" Nov 12 22:42:43.332413 kubelet[2300]: I1112 22:42:43.332373 2300 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:42:43.332870 kubelet[2300]: E1112 22:42:43.332838 2300 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Nov 12 22:42:43.414645 kubelet[2300]: W1112 22:42:43.414555 2300 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:43.414645 kubelet[2300]: E1112 22:42:43.414633 2300 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:43.839858 kubelet[2300]: E1112 22:42:43.839782 2300 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:44.642786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501138498.mount: Deactivated successfully. Nov 12 22:42:44.652412 containerd[1486]: time="2024-11-12T22:42:44.652347000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:42:44.655940 containerd[1486]: time="2024-11-12T22:42:44.655893212Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 22:42:44.657167 containerd[1486]: time="2024-11-12T22:42:44.657120214Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:42:44.659036 containerd[1486]: time="2024-11-12T22:42:44.658981805Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:42:44.659786 containerd[1486]: time="2024-11-12T22:42:44.659722414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 22:42:44.660763 containerd[1486]: time="2024-11-12T22:42:44.660716850Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:42:44.661741 containerd[1486]: time="2024-11-12T22:42:44.661672953Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 22:42:44.662635 containerd[1486]: time="2024-11-12T22:42:44.662588170Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:42:44.663537 containerd[1486]: time="2024-11-12T22:42:44.663502395Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.773012725s" Nov 12 22:42:44.667982 containerd[1486]: time="2024-11-12T22:42:44.667918468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.759277181s" Nov 12 22:42:44.671876 containerd[1486]: time="2024-11-12T22:42:44.671809617Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.759348224s" Nov 12 22:42:44.825258 containerd[1486]: time="2024-11-12T22:42:44.825139560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:42:44.825258 containerd[1486]: time="2024-11-12T22:42:44.822778571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:42:44.825446 containerd[1486]: time="2024-11-12T22:42:44.825254495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:42:44.827603 containerd[1486]: time="2024-11-12T22:42:44.825909364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:42:44.827603 containerd[1486]: time="2024-11-12T22:42:44.825942937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:42:44.827603 containerd[1486]: time="2024-11-12T22:42:44.825963375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:42:44.827603 containerd[1486]: time="2024-11-12T22:42:44.826138554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:42:44.827603 containerd[1486]: time="2024-11-12T22:42:44.826220618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:42:44.827778 kubelet[2300]: E1112 22:42:44.827448 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="3.2s" Nov 12 22:42:44.828688 containerd[1486]: time="2024-11-12T22:42:44.828582668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:42:44.828688 containerd[1486]: time="2024-11-12T22:42:44.828671254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:42:44.828838 containerd[1486]: time="2024-11-12T22:42:44.828800086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:42:44.829849 containerd[1486]: time="2024-11-12T22:42:44.829810060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:42:44.866270 systemd[1]: Started cri-containerd-8cf775a4e5622b7c09d6aeb084b4f20707a62c39dda370c14d4279839ec004a1.scope - libcontainer container 8cf775a4e5622b7c09d6aeb084b4f20707a62c39dda370c14d4279839ec004a1. Nov 12 22:42:44.872462 systemd[1]: Started cri-containerd-7896bfdfce09ad4550c136543116b4e1c9a6b2539d184e3eb0c8619fc0dab41c.scope - libcontainer container 7896bfdfce09ad4550c136543116b4e1c9a6b2539d184e3eb0c8619fc0dab41c. Nov 12 22:42:44.877033 systemd[1]: Started cri-containerd-b44a8ee848b1db719e41b8fc890c094b0459a836d85f094bf694a7c620bdc00c.scope - libcontainer container b44a8ee848b1db719e41b8fc890c094b0459a836d85f094bf694a7c620bdc00c. 
Nov 12 22:42:44.936360 kubelet[2300]: I1112 22:42:44.936209 2300 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:42:44.939395 kubelet[2300]: E1112 22:42:44.939369 2300 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Nov 12 22:42:44.944103 containerd[1486]: time="2024-11-12T22:42:44.943641836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cf775a4e5622b7c09d6aeb084b4f20707a62c39dda370c14d4279839ec004a1\"" Nov 12 22:42:44.944717 kubelet[2300]: E1112 22:42:44.944540 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:44.949172 containerd[1486]: time="2024-11-12T22:42:44.947080406Z" level=info msg="CreateContainer within sandbox \"8cf775a4e5622b7c09d6aeb084b4f20707a62c39dda370c14d4279839ec004a1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 22:42:44.956365 containerd[1486]: time="2024-11-12T22:42:44.956305920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7896bfdfce09ad4550c136543116b4e1c9a6b2539d184e3eb0c8619fc0dab41c\"" Nov 12 22:42:44.956536 containerd[1486]: time="2024-11-12T22:42:44.956498291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:454e88c4446f313dc42db245cb2802b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b44a8ee848b1db719e41b8fc890c094b0459a836d85f094bf694a7c620bdc00c\"" Nov 12 22:42:44.957296 kubelet[2300]: E1112 22:42:44.957271 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:44.957714 kubelet[2300]: E1112 22:42:44.957684 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:44.961596 containerd[1486]: time="2024-11-12T22:42:44.961545729Z" level=info msg="CreateContainer within sandbox \"7896bfdfce09ad4550c136543116b4e1c9a6b2539d184e3eb0c8619fc0dab41c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 22:42:44.962832 containerd[1486]: time="2024-11-12T22:42:44.962790413Z" level=info msg="CreateContainer within sandbox \"b44a8ee848b1db719e41b8fc890c094b0459a836d85f094bf694a7c620bdc00c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 22:42:45.004588 containerd[1486]: time="2024-11-12T22:42:45.004516860Z" level=info msg="CreateContainer within sandbox \"8cf775a4e5622b7c09d6aeb084b4f20707a62c39dda370c14d4279839ec004a1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"edd1bf0b5bb72fad8c0284dc1d01778fc8d02ceb115b45eebcd5fdfcfad11342\"" Nov 12 22:42:45.005246 containerd[1486]: time="2024-11-12T22:42:45.005209660Z" level=info msg="StartContainer for \"edd1bf0b5bb72fad8c0284dc1d01778fc8d02ceb115b45eebcd5fdfcfad11342\"" Nov 12 22:42:45.018972 containerd[1486]: time="2024-11-12T22:42:45.018829707Z" level=info msg="CreateContainer within sandbox 
\"7896bfdfce09ad4550c136543116b4e1c9a6b2539d184e3eb0c8619fc0dab41c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8097e5e3b0f60fb5ed72c6fee176b52113c97436d0b38971adc6029bc2d21381\"" Nov 12 22:42:45.019499 containerd[1486]: time="2024-11-12T22:42:45.019474797Z" level=info msg="StartContainer for \"8097e5e3b0f60fb5ed72c6fee176b52113c97436d0b38971adc6029bc2d21381\"" Nov 12 22:42:45.039571 systemd[1]: Started cri-containerd-edd1bf0b5bb72fad8c0284dc1d01778fc8d02ceb115b45eebcd5fdfcfad11342.scope - libcontainer container edd1bf0b5bb72fad8c0284dc1d01778fc8d02ceb115b45eebcd5fdfcfad11342. Nov 12 22:42:45.053673 containerd[1486]: time="2024-11-12T22:42:45.053629441Z" level=info msg="CreateContainer within sandbox \"b44a8ee848b1db719e41b8fc890c094b0459a836d85f094bf694a7c620bdc00c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e22f72061220612ff76f1ff3b9192f3779c75ce647be757ceaab77d65c32a62b\"" Nov 12 22:42:45.054589 containerd[1486]: time="2024-11-12T22:42:45.054557913Z" level=info msg="StartContainer for \"e22f72061220612ff76f1ff3b9192f3779c75ce647be757ceaab77d65c32a62b\"" Nov 12 22:42:45.081979 kubelet[2300]: W1112 22:42:45.081925 2300 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:45.081979 kubelet[2300]: E1112 22:42:45.081971 2300 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Nov 12 22:42:45.097398 systemd[1]: Started cri-containerd-8097e5e3b0f60fb5ed72c6fee176b52113c97436d0b38971adc6029bc2d21381.scope - libcontainer container 8097e5e3b0f60fb5ed72c6fee176b52113c97436d0b38971adc6029bc2d21381. Nov 12 22:42:45.128204 systemd[1]: Started cri-containerd-e22f72061220612ff76f1ff3b9192f3779c75ce647be757ceaab77d65c32a62b.scope - libcontainer container e22f72061220612ff76f1ff3b9192f3779c75ce647be757ceaab77d65c32a62b. 
Nov 12 22:42:45.146996 containerd[1486]: time="2024-11-12T22:42:45.146941899Z" level=info msg="StartContainer for \"edd1bf0b5bb72fad8c0284dc1d01778fc8d02ceb115b45eebcd5fdfcfad11342\" returns successfully" Nov 12 22:42:45.156705 containerd[1486]: time="2024-11-12T22:42:45.156611096Z" level=info msg="StartContainer for \"8097e5e3b0f60fb5ed72c6fee176b52113c97436d0b38971adc6029bc2d21381\" returns successfully" Nov 12 22:42:45.187523 containerd[1486]: time="2024-11-12T22:42:45.187344533Z" level=info msg="StartContainer for \"e22f72061220612ff76f1ff3b9192f3779c75ce647be757ceaab77d65c32a62b\" returns successfully" Nov 12 22:42:45.876951 kubelet[2300]: E1112 22:42:45.876918 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:45.878559 kubelet[2300]: E1112 22:42:45.878540 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:45.879999 kubelet[2300]: E1112 22:42:45.879965 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:46.813373 kubelet[2300]: I1112 22:42:46.813324 2300 apiserver.go:52] "Watching apiserver" Nov 12 22:42:46.823233 kubelet[2300]: I1112 22:42:46.823204 2300 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 22:42:46.882035 kubelet[2300]: E1112 22:42:46.881990 2300 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:47.178283 kubelet[2300]: E1112 22:42:47.178130 2300 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.180759d706f1b64d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:42:41.816589901 +0000 UTC m=+0.256609988,LastTimestamp:2024-11-12 22:42:41.816589901 +0000 UTC m=+0.256609988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 22:42:47.315881 kubelet[2300]: E1112 22:42:47.315834 2300 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.180759d707af513c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:42:41.829015868 +0000 UTC m=+0.269035955,LastTimestamp:2024-11-12 22:42:41.829015868 +0000 UTC m=+0.269035955,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 22:42:47.378401 kubelet[2300]: E1112 22:42:47.378330 2300 csi_plugin.go:300] Failed to initialize CSINode: error updating 
CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 22:42:47.534938 kubelet[2300]: E1112 22:42:47.534784 2300 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.180759d709c94856 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:42:41.864271958 +0000 UTC m=+0.304292046,LastTimestamp:2024-11-12 22:42:41.864271958 +0000 UTC m=+0.304292046,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 22:42:47.587344 kubelet[2300]: E1112 22:42:47.587303 2300 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.180759d709c95f45 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:42:41.864277829 +0000 UTC m=+0.304297917,LastTimestamp:2024-11-12 22:42:41.864277829 +0000 UTC m=+0.304297917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 22:42:47.959200 kubelet[2300]: E1112 22:42:47.959034 2300 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 22:42:48.071650 kubelet[2300]: E1112 22:42:48.071586 2300 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 22:42:48.141498 kubelet[2300]: I1112 22:42:48.141463 2300 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:42:48.149497 kubelet[2300]: I1112 22:42:48.149464 2300 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 22:42:49.654175 systemd[1]: Reloading requested from client PID 2584 ('systemctl') (unit session-7.scope)... Nov 12 22:42:49.654188 systemd[1]: Reloading... Nov 12 22:42:49.740088 zram_generator::config[2626]: No configuration found. Nov 12 22:42:49.849890 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:42:49.946701 systemd[1]: Reloading finished in 292 ms. Nov 12 22:42:49.994387 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:42:50.008717 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 22:42:50.009001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:42:50.021381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:42:50.231083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
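The recurring dns.go "Nameserver limits exceeded" warnings reflect the resolver limit the kubelet enforces when building pod resolv.conf: only the first three nameservers are applied (here "1.1.1.1 1.0.0.1 8.8.8.8") and any further entries are dropped. A small sketch of that truncation, assuming the conventional /etc/resolv.conf path:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// The libc resolver honours at most three nameservers, so the kubelet
	// truncates the list and warns, as in the dns.go entries above.
	const maxNameservers = 3

	f, err := os.Open("/etc/resolv.conf") // path assumed for illustration
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		fmt.Printf("dropping %d nameserver(s); applied line: %s\n",
			len(nameservers)-maxNameservers,
			strings.Join(nameservers[:maxNameservers], " "))
		return
	}
	fmt.Println("applied line:", strings.Join(nameservers, " "))
}
```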
Nov 12 22:42:50.237025 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:42:50.423922 kubelet[2668]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:42:50.423922 kubelet[2668]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 22:42:50.423922 kubelet[2668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:42:50.424400 kubelet[2668]: I1112 22:42:50.423986 2668 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 22:42:50.429448 kubelet[2668]: I1112 22:42:50.429409 2668 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 22:42:50.429448 kubelet[2668]: I1112 22:42:50.429433 2668 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 22:42:50.429624 kubelet[2668]: I1112 22:42:50.429605 2668 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 22:42:50.430903 kubelet[2668]: I1112 22:42:50.430882 2668 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 22:42:50.432593 kubelet[2668]: I1112 22:42:50.432524 2668 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:42:50.443341 kubelet[2668]: I1112 22:42:50.443304 2668 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 22:42:50.443771 kubelet[2668]: I1112 22:42:50.443730 2668 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 22:42:50.443980 kubelet[2668]: I1112 22:42:50.443953 2668 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 22:42:50.444097 kubelet[2668]: I1112 22:42:50.443988 2668 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 22:42:50.444097 kubelet[2668]: I1112 22:42:50.444001 2668 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 22:42:50.444097 kubelet[2668]: I1112 22:42:50.444039 2668 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:42:50.444201 kubelet[2668]: I1112 22:42:50.444179 2668 kubelet.go:396] "Attempting to sync node with API server" Nov 12 22:42:50.444201 kubelet[2668]: I1112 22:42:50.444196 2668 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 22:42:50.444274 kubelet[2668]: I1112 22:42:50.444229 2668 kubelet.go:312] "Adding apiserver pod source" Nov 12 22:42:50.444274 kubelet[2668]: I1112 22:42:50.444245 2668 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 22:42:50.445456 kubelet[2668]: I1112 22:42:50.445426 2668 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 12 22:42:50.450707 kubelet[2668]: I1112 22:42:50.448204 2668 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 22:42:50.450707 kubelet[2668]: I1112 22:42:50.448707 2668 server.go:1256] "Started kubelet" Nov 12 22:42:50.450707 kubelet[2668]: I1112 22:42:50.450549 2668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 22:42:50.452324 kubelet[2668]: I1112 22:42:50.452299 2668 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 22:42:50.452440 kubelet[2668]: I1112 22:42:50.452419 2668 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 22:42:50.452615 kubelet[2668]: I1112 22:42:50.452593 2668 reconciler_new.go:29] "Reconciler: start to sync state" Nov 
12 22:42:50.452859 kubelet[2668]: E1112 22:42:50.452823 2668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 22:42:50.453087 kubelet[2668]: I1112 22:42:50.453066 2668 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 22:42:50.454001 kubelet[2668]: I1112 22:42:50.453973 2668 server.go:461] "Adding debug handlers to kubelet server" Nov 12 22:42:50.454083 kubelet[2668]: I1112 22:42:50.454015 2668 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 22:42:50.456724 kubelet[2668]: I1112 22:42:50.456696 2668 factory.go:221] Registration of the systemd container factory successfully Nov 12 22:42:50.457038 kubelet[2668]: I1112 22:42:50.457015 2668 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 22:42:50.457304 kubelet[2668]: I1112 22:42:50.457267 2668 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 22:42:50.457304 kubelet[2668]: E1112 22:42:50.456765 2668 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 22:42:50.458541 kubelet[2668]: I1112 22:42:50.458511 2668 factory.go:221] Registration of the containerd container factory successfully Nov 12 22:42:50.475150 kubelet[2668]: I1112 22:42:50.472027 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 22:42:50.479136 kubelet[2668]: I1112 22:42:50.478526 2668 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 22:42:50.479136 kubelet[2668]: I1112 22:42:50.478572 2668 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:42:50.479136 kubelet[2668]: I1112 22:42:50.478595 2668 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 22:42:50.479136 kubelet[2668]: E1112 22:42:50.478785 2668 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:42:50.502176 kubelet[2668]: I1112 22:42:50.502073 2668 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 22:42:50.502176 kubelet[2668]: I1112 22:42:50.502096 2668 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 22:42:50.502176 kubelet[2668]: I1112 22:42:50.502115 2668 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:42:50.502370 kubelet[2668]: I1112 22:42:50.502274 2668 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 22:42:50.502370 kubelet[2668]: I1112 22:42:50.502299 2668 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 22:42:50.502370 kubelet[2668]: I1112 22:42:50.502308 2668 policy_none.go:49] "None policy: Start" Nov 12 22:42:50.504390 kubelet[2668]: I1112 22:42:50.504334 2668 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:42:50.504390 kubelet[2668]: I1112 22:42:50.504359 2668 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:42:50.504514 kubelet[2668]: I1112 22:42:50.504497 2668 state_mem.go:75] "Updated machine memory state" Nov 12 22:42:50.509249 kubelet[2668]: I1112 22:42:50.509220 2668 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 22:42:50.509459 kubelet[2668]: I1112 22:42:50.509442 2668 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 22:42:50.519586 sudo[2699]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 12 22:42:50.520078 sudo[2699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 12 22:42:50.557593 kubelet[2668]: I1112 22:42:50.557556 2668 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:42:50.570086 kubelet[2668]: I1112 22:42:50.567303 2668 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 22:42:50.570086 kubelet[2668]: I1112 22:42:50.567407 2668 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 22:42:50.579265 kubelet[2668]: I1112 22:42:50.579225 2668 topology_manager.go:215] "Topology Admit Handler" podUID="454e88c4446f313dc42db245cb2802b2" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 22:42:50.579421 kubelet[2668]: I1112 22:42:50.579328 2668 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 22:42:50.579421 kubelet[2668]: I1112 22:42:50.579389 2668 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 22:42:50.754345 kubelet[2668]: I1112 22:42:50.754191 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/454e88c4446f313dc42db245cb2802b2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"454e88c4446f313dc42db245cb2802b2\") " 
pod="kube-system/kube-apiserver-localhost" Nov 12 22:42:50.754345 kubelet[2668]: I1112 22:42:50.754261 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/454e88c4446f313dc42db245cb2802b2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"454e88c4446f313dc42db245cb2802b2\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:42:50.754345 kubelet[2668]: I1112 22:42:50.754292 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 22:42:50.754345 kubelet[2668]: I1112 22:42:50.754313 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/454e88c4446f313dc42db245cb2802b2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"454e88c4446f313dc42db245cb2802b2\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:42:50.754345 kubelet[2668]: I1112 22:42:50.754343 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:42:50.754568 kubelet[2668]: I1112 22:42:50.754367 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:42:50.754568 kubelet[2668]: I1112 22:42:50.754390 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:42:50.754568 kubelet[2668]: I1112 22:42:50.754413 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:42:50.754568 kubelet[2668]: I1112 22:42:50.754442 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:42:50.896625 kubelet[2668]: E1112 22:42:50.896468 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:50.899517 kubelet[2668]: E1112 22:42:50.898901 
2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:50.899879 kubelet[2668]: E1112 22:42:50.899862 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:51.038130 sudo[2699]: pam_unix(sudo:session): session closed for user root Nov 12 22:42:51.446177 kubelet[2668]: I1112 22:42:51.446045 2668 apiserver.go:52] "Watching apiserver" Nov 12 22:42:51.494342 kubelet[2668]: E1112 22:42:51.494093 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:51.494342 kubelet[2668]: E1112 22:42:51.494126 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:51.552984 kubelet[2668]: I1112 22:42:51.552937 2668 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 22:42:51.678333 kubelet[2668]: E1112 22:42:51.677776 2668 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 22:42:51.678333 kubelet[2668]: E1112 22:42:51.678273 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:51.847364 kubelet[2668]: I1112 22:42:51.847226 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.847160752 podStartE2EDuration="1.847160752s" podCreationTimestamp="2024-11-12 22:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:42:51.676829887 +0000 UTC m=+1.434633664" watchObservedRunningTime="2024-11-12 22:42:51.847160752 +0000 UTC m=+1.604964529" Nov 12 22:42:52.049084 kubelet[2668]: I1112 22:42:52.048804 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.048763796 podStartE2EDuration="2.048763796s" podCreationTimestamp="2024-11-12 22:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:42:51.847312611 +0000 UTC m=+1.605116389" watchObservedRunningTime="2024-11-12 22:42:52.048763796 +0000 UTC m=+1.806567573" Nov 12 22:42:52.049084 kubelet[2668]: I1112 22:42:52.048921 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.04890219 podStartE2EDuration="2.04890219s" podCreationTimestamp="2024-11-12 22:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:42:52.04855088 +0000 UTC m=+1.806354657" watchObservedRunningTime="2024-11-12 22:42:52.04890219 +0000 UTC m=+1.806705977" Nov 12 22:42:52.495653 kubelet[2668]: E1112 22:42:52.495597 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:53.454015 sudo[1667]: pam_unix(sudo:session): session closed for user root Nov 12 22:42:53.456040 sshd[1666]: Connection closed by 10.0.0.1 port 50728 Nov 12 22:42:53.473288 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:53.478252 systemd[1]: sshd@6-10.0.0.46:22-10.0.0.1:50728.service: Deactivated successfully. Nov 12 22:42:53.481189 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 22:42:53.481443 systemd[1]: session-7.scope: Consumed 5.602s CPU time, 188.1M memory peak, 0B memory swap peak. Nov 12 22:42:53.481930 systemd-logind[1469]: Session 7 logged out. Waiting for processes to exit. Nov 12 22:42:53.482880 systemd-logind[1469]: Removed session 7. Nov 12 22:42:53.496846 kubelet[2668]: E1112 22:42:53.496809 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:56.493832 kubelet[2668]: E1112 22:42:56.493789 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:56.500926 kubelet[2668]: E1112 22:42:56.500888 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:58.716647 kubelet[2668]: E1112 22:42:58.716598 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:59.505738 kubelet[2668]: E1112 22:42:59.505707 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:59.524204 update_engine[1474]: I20241112 22:42:59.524112 1474 update_attempter.cc:509] Updating boot flags... Nov 12 22:42:59.566100 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2749) Nov 12 22:42:59.611189 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2749) Nov 12 22:42:59.638569 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2749) Nov 12 22:43:03.377356 kubelet[2668]: I1112 22:43:03.377318 2668 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 22:43:03.377838 containerd[1486]: time="2024-11-12T22:43:03.377756761Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
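"Updating runtime config through cri with podcidr" corresponds to a single CRI UpdateRuntimeConfig call that pushes the node's pod CIDR to containerd, which then reports that no CNI config template is set and keeps waiting for one to be dropped in. A minimal sketch of that call, with the containerd socket path assumed:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path assumed; the log only shows the kubelet and containerd sides of the call.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// The pod CIDR logged above is handed to the runtime in one call.
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	if _, err := rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	}); err != nil {
		panic(err)
	}
	fmt.Println("pod CIDR pushed to the runtime")
}
```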
Nov 12 22:43:03.378143 kubelet[2668]: I1112 22:43:03.378066 2668 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 22:43:03.414399 kubelet[2668]: E1112 22:43:03.414323 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:04.128573 kubelet[2668]: I1112 22:43:04.126290 2668 topology_manager.go:215] "Topology Admit Handler" podUID="64c76a0a-b23a-4437-b140-f3a38e676f12" podNamespace="kube-system" podName="kube-proxy-7z5mg" Nov 12 22:43:04.165678 systemd[1]: Created slice kubepods-besteffort-pod64c76a0a_b23a_4437_b140_f3a38e676f12.slice - libcontainer container kubepods-besteffort-pod64c76a0a_b23a_4437_b140_f3a38e676f12.slice. Nov 12 22:43:04.180916 kubelet[2668]: I1112 22:43:04.180458 2668 topology_manager.go:215] "Topology Admit Handler" podUID="ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" podNamespace="kube-system" podName="cilium-tkfrf" Nov 12 22:43:04.198225 systemd[1]: Created slice kubepods-burstable-podecb89379_1e7f_4d7a_a674_bb54fae4ebd2.slice - libcontainer container kubepods-burstable-podecb89379_1e7f_4d7a_a674_bb54fae4ebd2.slice. Nov 12 22:43:04.237631 kubelet[2668]: I1112 22:43:04.236378 2668 topology_manager.go:215] "Topology Admit Handler" podUID="8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1" podNamespace="kube-system" podName="cilium-operator-5cc964979-zfj99" Nov 12 22:43:04.245909 kubelet[2668]: I1112 22:43:04.245757 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/64c76a0a-b23a-4437-b140-f3a38e676f12-kube-proxy\") pod \"kube-proxy-7z5mg\" (UID: \"64c76a0a-b23a-4437-b140-f3a38e676f12\") " pod="kube-system/kube-proxy-7z5mg" Nov 12 22:43:04.248444 kubelet[2668]: I1112 22:43:04.248276 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64c76a0a-b23a-4437-b140-f3a38e676f12-lib-modules\") pod \"kube-proxy-7z5mg\" (UID: \"64c76a0a-b23a-4437-b140-f3a38e676f12\") " pod="kube-system/kube-proxy-7z5mg" Nov 12 22:43:04.248444 kubelet[2668]: I1112 22:43:04.248319 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64c76a0a-b23a-4437-b140-f3a38e676f12-xtables-lock\") pod \"kube-proxy-7z5mg\" (UID: \"64c76a0a-b23a-4437-b140-f3a38e676f12\") " pod="kube-system/kube-proxy-7z5mg" Nov 12 22:43:04.248444 kubelet[2668]: I1112 22:43:04.248346 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjnd7\" (UniqueName: \"kubernetes.io/projected/64c76a0a-b23a-4437-b140-f3a38e676f12-kube-api-access-pjnd7\") pod \"kube-proxy-7z5mg\" (UID: \"64c76a0a-b23a-4437-b140-f3a38e676f12\") " pod="kube-system/kube-proxy-7z5mg" Nov 12 22:43:04.258208 systemd[1]: Created slice kubepods-besteffort-pod8f9a7e46_a05e_42fd_aef1_ddd1b9b389a1.slice - libcontainer container kubepods-besteffort-pod8f9a7e46_a05e_42fd_aef1_ddd1b9b389a1.slice. 
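The "Created slice kubepods-…" entries show how the kubelet, with the systemd cgroup driver, names per-pod slices: the QoS class selects the parent slice and the pod UID is embedded with dashes mapped to underscores. A small illustration that reproduces the names observed above; it is not the kubelet's own code.

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice builds the systemd slice name seen in the log for a pod of the given
// QoS class. Guaranteed pods sit directly under kubepods.slice, so they have no
// QoS segment in the name.
func podSlice(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "guaranteed" {
		return "kubepods-pod" + uid + ".slice"
	}
	return "kubepods-" + qosClass + "-pod" + uid + ".slice"
}

func main() {
	// Matches "kubepods-besteffort-pod64c76a0a_b23a_4437_b140_f3a38e676f12.slice" above.
	fmt.Println(podSlice("besteffort", "64c76a0a-b23a-4437-b140-f3a38e676f12"))
	// Matches "kubepods-burstable-podecb89379_1e7f_4d7a_a674_bb54fae4ebd2.slice".
	fmt.Println(podSlice("burstable", "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"))
}
```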
Nov 12 22:43:04.349323 kubelet[2668]: I1112 22:43:04.349224 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-lib-modules\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.349323 kubelet[2668]: I1112 22:43:04.349300 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cilium-config-path\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.349323 kubelet[2668]: I1112 22:43:04.349334 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1-cilium-config-path\") pod \"cilium-operator-5cc964979-zfj99\" (UID: \"8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1\") " pod="kube-system/cilium-operator-5cc964979-zfj99" Nov 12 22:43:04.349624 kubelet[2668]: I1112 22:43:04.349360 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-hubble-tls\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.349624 kubelet[2668]: I1112 22:43:04.349404 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-xtables-lock\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.349624 kubelet[2668]: I1112 22:43:04.349438 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-host-proc-sys-net\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.349624 kubelet[2668]: I1112 22:43:04.349472 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-hostproc\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.349624 kubelet[2668]: I1112 22:43:04.349503 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cilium-cgroup\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.349624 kubelet[2668]: I1112 22:43:04.349532 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cni-path\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.349839 kubelet[2668]: I1112 22:43:04.349560 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-etc-cni-netd\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.349839 kubelet[2668]: I1112 22:43:04.349588 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-clustermesh-secrets\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.349839 kubelet[2668]: I1112 22:43:04.349621 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzm4m\" (UniqueName: \"kubernetes.io/projected/8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1-kube-api-access-tzm4m\") pod \"cilium-operator-5cc964979-zfj99\" (UID: \"8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1\") " pod="kube-system/cilium-operator-5cc964979-zfj99" Nov 12 22:43:04.349839 kubelet[2668]: I1112 22:43:04.349653 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-bpf-maps\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.349839 kubelet[2668]: I1112 22:43:04.349688 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-host-proc-sys-kernel\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.350013 kubelet[2668]: I1112 22:43:04.349717 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2ds2\" (UniqueName: \"kubernetes.io/projected/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-kube-api-access-b2ds2\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.350013 kubelet[2668]: I1112 22:43:04.349776 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cilium-run\") pod \"cilium-tkfrf\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " pod="kube-system/cilium-tkfrf" Nov 12 22:43:04.487546 kubelet[2668]: E1112 22:43:04.487300 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:04.489859 containerd[1486]: time="2024-11-12T22:43:04.489772018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7z5mg,Uid:64c76a0a-b23a-4437-b140-f3a38e676f12,Namespace:kube-system,Attempt:0,}" Nov 12 22:43:04.505551 kubelet[2668]: E1112 22:43:04.504919 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:04.505774 containerd[1486]: time="2024-11-12T22:43:04.505633085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tkfrf,Uid:ecb89379-1e7f-4d7a-a674-bb54fae4ebd2,Namespace:kube-system,Attempt:0,}" Nov 12 22:43:04.563362 kubelet[2668]: E1112 22:43:04.562660 2668 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:04.566079 containerd[1486]: time="2024-11-12T22:43:04.565930767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-zfj99,Uid:8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1,Namespace:kube-system,Attempt:0,}" Nov 12 22:43:04.594150 containerd[1486]: time="2024-11-12T22:43:04.593612694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:43:04.594150 containerd[1486]: time="2024-11-12T22:43:04.593795449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:43:04.594150 containerd[1486]: time="2024-11-12T22:43:04.593819064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:43:04.594150 containerd[1486]: time="2024-11-12T22:43:04.593964048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:43:04.652629 containerd[1486]: time="2024-11-12T22:43:04.652131203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:43:04.652629 containerd[1486]: time="2024-11-12T22:43:04.652232965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:43:04.652629 containerd[1486]: time="2024-11-12T22:43:04.652250719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:43:04.652629 containerd[1486]: time="2024-11-12T22:43:04.652394320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:43:04.666542 systemd[1]: Started cri-containerd-1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0.scope - libcontainer container 1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0. Nov 12 22:43:04.668971 containerd[1486]: time="2024-11-12T22:43:04.668037566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:43:04.668971 containerd[1486]: time="2024-11-12T22:43:04.668325440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:43:04.668971 containerd[1486]: time="2024-11-12T22:43:04.668385804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:43:04.668971 containerd[1486]: time="2024-11-12T22:43:04.668649121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:43:04.714400 systemd[1]: Started cri-containerd-cb1e0ad92d3518197368fc3abbbef772de4011d04b7f9fccd9f3a1a2bb8a796d.scope - libcontainer container cb1e0ad92d3518197368fc3abbbef772de4011d04b7f9fccd9f3a1a2bb8a796d. Nov 12 22:43:04.721457 systemd[1]: Started cri-containerd-7c4de8e64a6f043edde7f4a76581993a8aa06c52b453d17f2280b583cbcf6755.scope - libcontainer container 7c4de8e64a6f043edde7f4a76581993a8aa06c52b453d17f2280b583cbcf6755. 
Nov 12 22:43:04.745344 containerd[1486]: time="2024-11-12T22:43:04.744907538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tkfrf,Uid:ecb89379-1e7f-4d7a-a674-bb54fae4ebd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\"" Nov 12 22:43:04.747426 kubelet[2668]: E1112 22:43:04.747383 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:04.751744 containerd[1486]: time="2024-11-12T22:43:04.751673696Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 12 22:43:04.772355 containerd[1486]: time="2024-11-12T22:43:04.772292355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7z5mg,Uid:64c76a0a-b23a-4437-b140-f3a38e676f12,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c4de8e64a6f043edde7f4a76581993a8aa06c52b453d17f2280b583cbcf6755\"" Nov 12 22:43:04.773932 kubelet[2668]: E1112 22:43:04.773900 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:04.779084 containerd[1486]: time="2024-11-12T22:43:04.779004401Z" level=info msg="CreateContainer within sandbox \"7c4de8e64a6f043edde7f4a76581993a8aa06c52b453d17f2280b583cbcf6755\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 22:43:04.796542 containerd[1486]: time="2024-11-12T22:43:04.796461604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-zfj99,Uid:8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb1e0ad92d3518197368fc3abbbef772de4011d04b7f9fccd9f3a1a2bb8a796d\"" Nov 12 22:43:04.797418 kubelet[2668]: E1112 22:43:04.797356 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:04.838393 containerd[1486]: time="2024-11-12T22:43:04.838319455Z" level=info msg="CreateContainer within sandbox \"7c4de8e64a6f043edde7f4a76581993a8aa06c52b453d17f2280b583cbcf6755\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"399a44b9e2e124df0354b5f03c2c0667f7b7334fd531de4a577584189c8c1a79\"" Nov 12 22:43:04.839523 containerd[1486]: time="2024-11-12T22:43:04.839414234Z" level=info msg="StartContainer for \"399a44b9e2e124df0354b5f03c2c0667f7b7334fd531de4a577584189c8c1a79\"" Nov 12 22:43:04.883370 systemd[1]: Started cri-containerd-399a44b9e2e124df0354b5f03c2c0667f7b7334fd531de4a577584189c8c1a79.scope - libcontainer container 399a44b9e2e124df0354b5f03c2c0667f7b7334fd531de4a577584189c8c1a79. Nov 12 22:43:04.929855 containerd[1486]: time="2024-11-12T22:43:04.929767099Z" level=info msg="StartContainer for \"399a44b9e2e124df0354b5f03c2c0667f7b7334fd531de4a577584189c8c1a79\" returns successfully" Nov 12 22:43:05.523831 kubelet[2668]: E1112 22:43:05.523762 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:12.421546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3038012841.mount: Deactivated successfully. 
Nov 12 22:43:15.324850 containerd[1486]: time="2024-11-12T22:43:15.324759486Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:43:15.325696 containerd[1486]: time="2024-11-12T22:43:15.325667846Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735339" Nov 12 22:43:15.327808 containerd[1486]: time="2024-11-12T22:43:15.327762880Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:43:15.329318 containerd[1486]: time="2024-11-12T22:43:15.329283533Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.577283711s" Nov 12 22:43:15.329318 containerd[1486]: time="2024-11-12T22:43:15.329312107Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 12 22:43:15.330109 containerd[1486]: time="2024-11-12T22:43:15.329886909Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 12 22:43:15.331639 containerd[1486]: time="2024-11-12T22:43:15.331577813Z" level=info msg="CreateContainer within sandbox \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 22:43:15.349558 containerd[1486]: time="2024-11-12T22:43:15.349483133Z" level=info msg="CreateContainer within sandbox \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6\"" Nov 12 22:43:15.350232 containerd[1486]: time="2024-11-12T22:43:15.350182289Z" level=info msg="StartContainer for \"66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6\"" Nov 12 22:43:15.387340 systemd[1]: Started cri-containerd-66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6.scope - libcontainer container 66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6. Nov 12 22:43:15.421606 containerd[1486]: time="2024-11-12T22:43:15.421428337Z" level=info msg="StartContainer for \"66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6\" returns successfully" Nov 12 22:43:15.433992 systemd[1]: cri-containerd-66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6.scope: Deactivated successfully. 
Nov 12 22:43:15.555836 kubelet[2668]: E1112 22:43:15.555804 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:15.885796 kubelet[2668]: I1112 22:43:15.885755 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7z5mg" podStartSLOduration=11.885685887 podStartE2EDuration="11.885685887s" podCreationTimestamp="2024-11-12 22:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:43:05.550812471 +0000 UTC m=+15.308616268" watchObservedRunningTime="2024-11-12 22:43:15.885685887 +0000 UTC m=+25.643489664" Nov 12 22:43:16.342472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6-rootfs.mount: Deactivated successfully. Nov 12 22:43:16.557663 kubelet[2668]: E1112 22:43:16.557626 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:16.714751 containerd[1486]: time="2024-11-12T22:43:16.709748271Z" level=info msg="shim disconnected" id=66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6 namespace=k8s.io Nov 12 22:43:16.714751 containerd[1486]: time="2024-11-12T22:43:16.709809758Z" level=warning msg="cleaning up after shim disconnected" id=66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6 namespace=k8s.io Nov 12 22:43:16.714751 containerd[1486]: time="2024-11-12T22:43:16.709820227Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:43:17.560752 kubelet[2668]: E1112 22:43:17.560710 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:17.563164 containerd[1486]: time="2024-11-12T22:43:17.563121295Z" level=info msg="CreateContainer within sandbox \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 22:43:17.588200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702921569.mount: Deactivated successfully. Nov 12 22:43:17.588677 containerd[1486]: time="2024-11-12T22:43:17.588632735Z" level=info msg="CreateContainer within sandbox \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7\"" Nov 12 22:43:17.589387 containerd[1486]: time="2024-11-12T22:43:17.589365824Z" level=info msg="StartContainer for \"1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7\"" Nov 12 22:43:17.623297 systemd[1]: Started cri-containerd-1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7.scope - libcontainer container 1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7. Nov 12 22:43:17.656217 containerd[1486]: time="2024-11-12T22:43:17.656157939Z" level=info msg="StartContainer for \"1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7\" returns successfully" Nov 12 22:43:17.668019 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 22:43:17.668819 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Nov 12 22:43:17.668978 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:43:17.674363 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:43:17.674584 systemd[1]: cri-containerd-1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7.scope: Deactivated successfully. Nov 12 22:43:17.697474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7-rootfs.mount: Deactivated successfully. Nov 12 22:43:17.709167 containerd[1486]: time="2024-11-12T22:43:17.708784386Z" level=info msg="shim disconnected" id=1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7 namespace=k8s.io Nov 12 22:43:17.709167 containerd[1486]: time="2024-11-12T22:43:17.708843126Z" level=warning msg="cleaning up after shim disconnected" id=1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7 namespace=k8s.io Nov 12 22:43:17.709167 containerd[1486]: time="2024-11-12T22:43:17.708853656Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:43:17.711652 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:43:18.564102 kubelet[2668]: E1112 22:43:18.564040 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:18.566498 containerd[1486]: time="2024-11-12T22:43:18.566447545Z" level=info msg="CreateContainer within sandbox \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 22:43:18.755646 containerd[1486]: time="2024-11-12T22:43:18.755590456Z" level=info msg="CreateContainer within sandbox \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b\"" Nov 12 22:43:18.756078 containerd[1486]: time="2024-11-12T22:43:18.756036065Z" level=info msg="StartContainer for \"dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b\"" Nov 12 22:43:18.788225 systemd[1]: Started cri-containerd-dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b.scope - libcontainer container dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b. Nov 12 22:43:18.849983 systemd[1]: cri-containerd-dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b.scope: Deactivated successfully. Nov 12 22:43:18.857839 containerd[1486]: time="2024-11-12T22:43:18.857794693Z" level=info msg="StartContainer for \"dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b\" returns successfully" Nov 12 22:43:18.878293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b-rootfs.mount: Deactivated successfully. Nov 12 22:43:18.903638 systemd[1]: Started sshd@7-10.0.0.46:22-10.0.0.1:44066.service - OpenSSH per-connection server daemon (10.0.0.1:44066). 
Nov 12 22:43:18.912455 containerd[1486]: time="2024-11-12T22:43:18.912384663Z" level=info msg="shim disconnected" id=dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b namespace=k8s.io Nov 12 22:43:18.912658 containerd[1486]: time="2024-11-12T22:43:18.912456589Z" level=warning msg="cleaning up after shim disconnected" id=dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b namespace=k8s.io Nov 12 22:43:18.912658 containerd[1486]: time="2024-11-12T22:43:18.912468522Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:43:18.961674 sshd[3246]: Accepted publickey for core from 10.0.0.1 port 44066 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:43:18.963550 sshd-session[3246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:18.968834 systemd-logind[1469]: New session 8 of user core. Nov 12 22:43:18.976277 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 22:43:19.126089 sshd[3260]: Connection closed by 10.0.0.1 port 44066 Nov 12 22:43:19.126400 sshd-session[3246]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:19.130345 systemd[1]: sshd@7-10.0.0.46:22-10.0.0.1:44066.service: Deactivated successfully. Nov 12 22:43:19.132508 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 22:43:19.133457 systemd-logind[1469]: Session 8 logged out. Waiting for processes to exit. Nov 12 22:43:19.134461 systemd-logind[1469]: Removed session 8. Nov 12 22:43:19.566949 kubelet[2668]: E1112 22:43:19.566912 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:19.569770 containerd[1486]: time="2024-11-12T22:43:19.569734337Z" level=info msg="CreateContainer within sandbox \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 22:43:19.902539 containerd[1486]: time="2024-11-12T22:43:19.902353447Z" level=info msg="CreateContainer within sandbox \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a\"" Nov 12 22:43:19.903355 containerd[1486]: time="2024-11-12T22:43:19.903258229Z" level=info msg="StartContainer for \"2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a\"" Nov 12 22:43:19.922383 containerd[1486]: time="2024-11-12T22:43:19.921681716Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:43:19.933202 systemd[1]: Started cri-containerd-2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a.scope - libcontainer container 2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a. 
Nov 12 22:43:19.934371 containerd[1486]: time="2024-11-12T22:43:19.934306806Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907185" Nov 12 22:43:19.943584 containerd[1486]: time="2024-11-12T22:43:19.943521219Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:43:19.944833 containerd[1486]: time="2024-11-12T22:43:19.944786840Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.614862551s" Nov 12 22:43:19.944892 containerd[1486]: time="2024-11-12T22:43:19.944834610Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 12 22:43:19.947338 containerd[1486]: time="2024-11-12T22:43:19.947289367Z" level=info msg="CreateContainer within sandbox \"cb1e0ad92d3518197368fc3abbbef772de4011d04b7f9fccd9f3a1a2bb8a796d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 12 22:43:19.962566 systemd[1]: cri-containerd-2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a.scope: Deactivated successfully. Nov 12 22:43:19.971412 containerd[1486]: time="2024-11-12T22:43:19.971044955Z" level=info msg="StartContainer for \"2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a\" returns successfully" Nov 12 22:43:19.980193 containerd[1486]: time="2024-11-12T22:43:19.980132951Z" level=info msg="CreateContainer within sandbox \"cb1e0ad92d3518197368fc3abbbef772de4011d04b7f9fccd9f3a1a2bb8a796d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\"" Nov 12 22:43:19.980706 containerd[1486]: time="2024-11-12T22:43:19.980662867Z" level=info msg="StartContainer for \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\"" Nov 12 22:43:20.014289 systemd[1]: Started cri-containerd-ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a.scope - libcontainer container ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a. 
Nov 12 22:43:20.573648 containerd[1486]: time="2024-11-12T22:43:20.573341360Z" level=info msg="StartContainer for \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\" returns successfully" Nov 12 22:43:20.576418 containerd[1486]: time="2024-11-12T22:43:20.576349017Z" level=info msg="shim disconnected" id=2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a namespace=k8s.io Nov 12 22:43:20.576418 containerd[1486]: time="2024-11-12T22:43:20.576401776Z" level=warning msg="cleaning up after shim disconnected" id=2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a namespace=k8s.io Nov 12 22:43:20.576418 containerd[1486]: time="2024-11-12T22:43:20.576412446Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:43:20.587862 kubelet[2668]: E1112 22:43:20.587834 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:21.591514 kubelet[2668]: E1112 22:43:21.591484 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:21.591514 kubelet[2668]: E1112 22:43:21.591512 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:21.593637 containerd[1486]: time="2024-11-12T22:43:21.593420270Z" level=info msg="CreateContainer within sandbox \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 22:43:21.618097 kubelet[2668]: I1112 22:43:21.618014 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-zfj99" podStartSLOduration=2.470842196 podStartE2EDuration="17.617931172s" podCreationTimestamp="2024-11-12 22:43:04 +0000 UTC" firstStartedPulling="2024-11-12 22:43:04.798061989 +0000 UTC m=+14.555865766" lastFinishedPulling="2024-11-12 22:43:19.945150965 +0000 UTC m=+29.702954742" observedRunningTime="2024-11-12 22:43:21.617599248 +0000 UTC m=+31.375403035" watchObservedRunningTime="2024-11-12 22:43:21.617931172 +0000 UTC m=+31.375734949" Nov 12 22:43:21.629384 containerd[1486]: time="2024-11-12T22:43:21.629329506Z" level=info msg="CreateContainer within sandbox \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\"" Nov 12 22:43:21.629799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1134844294.mount: Deactivated successfully. Nov 12 22:43:21.632720 containerd[1486]: time="2024-11-12T22:43:21.632674937Z" level=info msg="StartContainer for \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\"" Nov 12 22:43:21.684175 systemd[1]: Started cri-containerd-74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4.scope - libcontainer container 74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4. 
Nov 12 22:43:21.721078 containerd[1486]: time="2024-11-12T22:43:21.720098665Z" level=info msg="StartContainer for \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\" returns successfully" Nov 12 22:43:21.919589 kubelet[2668]: I1112 22:43:21.919449 2668 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 22:43:21.937992 kubelet[2668]: I1112 22:43:21.936794 2668 topology_manager.go:215] "Topology Admit Handler" podUID="1f7ba850-1fcf-4441-bffb-fd59e2b5fd24" podNamespace="kube-system" podName="coredns-76f75df574-r5ls2" Nov 12 22:43:21.944479 systemd[1]: Created slice kubepods-burstable-pod1f7ba850_1fcf_4441_bffb_fd59e2b5fd24.slice - libcontainer container kubepods-burstable-pod1f7ba850_1fcf_4441_bffb_fd59e2b5fd24.slice. Nov 12 22:43:21.946423 kubelet[2668]: I1112 22:43:21.946395 2668 topology_manager.go:215] "Topology Admit Handler" podUID="0c9165de-0d9a-4652-9f20-31c09d51f5bc" podNamespace="kube-system" podName="coredns-76f75df574-zrplf" Nov 12 22:43:21.952002 systemd[1]: Created slice kubepods-burstable-pod0c9165de_0d9a_4652_9f20_31c09d51f5bc.slice - libcontainer container kubepods-burstable-pod0c9165de_0d9a_4652_9f20_31c09d51f5bc.slice. Nov 12 22:43:21.975913 kubelet[2668]: I1112 22:43:21.975875 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f7ba850-1fcf-4441-bffb-fd59e2b5fd24-config-volume\") pod \"coredns-76f75df574-r5ls2\" (UID: \"1f7ba850-1fcf-4441-bffb-fd59e2b5fd24\") " pod="kube-system/coredns-76f75df574-r5ls2" Nov 12 22:43:21.975913 kubelet[2668]: I1112 22:43:21.975917 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqsf2\" (UniqueName: \"kubernetes.io/projected/1f7ba850-1fcf-4441-bffb-fd59e2b5fd24-kube-api-access-sqsf2\") pod \"coredns-76f75df574-r5ls2\" (UID: \"1f7ba850-1fcf-4441-bffb-fd59e2b5fd24\") " pod="kube-system/coredns-76f75df574-r5ls2" Nov 12 22:43:21.976149 kubelet[2668]: I1112 22:43:21.975938 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c9165de-0d9a-4652-9f20-31c09d51f5bc-config-volume\") pod \"coredns-76f75df574-zrplf\" (UID: \"0c9165de-0d9a-4652-9f20-31c09d51f5bc\") " pod="kube-system/coredns-76f75df574-zrplf" Nov 12 22:43:21.976149 kubelet[2668]: I1112 22:43:21.975955 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x84qj\" (UniqueName: \"kubernetes.io/projected/0c9165de-0d9a-4652-9f20-31c09d51f5bc-kube-api-access-x84qj\") pod \"coredns-76f75df574-zrplf\" (UID: \"0c9165de-0d9a-4652-9f20-31c09d51f5bc\") " pod="kube-system/coredns-76f75df574-zrplf" Nov 12 22:43:22.248974 kubelet[2668]: E1112 22:43:22.248935 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:22.249535 containerd[1486]: time="2024-11-12T22:43:22.249485550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r5ls2,Uid:1f7ba850-1fcf-4441-bffb-fd59e2b5fd24,Namespace:kube-system,Attempt:0,}" Nov 12 22:43:22.254234 kubelet[2668]: E1112 22:43:22.254190 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 
22:43:22.254960 containerd[1486]: time="2024-11-12T22:43:22.254926170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zrplf,Uid:0c9165de-0d9a-4652-9f20-31c09d51f5bc,Namespace:kube-system,Attempt:0,}" Nov 12 22:43:22.596006 kubelet[2668]: E1112 22:43:22.595840 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:22.596526 kubelet[2668]: E1112 22:43:22.596169 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:22.609924 kubelet[2668]: I1112 22:43:22.609879 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-tkfrf" podStartSLOduration=8.030831633 podStartE2EDuration="18.60982892s" podCreationTimestamp="2024-11-12 22:43:04 +0000 UTC" firstStartedPulling="2024-11-12 22:43:04.750651975 +0000 UTC m=+14.508455752" lastFinishedPulling="2024-11-12 22:43:15.329649262 +0000 UTC m=+25.087453039" observedRunningTime="2024-11-12 22:43:22.60898307 +0000 UTC m=+32.366786867" watchObservedRunningTime="2024-11-12 22:43:22.60982892 +0000 UTC m=+32.367632697" Nov 12 22:43:23.999900 systemd-networkd[1397]: cilium_host: Link UP Nov 12 22:43:24.000135 systemd-networkd[1397]: cilium_net: Link UP Nov 12 22:43:24.000572 systemd-networkd[1397]: cilium_net: Gained carrier Nov 12 22:43:24.000801 systemd-networkd[1397]: cilium_host: Gained carrier Nov 12 22:43:24.000975 systemd-networkd[1397]: cilium_net: Gained IPv6LL Nov 12 22:43:24.001233 systemd-networkd[1397]: cilium_host: Gained IPv6LL Nov 12 22:43:24.103458 systemd-networkd[1397]: cilium_vxlan: Link UP Nov 12 22:43:24.103468 systemd-networkd[1397]: cilium_vxlan: Gained carrier Nov 12 22:43:24.143286 systemd[1]: Started sshd@8-10.0.0.46:22-10.0.0.1:44076.service - OpenSSH per-connection server daemon (10.0.0.1:44076). Nov 12 22:43:24.189282 sshd[3620]: Accepted publickey for core from 10.0.0.1 port 44076 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:43:24.190931 sshd-session[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:24.198862 systemd-logind[1469]: New session 9 of user core. Nov 12 22:43:24.203213 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 22:43:24.326089 kernel: NET: Registered PF_ALG protocol family Nov 12 22:43:24.330975 sshd[3622]: Connection closed by 10.0.0.1 port 44076 Nov 12 22:43:24.331357 sshd-session[3620]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:24.335509 systemd[1]: sshd@8-10.0.0.46:22-10.0.0.1:44076.service: Deactivated successfully. Nov 12 22:43:24.337992 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 22:43:24.339331 systemd-logind[1469]: Session 9 logged out. Waiting for processes to exit. Nov 12 22:43:24.340664 systemd-logind[1469]: Removed session 9. 
Nov 12 22:43:24.507144 kubelet[2668]: E1112 22:43:24.507082 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:24.993847 systemd-networkd[1397]: lxc_health: Link UP Nov 12 22:43:25.007616 systemd-networkd[1397]: lxc_health: Gained carrier Nov 12 22:43:25.328481 systemd-networkd[1397]: lxcbd01916c8da4: Link UP Nov 12 22:43:25.339623 kernel: eth0: renamed from tmp20644 Nov 12 22:43:25.340191 systemd-networkd[1397]: cilium_vxlan: Gained IPv6LL Nov 12 22:43:25.351341 systemd-networkd[1397]: lxcbd01916c8da4: Gained carrier Nov 12 22:43:25.353307 systemd-networkd[1397]: lxce671d989a30d: Link UP Nov 12 22:43:25.360170 kernel: eth0: renamed from tmp094f4 Nov 12 22:43:25.365899 systemd-networkd[1397]: lxce671d989a30d: Gained carrier Nov 12 22:43:26.507111 kubelet[2668]: E1112 22:43:26.507043 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:26.605357 kubelet[2668]: E1112 22:43:26.603580 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:26.673258 systemd-networkd[1397]: lxc_health: Gained IPv6LL Nov 12 22:43:27.121507 systemd-networkd[1397]: lxcbd01916c8da4: Gained IPv6LL Nov 12 22:43:27.185259 systemd-networkd[1397]: lxce671d989a30d: Gained IPv6LL Nov 12 22:43:27.604922 kubelet[2668]: E1112 22:43:27.604889 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:28.919456 containerd[1486]: time="2024-11-12T22:43:28.919308328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:43:28.919456 containerd[1486]: time="2024-11-12T22:43:28.919385935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:43:28.919456 containerd[1486]: time="2024-11-12T22:43:28.919407094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:43:28.920000 containerd[1486]: time="2024-11-12T22:43:28.919522821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:43:28.925394 containerd[1486]: time="2024-11-12T22:43:28.924572197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:43:28.925394 containerd[1486]: time="2024-11-12T22:43:28.924625838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:43:28.925394 containerd[1486]: time="2024-11-12T22:43:28.924640826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:43:28.925394 containerd[1486]: time="2024-11-12T22:43:28.924823330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:43:28.946351 systemd[1]: Started cri-containerd-20644d080e019c01e6a2486c01d5688a86851fec3d5571986610cbf085c93b41.scope - libcontainer container 20644d080e019c01e6a2486c01d5688a86851fec3d5571986610cbf085c93b41. Nov 12 22:43:28.952789 systemd[1]: Started cri-containerd-094f4a841c47e280b37991187e4333c6b284b87faa056e4291da442d9e2e28e8.scope - libcontainer container 094f4a841c47e280b37991187e4333c6b284b87faa056e4291da442d9e2e28e8. Nov 12 22:43:28.963675 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:43:28.967264 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:43:28.999747 containerd[1486]: time="2024-11-12T22:43:28.999699833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zrplf,Uid:0c9165de-0d9a-4652-9f20-31c09d51f5bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"094f4a841c47e280b37991187e4333c6b284b87faa056e4291da442d9e2e28e8\"" Nov 12 22:43:29.000950 kubelet[2668]: E1112 22:43:29.000815 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:29.003389 containerd[1486]: time="2024-11-12T22:43:29.003353317Z" level=info msg="CreateContainer within sandbox \"094f4a841c47e280b37991187e4333c6b284b87faa056e4291da442d9e2e28e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:43:29.004522 containerd[1486]: time="2024-11-12T22:43:29.003776482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r5ls2,Uid:1f7ba850-1fcf-4441-bffb-fd59e2b5fd24,Namespace:kube-system,Attempt:0,} returns sandbox id \"20644d080e019c01e6a2486c01d5688a86851fec3d5571986610cbf085c93b41\"" Nov 12 22:43:29.004593 kubelet[2668]: E1112 22:43:29.004273 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:29.007616 containerd[1486]: time="2024-11-12T22:43:29.007556493Z" level=info msg="CreateContainer within sandbox \"20644d080e019c01e6a2486c01d5688a86851fec3d5571986610cbf085c93b41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:43:29.260178 containerd[1486]: time="2024-11-12T22:43:29.260117783Z" level=info msg="CreateContainer within sandbox \"094f4a841c47e280b37991187e4333c6b284b87faa056e4291da442d9e2e28e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"40b5975308bb8591aa863501599074fd7b7b3babbc4ec303d7b297d5440fc81a\"" Nov 12 22:43:29.261520 containerd[1486]: time="2024-11-12T22:43:29.260639493Z" level=info msg="StartContainer for \"40b5975308bb8591aa863501599074fd7b7b3babbc4ec303d7b297d5440fc81a\"" Nov 12 22:43:29.265646 containerd[1486]: time="2024-11-12T22:43:29.265596135Z" level=info msg="CreateContainer within sandbox \"20644d080e019c01e6a2486c01d5688a86851fec3d5571986610cbf085c93b41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"451aea64b96ccf7d2711efb49bb2e80b3041150201eaa69c56d728d192dbeb30\"" Nov 12 22:43:29.266268 containerd[1486]: time="2024-11-12T22:43:29.266207282Z" level=info msg="StartContainer for \"451aea64b96ccf7d2711efb49bb2e80b3041150201eaa69c56d728d192dbeb30\"" Nov 12 22:43:29.290228 systemd[1]: Started 
cri-containerd-40b5975308bb8591aa863501599074fd7b7b3babbc4ec303d7b297d5440fc81a.scope - libcontainer container 40b5975308bb8591aa863501599074fd7b7b3babbc4ec303d7b297d5440fc81a. Nov 12 22:43:29.294382 systemd[1]: Started cri-containerd-451aea64b96ccf7d2711efb49bb2e80b3041150201eaa69c56d728d192dbeb30.scope - libcontainer container 451aea64b96ccf7d2711efb49bb2e80b3041150201eaa69c56d728d192dbeb30. Nov 12 22:43:29.348088 systemd[1]: Started sshd@9-10.0.0.46:22-10.0.0.1:36938.service - OpenSSH per-connection server daemon (10.0.0.1:36938). Nov 12 22:43:29.409530 containerd[1486]: time="2024-11-12T22:43:29.409484176Z" level=info msg="StartContainer for \"40b5975308bb8591aa863501599074fd7b7b3babbc4ec303d7b297d5440fc81a\" returns successfully" Nov 12 22:43:29.410823 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 36938 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:43:29.412827 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:29.417500 systemd-logind[1469]: New session 10 of user core. Nov 12 22:43:29.419140 containerd[1486]: time="2024-11-12T22:43:29.419099950Z" level=info msg="StartContainer for \"451aea64b96ccf7d2711efb49bb2e80b3041150201eaa69c56d728d192dbeb30\" returns successfully" Nov 12 22:43:29.424228 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 22:43:29.569593 sshd[4074]: Connection closed by 10.0.0.1 port 36938 Nov 12 22:43:29.570284 sshd-session[4072]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:29.573736 systemd-logind[1469]: Session 10 logged out. Waiting for processes to exit. Nov 12 22:43:29.574159 systemd[1]: sshd@9-10.0.0.46:22-10.0.0.1:36938.service: Deactivated successfully. Nov 12 22:43:29.576933 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 22:43:29.579325 systemd-logind[1469]: Removed session 10. 
Nov 12 22:43:29.609027 kubelet[2668]: E1112 22:43:29.608828 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:29.610920 kubelet[2668]: E1112 22:43:29.610881 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:29.623144 kubelet[2668]: I1112 22:43:29.623093 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-r5ls2" podStartSLOduration=25.623043389 podStartE2EDuration="25.623043389s" podCreationTimestamp="2024-11-12 22:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:43:29.621096632 +0000 UTC m=+39.378900410" watchObservedRunningTime="2024-11-12 22:43:29.623043389 +0000 UTC m=+39.380847166" Nov 12 22:43:29.645762 kubelet[2668]: I1112 22:43:29.645719 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zrplf" podStartSLOduration=25.645678651 podStartE2EDuration="25.645678651s" podCreationTimestamp="2024-11-12 22:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:43:29.645332932 +0000 UTC m=+39.403136709" watchObservedRunningTime="2024-11-12 22:43:29.645678651 +0000 UTC m=+39.403482428" Nov 12 22:43:30.612318 kubelet[2668]: E1112 22:43:30.612284 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:30.612318 kubelet[2668]: E1112 22:43:30.612284 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:31.613960 kubelet[2668]: E1112 22:43:31.613929 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:31.613960 kubelet[2668]: E1112 22:43:31.613939 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:43:34.614583 systemd[1]: Started sshd@10-10.0.0.46:22-10.0.0.1:36942.service - OpenSSH per-connection server daemon (10.0.0.1:36942). Nov 12 22:43:34.689555 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 36942 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:43:34.695928 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:34.708507 systemd-logind[1469]: New session 11 of user core. Nov 12 22:43:34.717037 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 22:43:34.848002 sshd[4117]: Connection closed by 10.0.0.1 port 36942 Nov 12 22:43:34.848530 sshd-session[4114]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:34.852632 systemd[1]: sshd@10-10.0.0.46:22-10.0.0.1:36942.service: Deactivated successfully. Nov 12 22:43:34.854871 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 22:43:34.855501 systemd-logind[1469]: Session 11 logged out. 
Waiting for processes to exit. Nov 12 22:43:34.856451 systemd-logind[1469]: Removed session 11. Nov 12 22:43:39.906523 systemd[1]: Started sshd@11-10.0.0.46:22-10.0.0.1:42096.service - OpenSSH per-connection server daemon (10.0.0.1:42096). Nov 12 22:43:40.086782 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 42096 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:43:40.089453 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:40.099296 systemd-logind[1469]: New session 12 of user core. Nov 12 22:43:40.105126 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 22:43:40.403263 sshd[4134]: Connection closed by 10.0.0.1 port 42096 Nov 12 22:43:40.403694 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:40.416835 systemd[1]: sshd@11-10.0.0.46:22-10.0.0.1:42096.service: Deactivated successfully. Nov 12 22:43:40.419992 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 22:43:40.432205 systemd-logind[1469]: Session 12 logged out. Waiting for processes to exit. Nov 12 22:43:40.455132 systemd[1]: Started sshd@12-10.0.0.46:22-10.0.0.1:42104.service - OpenSSH per-connection server daemon (10.0.0.1:42104). Nov 12 22:43:40.457332 systemd-logind[1469]: Removed session 12. Nov 12 22:43:40.511081 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 42104 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:43:40.516011 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:40.534499 systemd-logind[1469]: New session 13 of user core. Nov 12 22:43:40.545451 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 22:43:40.899847 sshd[4149]: Connection closed by 10.0.0.1 port 42104 Nov 12 22:43:40.892352 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:40.920304 systemd[1]: sshd@12-10.0.0.46:22-10.0.0.1:42104.service: Deactivated successfully. Nov 12 22:43:40.928910 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 22:43:40.942745 systemd-logind[1469]: Session 13 logged out. Waiting for processes to exit. Nov 12 22:43:40.958993 systemd[1]: Started sshd@13-10.0.0.46:22-10.0.0.1:42120.service - OpenSSH per-connection server daemon (10.0.0.1:42120). Nov 12 22:43:40.969642 systemd-logind[1469]: Removed session 13. Nov 12 22:43:41.015807 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 42120 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:43:41.018289 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:41.033558 systemd-logind[1469]: New session 14 of user core. Nov 12 22:43:41.041384 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 22:43:41.330802 sshd[4161]: Connection closed by 10.0.0.1 port 42120 Nov 12 22:43:41.331453 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:41.348419 systemd[1]: sshd@13-10.0.0.46:22-10.0.0.1:42120.service: Deactivated successfully. Nov 12 22:43:41.356605 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 22:43:41.359814 systemd-logind[1469]: Session 14 logged out. Waiting for processes to exit. Nov 12 22:43:41.374332 systemd-logind[1469]: Removed session 14. Nov 12 22:43:46.378404 systemd[1]: Started sshd@14-10.0.0.46:22-10.0.0.1:42122.service - OpenSSH per-connection server daemon (10.0.0.1:42122). 
Nov 12 22:43:46.501142 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 42122 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:43:46.503509 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:46.524434 systemd-logind[1469]: New session 15 of user core. Nov 12 22:43:46.538537 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 22:43:46.791774 sshd[4176]: Connection closed by 10.0.0.1 port 42122 Nov 12 22:43:46.798491 sshd-session[4174]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:46.805344 systemd[1]: sshd@14-10.0.0.46:22-10.0.0.1:42122.service: Deactivated successfully. Nov 12 22:43:46.810540 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 22:43:46.823972 systemd-logind[1469]: Session 15 logged out. Waiting for processes to exit. Nov 12 22:43:46.831350 systemd-logind[1469]: Removed session 15. Nov 12 22:43:51.804209 systemd[1]: Started sshd@15-10.0.0.46:22-10.0.0.1:56092.service - OpenSSH per-connection server daemon (10.0.0.1:56092). Nov 12 22:43:51.849079 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 56092 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:43:51.850972 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:51.855135 systemd-logind[1469]: New session 16 of user core. Nov 12 22:43:51.871252 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 22:43:51.983232 sshd[4192]: Connection closed by 10.0.0.1 port 56092 Nov 12 22:43:51.983629 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:51.988516 systemd[1]: sshd@15-10.0.0.46:22-10.0.0.1:56092.service: Deactivated successfully. Nov 12 22:43:51.991024 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 22:43:51.991727 systemd-logind[1469]: Session 16 logged out. Waiting for processes to exit. Nov 12 22:43:51.992847 systemd-logind[1469]: Removed session 16. Nov 12 22:43:56.994803 systemd[1]: Started sshd@16-10.0.0.46:22-10.0.0.1:56094.service - OpenSSH per-connection server daemon (10.0.0.1:56094). Nov 12 22:43:57.035692 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 56094 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:43:57.037189 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:57.040776 systemd-logind[1469]: New session 17 of user core. Nov 12 22:43:57.054183 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 22:43:57.182913 sshd[4206]: Connection closed by 10.0.0.1 port 56094 Nov 12 22:43:57.183329 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:57.195180 systemd[1]: sshd@16-10.0.0.46:22-10.0.0.1:56094.service: Deactivated successfully. Nov 12 22:43:57.197018 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 22:43:57.198565 systemd-logind[1469]: Session 17 logged out. Waiting for processes to exit. Nov 12 22:43:57.205354 systemd[1]: Started sshd@17-10.0.0.46:22-10.0.0.1:56104.service - OpenSSH per-connection server daemon (10.0.0.1:56104). Nov 12 22:43:57.206418 systemd-logind[1469]: Removed session 17. 
Nov 12 22:43:57.241641 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 56104 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:43:57.243321 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:57.247393 systemd-logind[1469]: New session 18 of user core. Nov 12 22:43:57.261203 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 22:43:57.927078 sshd[4220]: Connection closed by 10.0.0.1 port 56104 Nov 12 22:43:57.927726 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Nov 12 22:43:57.945642 systemd[1]: sshd@17-10.0.0.46:22-10.0.0.1:56104.service: Deactivated successfully. Nov 12 22:43:57.947756 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 22:43:57.949370 systemd-logind[1469]: Session 18 logged out. Waiting for processes to exit. Nov 12 22:43:57.956334 systemd[1]: Started sshd@18-10.0.0.46:22-10.0.0.1:56118.service - OpenSSH per-connection server daemon (10.0.0.1:56118). Nov 12 22:43:57.957538 systemd-logind[1469]: Removed session 18. Nov 12 22:43:58.002876 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 56118 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:43:58.004565 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:43:58.009456 systemd-logind[1469]: New session 19 of user core. Nov 12 22:43:58.018170 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 22:44:00.069153 sshd[4232]: Connection closed by 10.0.0.1 port 56118 Nov 12 22:44:00.069648 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Nov 12 22:44:00.083032 systemd[1]: sshd@18-10.0.0.46:22-10.0.0.1:56118.service: Deactivated successfully. Nov 12 22:44:00.085398 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 22:44:00.087525 systemd-logind[1469]: Session 19 logged out. Waiting for processes to exit. Nov 12 22:44:00.096769 systemd[1]: Started sshd@19-10.0.0.46:22-10.0.0.1:56442.service - OpenSSH per-connection server daemon (10.0.0.1:56442). Nov 12 22:44:00.098583 systemd-logind[1469]: Removed session 19. Nov 12 22:44:00.136531 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 56442 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:44:00.138679 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:44:00.143453 systemd-logind[1469]: New session 20 of user core. Nov 12 22:44:00.157200 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 22:44:00.444426 sshd[4265]: Connection closed by 10.0.0.1 port 56442 Nov 12 22:44:00.444783 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Nov 12 22:44:00.454654 systemd[1]: sshd@19-10.0.0.46:22-10.0.0.1:56442.service: Deactivated successfully. Nov 12 22:44:00.456853 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 22:44:00.458931 systemd-logind[1469]: Session 20 logged out. Waiting for processes to exit. Nov 12 22:44:00.468360 systemd[1]: Started sshd@20-10.0.0.46:22-10.0.0.1:56444.service - OpenSSH per-connection server daemon (10.0.0.1:56444). Nov 12 22:44:00.469365 systemd-logind[1469]: Removed session 20. 
Nov 12 22:44:00.515088 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 56444 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:44:00.516950 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:44:00.521555 systemd-logind[1469]: New session 21 of user core. Nov 12 22:44:00.531368 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 22:44:00.644040 sshd[4278]: Connection closed by 10.0.0.1 port 56444 Nov 12 22:44:00.644456 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Nov 12 22:44:00.649028 systemd[1]: sshd@20-10.0.0.46:22-10.0.0.1:56444.service: Deactivated successfully. Nov 12 22:44:00.651213 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 22:44:00.651901 systemd-logind[1469]: Session 21 logged out. Waiting for processes to exit. Nov 12 22:44:00.652927 systemd-logind[1469]: Removed session 21. Nov 12 22:44:05.479945 kubelet[2668]: E1112 22:44:05.479885 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:44:05.658309 systemd[1]: Started sshd@21-10.0.0.46:22-10.0.0.1:56450.service - OpenSSH per-connection server daemon (10.0.0.1:56450). Nov 12 22:44:05.700625 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 56450 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:44:05.702395 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:44:05.706721 systemd-logind[1469]: New session 22 of user core. Nov 12 22:44:05.717249 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 22:44:05.850207 sshd[4294]: Connection closed by 10.0.0.1 port 56450 Nov 12 22:44:05.850520 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Nov 12 22:44:05.854859 systemd[1]: sshd@21-10.0.0.46:22-10.0.0.1:56450.service: Deactivated successfully. Nov 12 22:44:05.857127 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 22:44:05.857824 systemd-logind[1469]: Session 22 logged out. Waiting for processes to exit. Nov 12 22:44:05.858774 systemd-logind[1469]: Removed session 22. Nov 12 22:44:06.479987 kubelet[2668]: E1112 22:44:06.479931 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:44:09.479264 kubelet[2668]: E1112 22:44:09.479205 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:44:10.862362 systemd[1]: Started sshd@22-10.0.0.46:22-10.0.0.1:51048.service - OpenSSH per-connection server daemon (10.0.0.1:51048). Nov 12 22:44:10.907447 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 51048 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:44:10.909680 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:44:10.913977 systemd-logind[1469]: New session 23 of user core. Nov 12 22:44:10.925380 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 12 22:44:11.045310 sshd[4311]: Connection closed by 10.0.0.1 port 51048 Nov 12 22:44:11.045666 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Nov 12 22:44:11.050560 systemd[1]: sshd@22-10.0.0.46:22-10.0.0.1:51048.service: Deactivated successfully. Nov 12 22:44:11.052483 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 22:44:11.053209 systemd-logind[1469]: Session 23 logged out. Waiting for processes to exit. Nov 12 22:44:11.054083 systemd-logind[1469]: Removed session 23. Nov 12 22:44:12.480654 kubelet[2668]: E1112 22:44:12.480591 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:44:16.057658 systemd[1]: Started sshd@23-10.0.0.46:22-10.0.0.1:51052.service - OpenSSH per-connection server daemon (10.0.0.1:51052). Nov 12 22:44:16.098858 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 51052 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:44:16.100549 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:44:16.104659 systemd-logind[1469]: New session 24 of user core. Nov 12 22:44:16.114352 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 22:44:16.224126 sshd[4325]: Connection closed by 10.0.0.1 port 51052 Nov 12 22:44:16.224551 sshd-session[4323]: pam_unix(sshd:session): session closed for user core Nov 12 22:44:16.228637 systemd[1]: sshd@23-10.0.0.46:22-10.0.0.1:51052.service: Deactivated successfully. Nov 12 22:44:16.231319 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 22:44:16.232139 systemd-logind[1469]: Session 24 logged out. Waiting for processes to exit. Nov 12 22:44:16.233268 systemd-logind[1469]: Removed session 24. Nov 12 22:44:21.242910 systemd[1]: Started sshd@24-10.0.0.46:22-10.0.0.1:46298.service - OpenSSH per-connection server daemon (10.0.0.1:46298). Nov 12 22:44:21.283793 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 46298 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:44:21.285295 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:44:21.289096 systemd-logind[1469]: New session 25 of user core. Nov 12 22:44:21.298182 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 12 22:44:21.408596 sshd[4339]: Connection closed by 10.0.0.1 port 46298 Nov 12 22:44:21.409120 sshd-session[4337]: pam_unix(sshd:session): session closed for user core Nov 12 22:44:21.421536 systemd[1]: sshd@24-10.0.0.46:22-10.0.0.1:46298.service: Deactivated successfully. Nov 12 22:44:21.423477 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 22:44:21.425349 systemd-logind[1469]: Session 25 logged out. Waiting for processes to exit. Nov 12 22:44:21.426728 systemd[1]: Started sshd@25-10.0.0.46:22-10.0.0.1:46306.service - OpenSSH per-connection server daemon (10.0.0.1:46306). Nov 12 22:44:21.427458 systemd-logind[1469]: Removed session 25. Nov 12 22:44:21.470547 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 46306 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:44:21.472422 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:44:21.477458 systemd-logind[1469]: New session 26 of user core. Nov 12 22:44:21.486243 systemd[1]: Started session-26.scope - Session 26 of User core. 
Nov 12 22:44:22.896080 containerd[1486]: time="2024-11-12T22:44:22.895907718Z" level=info msg="StopContainer for \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\" with timeout 30 (s)" Nov 12 22:44:22.901010 containerd[1486]: time="2024-11-12T22:44:22.900955403Z" level=info msg="Stop container \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\" with signal terminated" Nov 12 22:44:22.916388 systemd[1]: cri-containerd-ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a.scope: Deactivated successfully. Nov 12 22:44:22.936812 containerd[1486]: time="2024-11-12T22:44:22.936745547Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 22:44:22.940189 containerd[1486]: time="2024-11-12T22:44:22.940139055Z" level=info msg="StopContainer for \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\" with timeout 2 (s)" Nov 12 22:44:22.941014 containerd[1486]: time="2024-11-12T22:44:22.940968052Z" level=info msg="Stop container \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\" with signal terminated" Nov 12 22:44:22.948994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a-rootfs.mount: Deactivated successfully. Nov 12 22:44:22.949905 systemd-networkd[1397]: lxc_health: Link DOWN Nov 12 22:44:22.949911 systemd-networkd[1397]: lxc_health: Lost carrier Nov 12 22:44:22.971837 containerd[1486]: time="2024-11-12T22:44:22.971740990Z" level=info msg="shim disconnected" id=ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a namespace=k8s.io Nov 12 22:44:22.971837 containerd[1486]: time="2024-11-12T22:44:22.971828025Z" level=warning msg="cleaning up after shim disconnected" id=ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a namespace=k8s.io Nov 12 22:44:22.972042 containerd[1486]: time="2024-11-12T22:44:22.971861208Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:44:22.982108 systemd[1]: cri-containerd-74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4.scope: Deactivated successfully. Nov 12 22:44:22.982456 systemd[1]: cri-containerd-74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4.scope: Consumed 7.243s CPU time. Nov 12 22:44:23.006758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4-rootfs.mount: Deactivated successfully. Nov 12 22:44:23.025542 containerd[1486]: time="2024-11-12T22:44:23.025496825Z" level=info msg="StopContainer for \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\" returns successfully" Nov 12 22:44:23.029373 containerd[1486]: time="2024-11-12T22:44:23.029326290Z" level=info msg="StopPodSandbox for \"cb1e0ad92d3518197368fc3abbbef772de4011d04b7f9fccd9f3a1a2bb8a796d\"" Nov 12 22:44:23.031033 containerd[1486]: time="2024-11-12T22:44:23.030975887Z" level=info msg="Container to stop \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:44:23.033534 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb1e0ad92d3518197368fc3abbbef772de4011d04b7f9fccd9f3a1a2bb8a796d-shm.mount: Deactivated successfully. 
Nov 12 22:44:23.034774 containerd[1486]: time="2024-11-12T22:44:23.034729388Z" level=info msg="shim disconnected" id=74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4 namespace=k8s.io Nov 12 22:44:23.034876 containerd[1486]: time="2024-11-12T22:44:23.034857452Z" level=warning msg="cleaning up after shim disconnected" id=74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4 namespace=k8s.io Nov 12 22:44:23.034876 containerd[1486]: time="2024-11-12T22:44:23.034870406Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:44:23.038235 systemd[1]: cri-containerd-cb1e0ad92d3518197368fc3abbbef772de4011d04b7f9fccd9f3a1a2bb8a796d.scope: Deactivated successfully. Nov 12 22:44:23.053025 containerd[1486]: time="2024-11-12T22:44:23.052947404Z" level=warning msg="cleanup warnings time=\"2024-11-12T22:44:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 12 22:44:23.067438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb1e0ad92d3518197368fc3abbbef772de4011d04b7f9fccd9f3a1a2bb8a796d-rootfs.mount: Deactivated successfully. Nov 12 22:44:23.086692 containerd[1486]: time="2024-11-12T22:44:23.086641758Z" level=info msg="StopContainer for \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\" returns successfully" Nov 12 22:44:23.087155 containerd[1486]: time="2024-11-12T22:44:23.087119447Z" level=info msg="StopPodSandbox for \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\"" Nov 12 22:44:23.087221 containerd[1486]: time="2024-11-12T22:44:23.087148242Z" level=info msg="Container to stop \"2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:44:23.087221 containerd[1486]: time="2024-11-12T22:44:23.087189460Z" level=info msg="Container to stop \"1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:44:23.087221 containerd[1486]: time="2024-11-12T22:44:23.087197445Z" level=info msg="Container to stop \"dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:44:23.087221 containerd[1486]: time="2024-11-12T22:44:23.087205270Z" level=info msg="Container to stop \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:44:23.087221 containerd[1486]: time="2024-11-12T22:44:23.087212974Z" level=info msg="Container to stop \"66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:44:23.095965 systemd[1]: cri-containerd-1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0.scope: Deactivated successfully. 
Nov 12 22:44:23.103017 containerd[1486]: time="2024-11-12T22:44:23.102924251Z" level=info msg="shim disconnected" id=cb1e0ad92d3518197368fc3abbbef772de4011d04b7f9fccd9f3a1a2bb8a796d namespace=k8s.io Nov 12 22:44:23.103017 containerd[1486]: time="2024-11-12T22:44:23.103003472Z" level=warning msg="cleaning up after shim disconnected" id=cb1e0ad92d3518197368fc3abbbef772de4011d04b7f9fccd9f3a1a2bb8a796d namespace=k8s.io Nov 12 22:44:23.103017 containerd[1486]: time="2024-11-12T22:44:23.103012679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:44:23.117648 containerd[1486]: time="2024-11-12T22:44:23.115818450Z" level=warning msg="cleanup warnings time=\"2024-11-12T22:44:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 12 22:44:23.117648 containerd[1486]: time="2024-11-12T22:44:23.117640124Z" level=info msg="TearDown network for sandbox \"cb1e0ad92d3518197368fc3abbbef772de4011d04b7f9fccd9f3a1a2bb8a796d\" successfully" Nov 12 22:44:23.117980 containerd[1486]: time="2024-11-12T22:44:23.117673899Z" level=info msg="StopPodSandbox for \"cb1e0ad92d3518197368fc3abbbef772de4011d04b7f9fccd9f3a1a2bb8a796d\" returns successfully" Nov 12 22:44:23.174159 containerd[1486]: time="2024-11-12T22:44:23.173921997Z" level=info msg="shim disconnected" id=1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0 namespace=k8s.io Nov 12 22:44:23.174159 containerd[1486]: time="2024-11-12T22:44:23.173987070Z" level=warning msg="cleaning up after shim disconnected" id=1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0 namespace=k8s.io Nov 12 22:44:23.174159 containerd[1486]: time="2024-11-12T22:44:23.173995867Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:44:23.189793 containerd[1486]: time="2024-11-12T22:44:23.189729968Z" level=info msg="TearDown network for sandbox \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\" successfully" Nov 12 22:44:23.189793 containerd[1486]: time="2024-11-12T22:44:23.189778961Z" level=info msg="StopPodSandbox for \"1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0\" returns successfully" Nov 12 22:44:23.197897 kubelet[2668]: I1112 22:44:23.197864 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzm4m\" (UniqueName: \"kubernetes.io/projected/8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1-kube-api-access-tzm4m\") pod \"8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1\" (UID: \"8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1\") " Nov 12 22:44:23.198432 kubelet[2668]: I1112 22:44:23.197911 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1-cilium-config-path\") pod \"8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1\" (UID: \"8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1\") " Nov 12 22:44:23.201656 kubelet[2668]: I1112 22:44:23.201626 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1" (UID: "8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:44:23.202336 kubelet[2668]: I1112 22:44:23.202312 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1-kube-api-access-tzm4m" (OuterVolumeSpecName: "kube-api-access-tzm4m") pod "8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1" (UID: "8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1"). InnerVolumeSpecName "kube-api-access-tzm4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:44:23.298865 kubelet[2668]: I1112 22:44:23.298779 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-lib-modules\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.298865 kubelet[2668]: I1112 22:44:23.298851 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cilium-cgroup\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.298865 kubelet[2668]: I1112 22:44:23.298892 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-clustermesh-secrets\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.299165 kubelet[2668]: I1112 22:44:23.298936 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2ds2\" (UniqueName: \"kubernetes.io/projected/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-kube-api-access-b2ds2\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.299165 kubelet[2668]: I1112 22:44:23.298962 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-bpf-maps\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.299165 kubelet[2668]: I1112 22:44:23.298968 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:44:23.299165 kubelet[2668]: I1112 22:44:23.298974 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:44:23.299165 kubelet[2668]: I1112 22:44:23.298986 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cilium-run\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.299309 kubelet[2668]: I1112 22:44:23.299073 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:44:23.299309 kubelet[2668]: I1112 22:44:23.299089 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cni-path\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.299309 kubelet[2668]: I1112 22:44:23.299112 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:44:23.299309 kubelet[2668]: I1112 22:44:23.299120 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-host-proc-sys-kernel\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.299309 kubelet[2668]: I1112 22:44:23.299139 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-host-proc-sys-net\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.299428 kubelet[2668]: I1112 22:44:23.299137 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cni-path" (OuterVolumeSpecName: "cni-path") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:44:23.299428 kubelet[2668]: I1112 22:44:23.299157 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-xtables-lock\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.299428 kubelet[2668]: I1112 22:44:23.299186 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-etc-cni-netd\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.299428 kubelet[2668]: I1112 22:44:23.299213 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cilium-config-path\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.299428 kubelet[2668]: I1112 22:44:23.299229 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-hostproc\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.299428 kubelet[2668]: I1112 22:44:23.299247 2668 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-hubble-tls\") pod \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\" (UID: \"ecb89379-1e7f-4d7a-a674-bb54fae4ebd2\") " Nov 12 22:44:23.299577 kubelet[2668]: I1112 22:44:23.299296 2668 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.299577 kubelet[2668]: I1112 22:44:23.299307 2668 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.299577 kubelet[2668]: I1112 22:44:23.299318 2668 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.299577 kubelet[2668]: I1112 22:44:23.299329 2668 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.299577 kubelet[2668]: I1112 22:44:23.299339 2668 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.299577 kubelet[2668]: I1112 22:44:23.299348 2668 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.299577 kubelet[2668]: I1112 22:44:23.299359 2668 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tzm4m\" (UniqueName: 
\"kubernetes.io/projected/8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1-kube-api-access-tzm4m\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.301716 kubelet[2668]: I1112 22:44:23.299842 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:44:23.301716 kubelet[2668]: I1112 22:44:23.299907 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:44:23.301716 kubelet[2668]: I1112 22:44:23.299931 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:44:23.302202 kubelet[2668]: I1112 22:44:23.302164 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-hostproc" (OuterVolumeSpecName: "hostproc") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:44:23.302237 kubelet[2668]: I1112 22:44:23.302213 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:44:23.302522 kubelet[2668]: I1112 22:44:23.302488 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-kube-api-access-b2ds2" (OuterVolumeSpecName: "kube-api-access-b2ds2") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "kube-api-access-b2ds2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:44:23.302863 kubelet[2668]: I1112 22:44:23.302825 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:44:23.303479 kubelet[2668]: I1112 22:44:23.303448 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 22:44:23.303886 kubelet[2668]: I1112 22:44:23.303849 2668 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" (UID: "ecb89379-1e7f-4d7a-a674-bb54fae4ebd2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:44:23.399929 kubelet[2668]: I1112 22:44:23.399856 2668 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-b2ds2\" (UniqueName: \"kubernetes.io/projected/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-kube-api-access-b2ds2\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.399929 kubelet[2668]: I1112 22:44:23.399905 2668 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.399929 kubelet[2668]: I1112 22:44:23.399916 2668 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.399929 kubelet[2668]: I1112 22:44:23.399925 2668 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.399929 kubelet[2668]: I1112 22:44:23.399936 2668 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.399929 kubelet[2668]: I1112 22:44:23.399948 2668 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.399929 kubelet[2668]: I1112 22:44:23.399958 2668 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.400451 kubelet[2668]: I1112 22:44:23.399969 2668 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.400451 kubelet[2668]: I1112 22:44:23.399978 2668 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 12 22:44:23.748144 kubelet[2668]: I1112 22:44:23.748030 2668 scope.go:117] "RemoveContainer" 
containerID="ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a" Nov 12 22:44:23.754774 containerd[1486]: time="2024-11-12T22:44:23.754522183Z" level=info msg="RemoveContainer for \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\"" Nov 12 22:44:23.756273 systemd[1]: Removed slice kubepods-besteffort-pod8f9a7e46_a05e_42fd_aef1_ddd1b9b389a1.slice - libcontainer container kubepods-besteffort-pod8f9a7e46_a05e_42fd_aef1_ddd1b9b389a1.slice. Nov 12 22:44:23.759557 systemd[1]: Removed slice kubepods-burstable-podecb89379_1e7f_4d7a_a674_bb54fae4ebd2.slice - libcontainer container kubepods-burstable-podecb89379_1e7f_4d7a_a674_bb54fae4ebd2.slice. Nov 12 22:44:23.759912 systemd[1]: kubepods-burstable-podecb89379_1e7f_4d7a_a674_bb54fae4ebd2.slice: Consumed 7.361s CPU time. Nov 12 22:44:23.788617 containerd[1486]: time="2024-11-12T22:44:23.788459641Z" level=info msg="RemoveContainer for \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\" returns successfully" Nov 12 22:44:23.788898 kubelet[2668]: I1112 22:44:23.788823 2668 scope.go:117] "RemoveContainer" containerID="ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a" Nov 12 22:44:23.789229 containerd[1486]: time="2024-11-12T22:44:23.789132771Z" level=error msg="ContainerStatus for \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\": not found" Nov 12 22:44:23.798215 kubelet[2668]: E1112 22:44:23.798187 2668 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\": not found" containerID="ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a" Nov 12 22:44:23.798357 kubelet[2668]: I1112 22:44:23.798299 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a"} err="failed to get container status \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad2e15dc2168c72a2aadaa919447269537a6644ff2d577c118cbf2e17c14146a\": not found" Nov 12 22:44:23.798357 kubelet[2668]: I1112 22:44:23.798315 2668 scope.go:117] "RemoveContainer" containerID="74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4" Nov 12 22:44:23.799667 containerd[1486]: time="2024-11-12T22:44:23.799630190Z" level=info msg="RemoveContainer for \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\"" Nov 12 22:44:23.820122 containerd[1486]: time="2024-11-12T22:44:23.820037088Z" level=info msg="RemoveContainer for \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\" returns successfully" Nov 12 22:44:23.820349 kubelet[2668]: I1112 22:44:23.820320 2668 scope.go:117] "RemoveContainer" containerID="2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a" Nov 12 22:44:23.821493 containerd[1486]: time="2024-11-12T22:44:23.821451359Z" level=info msg="RemoveContainer for \"2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a\"" Nov 12 22:44:23.839608 containerd[1486]: time="2024-11-12T22:44:23.839548164Z" level=info msg="RemoveContainer for \"2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a\" returns 
successfully" Nov 12 22:44:23.839933 kubelet[2668]: I1112 22:44:23.839849 2668 scope.go:117] "RemoveContainer" containerID="dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b" Nov 12 22:44:23.841341 containerd[1486]: time="2024-11-12T22:44:23.841163486Z" level=info msg="RemoveContainer for \"dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b\"" Nov 12 22:44:23.858588 containerd[1486]: time="2024-11-12T22:44:23.858535923Z" level=info msg="RemoveContainer for \"dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b\" returns successfully" Nov 12 22:44:23.858794 kubelet[2668]: I1112 22:44:23.858749 2668 scope.go:117] "RemoveContainer" containerID="1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7" Nov 12 22:44:23.859724 containerd[1486]: time="2024-11-12T22:44:23.859697202Z" level=info msg="RemoveContainer for \"1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7\"" Nov 12 22:44:23.889955 containerd[1486]: time="2024-11-12T22:44:23.889888774Z" level=info msg="RemoveContainer for \"1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7\" returns successfully" Nov 12 22:44:23.890267 kubelet[2668]: I1112 22:44:23.890237 2668 scope.go:117] "RemoveContainer" containerID="66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6" Nov 12 22:44:23.891708 containerd[1486]: time="2024-11-12T22:44:23.891666085Z" level=info msg="RemoveContainer for \"66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6\"" Nov 12 22:44:23.903428 containerd[1486]: time="2024-11-12T22:44:23.903364669Z" level=info msg="RemoveContainer for \"66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6\" returns successfully" Nov 12 22:44:23.903938 kubelet[2668]: I1112 22:44:23.903681 2668 scope.go:117] "RemoveContainer" containerID="74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4" Nov 12 22:44:23.903989 containerd[1486]: time="2024-11-12T22:44:23.903930645Z" level=error msg="ContainerStatus for \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\": not found" Nov 12 22:44:23.904128 kubelet[2668]: E1112 22:44:23.904101 2668 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\": not found" containerID="74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4" Nov 12 22:44:23.904161 kubelet[2668]: I1112 22:44:23.904150 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4"} err="failed to get container status \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\": rpc error: code = NotFound desc = an error occurred when try to find container \"74a84aef7bbbcb10f22a0b9208e932ae0a31c2fb795ac7e559c2e3404403eea4\": not found" Nov 12 22:44:23.904204 kubelet[2668]: I1112 22:44:23.904164 2668 scope.go:117] "RemoveContainer" containerID="2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a" Nov 12 22:44:23.904352 containerd[1486]: time="2024-11-12T22:44:23.904291161Z" level=error msg="ContainerStatus for \"2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a\" failed" error="rpc error: code = NotFound desc = an error 
occurred when try to find container \"2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a\": not found" Nov 12 22:44:23.904540 kubelet[2668]: E1112 22:44:23.904380 2668 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a\": not found" containerID="2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a" Nov 12 22:44:23.904540 kubelet[2668]: I1112 22:44:23.904407 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a"} err="failed to get container status \"2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f0dd0bf11a39c56d3b791954faf8919fbb92cf34661f0f9464e82ef2b24d73a\": not found" Nov 12 22:44:23.904540 kubelet[2668]: I1112 22:44:23.904423 2668 scope.go:117] "RemoveContainer" containerID="dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b" Nov 12 22:44:23.904723 containerd[1486]: time="2024-11-12T22:44:23.904678177Z" level=error msg="ContainerStatus for \"dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b\": not found" Nov 12 22:44:23.904839 kubelet[2668]: E1112 22:44:23.904822 2668 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b\": not found" containerID="dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b" Nov 12 22:44:23.904893 kubelet[2668]: I1112 22:44:23.904845 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b"} err="failed to get container status \"dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc014f1039a26fe57f41c19b5a5da5cec5aba38bd5aa8a8b8064a766ed36731b\": not found" Nov 12 22:44:23.904893 kubelet[2668]: I1112 22:44:23.904855 2668 scope.go:117] "RemoveContainer" containerID="1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7" Nov 12 22:44:23.905072 containerd[1486]: time="2024-11-12T22:44:23.905021350Z" level=error msg="ContainerStatus for \"1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7\": not found" Nov 12 22:44:23.905182 kubelet[2668]: E1112 22:44:23.905155 2668 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7\": not found" containerID="1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7" Nov 12 22:44:23.905216 kubelet[2668]: I1112 22:44:23.905197 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7"} 
err="failed to get container status \"1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"1893498cbcdbd19faeccdaef3628ff6702b83c764d00aed6657b3e86da42e3a7\": not found" Nov 12 22:44:23.905216 kubelet[2668]: I1112 22:44:23.905209 2668 scope.go:117] "RemoveContainer" containerID="66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6" Nov 12 22:44:23.905407 containerd[1486]: time="2024-11-12T22:44:23.905373359Z" level=error msg="ContainerStatus for \"66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6\": not found" Nov 12 22:44:23.905507 kubelet[2668]: E1112 22:44:23.905489 2668 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6\": not found" containerID="66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6" Nov 12 22:44:23.905577 kubelet[2668]: I1112 22:44:23.905516 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6"} err="failed to get container status \"66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6\": rpc error: code = NotFound desc = an error occurred when try to find container \"66fa427c370d5df89997be27555760670e3cd7f4e30423fdaf205ae7d0d79bc6\": not found" Nov 12 22:44:23.910510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0-rootfs.mount: Deactivated successfully. Nov 12 22:44:23.910658 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f80410818d818fc98883daa14cf0cabfc8756ce4e07f30d84e71cb21a5724b0-shm.mount: Deactivated successfully. Nov 12 22:44:23.910756 systemd[1]: var-lib-kubelet-pods-ecb89379\x2d1e7f\x2d4d7a\x2da674\x2dbb54fae4ebd2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db2ds2.mount: Deactivated successfully. Nov 12 22:44:23.910875 systemd[1]: var-lib-kubelet-pods-8f9a7e46\x2da05e\x2d42fd\x2daef1\x2dddd1b9b389a1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtzm4m.mount: Deactivated successfully. Nov 12 22:44:23.910982 systemd[1]: var-lib-kubelet-pods-ecb89379\x2d1e7f\x2d4d7a\x2da674\x2dbb54fae4ebd2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 12 22:44:23.911106 systemd[1]: var-lib-kubelet-pods-ecb89379\x2d1e7f\x2d4d7a\x2da674\x2dbb54fae4ebd2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Nov 12 22:44:24.482135 kubelet[2668]: I1112 22:44:24.482093 2668 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1" path="/var/lib/kubelet/pods/8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1/volumes" Nov 12 22:44:24.482889 kubelet[2668]: I1112 22:44:24.482858 2668 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" path="/var/lib/kubelet/pods/ecb89379-1e7f-4d7a-a674-bb54fae4ebd2/volumes" Nov 12 22:44:24.794852 sshd[4353]: Connection closed by 10.0.0.1 port 46306 Nov 12 22:44:24.795712 sshd-session[4351]: pam_unix(sshd:session): session closed for user core Nov 12 22:44:24.805727 systemd[1]: sshd@25-10.0.0.46:22-10.0.0.1:46306.service: Deactivated successfully. Nov 12 22:44:24.808015 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 22:44:24.809764 systemd-logind[1469]: Session 26 logged out. Waiting for processes to exit. Nov 12 22:44:24.821696 systemd[1]: Started sshd@26-10.0.0.46:22-10.0.0.1:46308.service - OpenSSH per-connection server daemon (10.0.0.1:46308). Nov 12 22:44:24.823210 systemd-logind[1469]: Removed session 26. Nov 12 22:44:24.864799 sshd[4515]: Accepted publickey for core from 10.0.0.1 port 46308 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:44:24.866769 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:44:24.871086 systemd-logind[1469]: New session 27 of user core. Nov 12 22:44:24.879220 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 12 22:44:25.480099 sshd[4517]: Connection closed by 10.0.0.1 port 46308 Nov 12 22:44:25.480565 sshd-session[4515]: pam_unix(sshd:session): session closed for user core Nov 12 22:44:25.499242 systemd[1]: sshd@26-10.0.0.46:22-10.0.0.1:46308.service: Deactivated successfully. Nov 12 22:44:25.502157 systemd[1]: session-27.scope: Deactivated successfully. Nov 12 22:44:25.504697 systemd-logind[1469]: Session 27 logged out. Waiting for processes to exit. 
Nov 12 22:44:25.515526 kubelet[2668]: I1112 22:44:25.515473 2668 topology_manager.go:215] "Topology Admit Handler" podUID="9462cda1-078d-4e8b-a0ec-823ebe5c3a57" podNamespace="kube-system" podName="cilium-q559q" Nov 12 22:44:25.515941 kubelet[2668]: E1112 22:44:25.515552 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" containerName="apply-sysctl-overwrites" Nov 12 22:44:25.515941 kubelet[2668]: E1112 22:44:25.515565 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" containerName="mount-bpf-fs" Nov 12 22:44:25.515941 kubelet[2668]: E1112 22:44:25.515574 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" containerName="clean-cilium-state" Nov 12 22:44:25.515941 kubelet[2668]: E1112 22:44:25.515582 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1" containerName="cilium-operator" Nov 12 22:44:25.515941 kubelet[2668]: E1112 22:44:25.515593 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" containerName="mount-cgroup" Nov 12 22:44:25.515941 kubelet[2668]: E1112 22:44:25.515601 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" containerName="cilium-agent" Nov 12 22:44:25.515941 kubelet[2668]: I1112 22:44:25.515627 2668 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb89379-1e7f-4d7a-a674-bb54fae4ebd2" containerName="cilium-agent" Nov 12 22:44:25.515941 kubelet[2668]: I1112 22:44:25.515637 2668 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9a7e46-a05e-42fd-aef1-ddd1b9b389a1" containerName="cilium-operator" Nov 12 22:44:25.518284 systemd[1]: Started sshd@27-10.0.0.46:22-10.0.0.1:46318.service - OpenSSH per-connection server daemon (10.0.0.1:46318). Nov 12 22:44:25.523035 systemd-logind[1469]: Removed session 27. Nov 12 22:44:25.533707 systemd[1]: Created slice kubepods-burstable-pod9462cda1_078d_4e8b_a0ec_823ebe5c3a57.slice - libcontainer container kubepods-burstable-pod9462cda1_078d_4e8b_a0ec_823ebe5c3a57.slice. Nov 12 22:44:25.535803 kubelet[2668]: E1112 22:44:25.535389 2668 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 22:44:25.560916 sshd[4528]: Accepted publickey for core from 10.0.0.1 port 46318 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:44:25.562444 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:44:25.566604 systemd-logind[1469]: New session 28 of user core. Nov 12 22:44:25.577186 systemd[1]: Started session-28.scope - Session 28 of User core. 
Nov 12 22:44:25.611906 kubelet[2668]: I1112 22:44:25.611869 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-cilium-run\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.611906 kubelet[2668]: I1112 22:44:25.611911 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-cni-path\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.612080 kubelet[2668]: I1112 22:44:25.611938 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-cilium-config-path\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.612080 kubelet[2668]: I1112 22:44:25.611963 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-host-proc-sys-net\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.612080 kubelet[2668]: I1112 22:44:25.612010 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx8f2\" (UniqueName: \"kubernetes.io/projected/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-kube-api-access-hx8f2\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.612080 kubelet[2668]: I1112 22:44:25.612043 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-hostproc\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.612193 kubelet[2668]: I1112 22:44:25.612100 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-xtables-lock\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.612193 kubelet[2668]: I1112 22:44:25.612152 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-bpf-maps\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.612245 kubelet[2668]: I1112 22:44:25.612207 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-hubble-tls\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.612245 kubelet[2668]: I1112 22:44:25.612235 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-clustermesh-secrets\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.612381 kubelet[2668]: I1112 22:44:25.612253 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-host-proc-sys-kernel\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.612381 kubelet[2668]: I1112 22:44:25.612271 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-cilium-ipsec-secrets\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.612381 kubelet[2668]: I1112 22:44:25.612310 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-etc-cni-netd\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.612381 kubelet[2668]: I1112 22:44:25.612379 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-lib-modules\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.612495 kubelet[2668]: I1112 22:44:25.612414 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9462cda1-078d-4e8b-a0ec-823ebe5c3a57-cilium-cgroup\") pod \"cilium-q559q\" (UID: \"9462cda1-078d-4e8b-a0ec-823ebe5c3a57\") " pod="kube-system/cilium-q559q"
Nov 12 22:44:25.628992 sshd[4530]: Connection closed by 10.0.0.1 port 46318
Nov 12 22:44:25.628354 sshd-session[4528]: pam_unix(sshd:session): session closed for user core
Nov 12 22:44:25.641923 systemd[1]: sshd@27-10.0.0.46:22-10.0.0.1:46318.service: Deactivated successfully.
Nov 12 22:44:25.644334 systemd[1]: session-28.scope: Deactivated successfully.
Nov 12 22:44:25.646103 systemd-logind[1469]: Session 28 logged out. Waiting for processes to exit.
Nov 12 22:44:25.656513 systemd[1]: Started sshd@28-10.0.0.46:22-10.0.0.1:46322.service - OpenSSH per-connection server daemon (10.0.0.1:46322).
Nov 12 22:44:25.657584 systemd-logind[1469]: Removed session 28.
Nov 12 22:44:25.695600 sshd[4537]: Accepted publickey for core from 10.0.0.1 port 46322 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA
Nov 12 22:44:25.697385 sshd-session[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 22:44:25.702218 systemd-logind[1469]: New session 29 of user core.
Nov 12 22:44:25.708230 systemd[1]: Started session-29.scope - Session 29 of User core.
Nov 12 22:44:25.838640 kubelet[2668]: E1112 22:44:25.838494 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:44:25.841126 containerd[1486]: time="2024-11-12T22:44:25.841081611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q559q,Uid:9462cda1-078d-4e8b-a0ec-823ebe5c3a57,Namespace:kube-system,Attempt:0,}"
Nov 12 22:44:26.155084 containerd[1486]: time="2024-11-12T22:44:26.154868963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 22:44:26.155084 containerd[1486]: time="2024-11-12T22:44:26.154936451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 22:44:26.155084 containerd[1486]: time="2024-11-12T22:44:26.154946731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:44:26.155296 containerd[1486]: time="2024-11-12T22:44:26.155026072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 22:44:26.180442 systemd[1]: Started cri-containerd-a20d558c14a8116635854a1a8ccedd81d69e44d860b53f5b45e89613a45b021f.scope - libcontainer container a20d558c14a8116635854a1a8ccedd81d69e44d860b53f5b45e89613a45b021f.
Nov 12 22:44:26.206932 containerd[1486]: time="2024-11-12T22:44:26.206862074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q559q,Uid:9462cda1-078d-4e8b-a0ec-823ebe5c3a57,Namespace:kube-system,Attempt:0,} returns sandbox id \"a20d558c14a8116635854a1a8ccedd81d69e44d860b53f5b45e89613a45b021f\""
Nov 12 22:44:26.207654 kubelet[2668]: E1112 22:44:26.207627 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:44:26.210077 containerd[1486]: time="2024-11-12T22:44:26.210011349Z" level=info msg="CreateContainer within sandbox \"a20d558c14a8116635854a1a8ccedd81d69e44d860b53f5b45e89613a45b021f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 12 22:44:26.226274 containerd[1486]: time="2024-11-12T22:44:26.226193542Z" level=info msg="CreateContainer within sandbox \"a20d558c14a8116635854a1a8ccedd81d69e44d860b53f5b45e89613a45b021f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c67221da551014e3ee079a7d8c6a5603bc66b2b264714a64528b7581579033e5\""
Nov 12 22:44:26.227134 containerd[1486]: time="2024-11-12T22:44:26.226819391Z" level=info msg="StartContainer for \"c67221da551014e3ee079a7d8c6a5603bc66b2b264714a64528b7581579033e5\""
Nov 12 22:44:26.261279 systemd[1]: Started cri-containerd-c67221da551014e3ee079a7d8c6a5603bc66b2b264714a64528b7581579033e5.scope - libcontainer container c67221da551014e3ee079a7d8c6a5603bc66b2b264714a64528b7581579033e5.
Nov 12 22:44:26.293619 containerd[1486]: time="2024-11-12T22:44:26.293565219Z" level=info msg="StartContainer for \"c67221da551014e3ee079a7d8c6a5603bc66b2b264714a64528b7581579033e5\" returns successfully"
Nov 12 22:44:26.302908 systemd[1]: cri-containerd-c67221da551014e3ee079a7d8c6a5603bc66b2b264714a64528b7581579033e5.scope: Deactivated successfully.
Nov 12 22:44:26.335449 containerd[1486]: time="2024-11-12T22:44:26.335361511Z" level=info msg="shim disconnected" id=c67221da551014e3ee079a7d8c6a5603bc66b2b264714a64528b7581579033e5 namespace=k8s.io
Nov 12 22:44:26.335449 containerd[1486]: time="2024-11-12T22:44:26.335420583Z" level=warning msg="cleaning up after shim disconnected" id=c67221da551014e3ee079a7d8c6a5603bc66b2b264714a64528b7581579033e5 namespace=k8s.io
Nov 12 22:44:26.335449 containerd[1486]: time="2024-11-12T22:44:26.335430022Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 22:44:26.764236 kubelet[2668]: E1112 22:44:26.764196 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:44:26.765829 containerd[1486]: time="2024-11-12T22:44:26.765789671Z" level=info msg="CreateContainer within sandbox \"a20d558c14a8116635854a1a8ccedd81d69e44d860b53f5b45e89613a45b021f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 12 22:44:26.781160 containerd[1486]: time="2024-11-12T22:44:26.781107552Z" level=info msg="CreateContainer within sandbox \"a20d558c14a8116635854a1a8ccedd81d69e44d860b53f5b45e89613a45b021f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"727ff471a9c888a1650009fa69932c99438e5a17280e0b2fdc6dc07867c07bdc\""
Nov 12 22:44:26.781643 containerd[1486]: time="2024-11-12T22:44:26.781613333Z" level=info msg="StartContainer for \"727ff471a9c888a1650009fa69932c99438e5a17280e0b2fdc6dc07867c07bdc\""
Nov 12 22:44:26.811238 systemd[1]: Started cri-containerd-727ff471a9c888a1650009fa69932c99438e5a17280e0b2fdc6dc07867c07bdc.scope - libcontainer container 727ff471a9c888a1650009fa69932c99438e5a17280e0b2fdc6dc07867c07bdc.
Nov 12 22:44:26.839503 containerd[1486]: time="2024-11-12T22:44:26.839456872Z" level=info msg="StartContainer for \"727ff471a9c888a1650009fa69932c99438e5a17280e0b2fdc6dc07867c07bdc\" returns successfully"
Nov 12 22:44:26.845796 systemd[1]: cri-containerd-727ff471a9c888a1650009fa69932c99438e5a17280e0b2fdc6dc07867c07bdc.scope: Deactivated successfully.
Nov 12 22:44:26.868475 containerd[1486]: time="2024-11-12T22:44:26.868390149Z" level=info msg="shim disconnected" id=727ff471a9c888a1650009fa69932c99438e5a17280e0b2fdc6dc07867c07bdc namespace=k8s.io
Nov 12 22:44:26.868475 containerd[1486]: time="2024-11-12T22:44:26.868461374Z" level=warning msg="cleaning up after shim disconnected" id=727ff471a9c888a1650009fa69932c99438e5a17280e0b2fdc6dc07867c07bdc namespace=k8s.io
Nov 12 22:44:26.868475 containerd[1486]: time="2024-11-12T22:44:26.868471944Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 22:44:27.719649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-727ff471a9c888a1650009fa69932c99438e5a17280e0b2fdc6dc07867c07bdc-rootfs.mount: Deactivated successfully.
Nov 12 22:44:27.769842 kubelet[2668]: E1112 22:44:27.769793 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:44:27.772559 containerd[1486]: time="2024-11-12T22:44:27.772495499Z" level=info msg="CreateContainer within sandbox \"a20d558c14a8116635854a1a8ccedd81d69e44d860b53f5b45e89613a45b021f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 12 22:44:27.801045 containerd[1486]: time="2024-11-12T22:44:27.800984414Z" level=info msg="CreateContainer within sandbox \"a20d558c14a8116635854a1a8ccedd81d69e44d860b53f5b45e89613a45b021f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a4423dd36765d02b6d2f7b136a1cea806a6051623af1b70bd516d6b3838dcff9\""
Nov 12 22:44:27.801631 containerd[1486]: time="2024-11-12T22:44:27.801582682Z" level=info msg="StartContainer for \"a4423dd36765d02b6d2f7b136a1cea806a6051623af1b70bd516d6b3838dcff9\""
Nov 12 22:44:27.837294 systemd[1]: Started cri-containerd-a4423dd36765d02b6d2f7b136a1cea806a6051623af1b70bd516d6b3838dcff9.scope - libcontainer container a4423dd36765d02b6d2f7b136a1cea806a6051623af1b70bd516d6b3838dcff9.
Nov 12 22:44:27.906084 systemd[1]: cri-containerd-a4423dd36765d02b6d2f7b136a1cea806a6051623af1b70bd516d6b3838dcff9.scope: Deactivated successfully.
Nov 12 22:44:27.956268 containerd[1486]: time="2024-11-12T22:44:27.956223302Z" level=info msg="StartContainer for \"a4423dd36765d02b6d2f7b136a1cea806a6051623af1b70bd516d6b3838dcff9\" returns successfully"
Nov 12 22:44:28.133247 containerd[1486]: time="2024-11-12T22:44:28.133033781Z" level=info msg="shim disconnected" id=a4423dd36765d02b6d2f7b136a1cea806a6051623af1b70bd516d6b3838dcff9 namespace=k8s.io
Nov 12 22:44:28.133247 containerd[1486]: time="2024-11-12T22:44:28.133107171Z" level=warning msg="cleaning up after shim disconnected" id=a4423dd36765d02b6d2f7b136a1cea806a6051623af1b70bd516d6b3838dcff9 namespace=k8s.io
Nov 12 22:44:28.133247 containerd[1486]: time="2024-11-12T22:44:28.133116368Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 22:44:28.719745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4423dd36765d02b6d2f7b136a1cea806a6051623af1b70bd516d6b3838dcff9-rootfs.mount: Deactivated successfully.
Nov 12 22:44:28.773625 kubelet[2668]: E1112 22:44:28.773590 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:44:28.775843 containerd[1486]: time="2024-11-12T22:44:28.775781326Z" level=info msg="CreateContainer within sandbox \"a20d558c14a8116635854a1a8ccedd81d69e44d860b53f5b45e89613a45b021f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 12 22:44:28.876042 containerd[1486]: time="2024-11-12T22:44:28.875998874Z" level=info msg="CreateContainer within sandbox \"a20d558c14a8116635854a1a8ccedd81d69e44d860b53f5b45e89613a45b021f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"38c46ca8062f0344e1427d0a64efce5b1f4ccd5e0b323f2878f00e4c5f54201f\""
Nov 12 22:44:28.876672 containerd[1486]: time="2024-11-12T22:44:28.876637517Z" level=info msg="StartContainer for \"38c46ca8062f0344e1427d0a64efce5b1f4ccd5e0b323f2878f00e4c5f54201f\""
Nov 12 22:44:28.909332 systemd[1]: Started cri-containerd-38c46ca8062f0344e1427d0a64efce5b1f4ccd5e0b323f2878f00e4c5f54201f.scope - libcontainer container 38c46ca8062f0344e1427d0a64efce5b1f4ccd5e0b323f2878f00e4c5f54201f.
Nov 12 22:44:28.938290 systemd[1]: cri-containerd-38c46ca8062f0344e1427d0a64efce5b1f4ccd5e0b323f2878f00e4c5f54201f.scope: Deactivated successfully.
Nov 12 22:44:29.003960 containerd[1486]: time="2024-11-12T22:44:29.003812956Z" level=info msg="StartContainer for \"38c46ca8062f0344e1427d0a64efce5b1f4ccd5e0b323f2878f00e4c5f54201f\" returns successfully"
Nov 12 22:44:29.117772 containerd[1486]: time="2024-11-12T22:44:29.117704326Z" level=info msg="shim disconnected" id=38c46ca8062f0344e1427d0a64efce5b1f4ccd5e0b323f2878f00e4c5f54201f namespace=k8s.io
Nov 12 22:44:29.117772 containerd[1486]: time="2024-11-12T22:44:29.117761926Z" level=warning msg="cleaning up after shim disconnected" id=38c46ca8062f0344e1427d0a64efce5b1f4ccd5e0b323f2878f00e4c5f54201f namespace=k8s.io
Nov 12 22:44:29.117772 containerd[1486]: time="2024-11-12T22:44:29.117770462Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 22:44:29.719462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38c46ca8062f0344e1427d0a64efce5b1f4ccd5e0b323f2878f00e4c5f54201f-rootfs.mount: Deactivated successfully.
Nov 12 22:44:29.777326 kubelet[2668]: E1112 22:44:29.777294 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:44:29.782278 containerd[1486]: time="2024-11-12T22:44:29.782234694Z" level=info msg="CreateContainer within sandbox \"a20d558c14a8116635854a1a8ccedd81d69e44d860b53f5b45e89613a45b021f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 12 22:44:29.810616 containerd[1486]: time="2024-11-12T22:44:29.810442716Z" level=info msg="CreateContainer within sandbox \"a20d558c14a8116635854a1a8ccedd81d69e44d860b53f5b45e89613a45b021f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"232b23a32fe51d2f06bd946f146c02bdd1b6ba6e8e3fbb49c40061811eb65fa5\""
Nov 12 22:44:29.811459 containerd[1486]: time="2024-11-12T22:44:29.811407509Z" level=info msg="StartContainer for \"232b23a32fe51d2f06bd946f146c02bdd1b6ba6e8e3fbb49c40061811eb65fa5\""
Nov 12 22:44:29.862221 systemd[1]: Started cri-containerd-232b23a32fe51d2f06bd946f146c02bdd1b6ba6e8e3fbb49c40061811eb65fa5.scope - libcontainer container 232b23a32fe51d2f06bd946f146c02bdd1b6ba6e8e3fbb49c40061811eb65fa5.
Nov 12 22:44:29.897836 containerd[1486]: time="2024-11-12T22:44:29.897694483Z" level=info msg="StartContainer for \"232b23a32fe51d2f06bd946f146c02bdd1b6ba6e8e3fbb49c40061811eb65fa5\" returns successfully"
Nov 12 22:44:30.327075 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 12 22:44:30.785070 kubelet[2668]: E1112 22:44:30.784067 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:44:30.807766 kubelet[2668]: I1112 22:44:30.807706 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-q559q" podStartSLOduration=5.807663386 podStartE2EDuration="5.807663386s" podCreationTimestamp="2024-11-12 22:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:44:30.807515955 +0000 UTC m=+100.565319732" watchObservedRunningTime="2024-11-12 22:44:30.807663386 +0000 UTC m=+100.565467163"
Nov 12 22:44:31.841221 kubelet[2668]: E1112 22:44:31.840263 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:44:33.544768 systemd-networkd[1397]: lxc_health: Link UP
Nov 12 22:44:33.558092 systemd-networkd[1397]: lxc_health: Gained carrier
Nov 12 22:44:33.843941 kubelet[2668]: E1112 22:44:33.843774 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:44:34.792608 kubelet[2668]: E1112 22:44:34.792558 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:44:35.537223 systemd-networkd[1397]: lxc_health: Gained IPv6LL
Nov 12 22:44:35.795262 kubelet[2668]: E1112 22:44:35.793982 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:44:36.378386 systemd[1]: run-containerd-runc-k8s.io-232b23a32fe51d2f06bd946f146c02bdd1b6ba6e8e3fbb49c40061811eb65fa5-runc.uXUYDZ.mount: Deactivated successfully.
Nov 12 22:44:38.482119 systemd[1]: run-containerd-runc-k8s.io-232b23a32fe51d2f06bd946f146c02bdd1b6ba6e8e3fbb49c40061811eb65fa5-runc.KFCteD.mount: Deactivated successfully.
Nov 12 22:44:40.480402 kubelet[2668]: E1112 22:44:40.480354 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 22:44:40.628483 sshd[4540]: Connection closed by 10.0.0.1 port 46322
Nov 12 22:44:40.628998 sshd-session[4537]: pam_unix(sshd:session): session closed for user core
Nov 12 22:44:40.632945 systemd[1]: sshd@28-10.0.0.46:22-10.0.0.1:46322.service: Deactivated successfully.
Nov 12 22:44:40.635022 systemd[1]: session-29.scope: Deactivated successfully.
Nov 12 22:44:40.635751 systemd-logind[1469]: Session 29 logged out. Waiting for processes to exit.
Nov 12 22:44:40.636848 systemd-logind[1469]: Removed session 29.
Nov 12 22:44:41.479909 kubelet[2668]: E1112 22:44:41.479850 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"