Nov 12 22:39:34.975395 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 21:10:03 -00 2024
Nov 12 22:39:34.975431 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 12 22:39:34.975446 kernel: BIOS-provided physical RAM map:
Nov 12 22:39:34.975456 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 12 22:39:34.975464 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 12 22:39:34.975474 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 12 22:39:34.975485 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 12 22:39:34.975495 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 12 22:39:34.975504 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 12 22:39:34.975517 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 12 22:39:34.975530 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 12 22:39:34.975539 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 12 22:39:34.975548 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 12 22:39:34.975557 kernel: NX (Execute Disable) protection: active
Nov 12 22:39:34.975568 kernel: APIC: Static calls initialized
Nov 12 22:39:34.975581 kernel: SMBIOS 2.8 present.
Nov 12 22:39:34.975591 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 12 22:39:34.975601 kernel: Hypervisor detected: KVM
Nov 12 22:39:34.975611 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 22:39:34.975620 kernel: kvm-clock: using sched offset of 4515582671 cycles
Nov 12 22:39:34.975630 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 22:39:34.975641 kernel: tsc: Detected 2794.748 MHz processor
Nov 12 22:39:34.975651 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 22:39:34.975661 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 22:39:34.975671 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 12 22:39:34.975685 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 12 22:39:34.975695 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 22:39:34.975705 kernel: Using GB pages for direct mapping
Nov 12 22:39:34.975715 kernel: ACPI: Early table checksum verification disabled
Nov 12 22:39:34.975725 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 12 22:39:34.975735 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:39:34.975745 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:39:34.975755 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:39:34.975769 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 12 22:39:34.975779 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:39:34.975789 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:39:34.975798 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:39:34.975808 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 22:39:34.975818 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Nov 12 22:39:34.975828 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Nov 12 22:39:34.975847 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 12 22:39:34.975861 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Nov 12 22:39:34.975872 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Nov 12 22:39:34.975883 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Nov 12 22:39:34.975893 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Nov 12 22:39:34.975904 kernel: No NUMA configuration found
Nov 12 22:39:34.975915 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 12 22:39:34.975925 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Nov 12 22:39:34.975940 kernel: Zone ranges:
Nov 12 22:39:34.975950 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 22:39:34.975961 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 12 22:39:34.975971 kernel: Normal empty
Nov 12 22:39:34.975982 kernel: Movable zone start for each node
Nov 12 22:39:34.975993 kernel: Early memory node ranges
Nov 12 22:39:34.976002 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 12 22:39:34.976012 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 12 22:39:34.976023 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 12 22:39:34.976043 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 22:39:34.976054 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 12 22:39:34.976064 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 12 22:39:34.976075 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 22:39:34.976094 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 22:39:34.976105 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 22:39:34.976116 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 22:39:34.976146 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 22:39:34.976157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 22:39:34.976173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 22:39:34.976184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 22:39:34.976194 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 22:39:34.976204 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 22:39:34.976215 kernel: TSC deadline timer available
Nov 12 22:39:34.976225 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 12 22:39:34.976236 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 22:39:34.976247 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 12 22:39:34.976257 kernel: kvm-guest: setup PV sched yield
Nov 12 22:39:34.976268 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 12 22:39:34.976283 kernel: Booting paravirtualized kernel on KVM
Nov 12 22:39:34.976294 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 22:39:34.976305 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 12 22:39:34.976316 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Nov 12 22:39:34.976324 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Nov 12 22:39:34.976333 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 12 22:39:34.976343 kernel: kvm-guest: PV spinlocks enabled
Nov 12 22:39:34.976353 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 22:39:34.976365 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 12 22:39:34.976379 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 22:39:34.976389 kernel: random: crng init done
Nov 12 22:39:34.976399 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 22:39:34.976410 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 22:39:34.976420 kernel: Fallback order for Node 0: 0
Nov 12 22:39:34.976430 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Nov 12 22:39:34.976440 kernel: Policy zone: DMA32
Nov 12 22:39:34.976451 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 22:39:34.976466 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2305K rwdata, 22736K rodata, 42968K init, 2220K bss, 136900K reserved, 0K cma-reserved)
Nov 12 22:39:34.976476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 22:39:34.976487 kernel: ftrace: allocating 37801 entries in 148 pages
Nov 12 22:39:34.976497 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 22:39:34.976507 kernel: Dynamic Preempt: voluntary
Nov 12 22:39:34.976517 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 22:39:34.976528 kernel: rcu: RCU event tracing is enabled.
Nov 12 22:39:34.976539 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 22:39:34.976550 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 22:39:34.976564 kernel: Rude variant of Tasks RCU enabled.
Nov 12 22:39:34.976575 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 22:39:34.976589 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 22:39:34.976600 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 22:39:34.976610 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 12 22:39:34.976620 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 22:39:34.976631 kernel: Console: colour VGA+ 80x25
Nov 12 22:39:34.976641 kernel: printk: console [ttyS0] enabled
Nov 12 22:39:34.976652 kernel: ACPI: Core revision 20230628
Nov 12 22:39:34.976667 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 22:39:34.976678 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 22:39:34.976688 kernel: x2apic enabled
Nov 12 22:39:34.976699 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 22:39:34.976710 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 12 22:39:34.976720 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 12 22:39:34.976731 kernel: kvm-guest: setup PV IPIs
Nov 12 22:39:34.976756 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 22:39:34.976767 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 12 22:39:34.976778 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 12 22:39:34.976790 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 12 22:39:34.976800 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 12 22:39:34.976814 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 12 22:39:34.976825 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 22:39:34.976836 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 22:39:34.976847 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 22:39:34.976861 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 22:39:34.976872 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 12 22:39:34.976883 kernel: RETBleed: Mitigation: untrained return thunk
Nov 12 22:39:34.976894 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 22:39:34.976905 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 22:39:34.976916 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 12 22:39:34.976927 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 12 22:39:34.976938 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 12 22:39:34.976949 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 22:39:34.976965 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 22:39:34.976976 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 22:39:34.976987 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 22:39:34.976998 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 12 22:39:34.977009 kernel: Freeing SMP alternatives memory: 32K
Nov 12 22:39:34.977020 kernel: pid_max: default: 32768 minimum: 301
Nov 12 22:39:34.977032 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 22:39:34.977043 kernel: landlock: Up and running.
Nov 12 22:39:34.977054 kernel: SELinux: Initializing.
Nov 12 22:39:34.977070 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 22:39:34.977094 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 22:39:34.977106 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 12 22:39:34.977117 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 22:39:34.977252 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 22:39:34.977268 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 22:39:34.977279 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 12 22:39:34.977290 kernel: ... version: 0
Nov 12 22:39:34.977306 kernel: ... bit width: 48
Nov 12 22:39:34.977317 kernel: ... generic registers: 6
Nov 12 22:39:34.977328 kernel: ... value mask: 0000ffffffffffff
Nov 12 22:39:34.977338 kernel: ... max period: 00007fffffffffff
Nov 12 22:39:34.977349 kernel: ... fixed-purpose events: 0
Nov 12 22:39:34.977360 kernel: ... event mask: 000000000000003f
Nov 12 22:39:34.977371 kernel: signal: max sigframe size: 1776
Nov 12 22:39:34.977382 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 22:39:34.977393 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 22:39:34.977404 kernel: smp: Bringing up secondary CPUs ...
Nov 12 22:39:34.977421 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 22:39:34.977434 kernel: .... node #0, CPUs: #1 #2 #3
Nov 12 22:39:34.977445 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 22:39:34.977456 kernel: smpboot: Max logical packages: 1
Nov 12 22:39:34.977467 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 12 22:39:34.977478 kernel: devtmpfs: initialized
Nov 12 22:39:34.977489 kernel: x86/mm: Memory block size: 128MB
Nov 12 22:39:34.977500 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 22:39:34.977512 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 22:39:34.977526 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 22:39:34.977537 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 22:39:34.977548 kernel: audit: initializing netlink subsys (disabled)
Nov 12 22:39:34.977559 kernel: audit: type=2000 audit(1731451174.442:1): state=initialized audit_enabled=0 res=1
Nov 12 22:39:34.977570 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 22:39:34.977582 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 22:39:34.977593 kernel: cpuidle: using governor menu
Nov 12 22:39:34.977604 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 22:39:34.977615 kernel: dca service started, version 1.12.1
Nov 12 22:39:34.977629 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 12 22:39:34.977641 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 12 22:39:34.977652 kernel: PCI: Using configuration type 1 for base access
Nov 12 22:39:34.977664 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 22:39:34.977675 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 22:39:34.977686 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 22:39:34.977697 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 22:39:34.977708 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 22:39:34.977719 kernel: ACPI: Added _OSI(Module Device)
Nov 12 22:39:34.977733 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 22:39:34.977744 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 22:39:34.977755 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 22:39:34.977766 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 22:39:34.977777 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 22:39:34.977788 kernel: ACPI: Interpreter enabled
Nov 12 22:39:34.977798 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 12 22:39:34.977810 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 22:39:34.977821 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 22:39:34.977836 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 22:39:34.977848 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 12 22:39:34.977859 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 22:39:34.978372 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 22:39:34.978560 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 12 22:39:34.978734 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 12 22:39:34.978752 kernel: PCI host bridge to bus 0000:00
Nov 12 22:39:34.978971 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 22:39:34.979190 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 22:39:34.979363 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 22:39:34.979530 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 12 22:39:34.979696 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 12 22:39:34.979867 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 12 22:39:34.980168 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 22:39:34.980432 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 12 22:39:34.980683 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 12 22:39:34.980876 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 12 22:39:34.981055 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 12 22:39:34.981272 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 12 22:39:34.981450 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 22:39:34.981662 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 22:39:34.981842 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 12 22:39:34.982018 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 12 22:39:34.982293 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 12 22:39:34.982520 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 12 22:39:34.982705 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Nov 12 22:39:34.982891 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 12 22:39:34.983089 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 12 22:39:34.983318 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 22:39:34.983565 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Nov 12 22:39:34.983750 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 12 22:39:34.983923 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 12 22:39:34.984107 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 12 22:39:34.984323 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 12 22:39:34.984504 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 12 22:39:34.984700 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 12 22:39:34.984876 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Nov 12 22:39:34.985042 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Nov 12 22:39:34.985274 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 12 22:39:34.985447 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 12 22:39:34.985464 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 22:39:34.985482 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 22:39:34.985494 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 22:39:34.985506 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 22:39:34.985517 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 12 22:39:34.985529 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 12 22:39:34.985540 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 12 22:39:34.985552 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 12 22:39:34.985563 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 12 22:39:34.985575 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 12 22:39:34.985589 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 12 22:39:34.985601 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 12 22:39:34.985612 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 12 22:39:34.985622 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 12 22:39:34.985634 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 12 22:39:34.985645 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 12 22:39:34.985656 kernel: iommu: Default domain type: Translated
Nov 12 22:39:34.985667 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 22:39:34.985678 kernel: PCI: Using ACPI for IRQ routing
Nov 12 22:39:34.985694 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 22:39:34.985706 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 12 22:39:34.985718 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 12 22:39:34.985897 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 12 22:39:34.986075 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 12 22:39:34.986279 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 22:39:34.986297 kernel: vgaarb: loaded
Nov 12 22:39:34.986309 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 22:39:34.986327 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 22:39:34.986339 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 22:39:34.986351 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 22:39:34.986395 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 22:39:34.986407 kernel: pnp: PnP ACPI init
Nov 12 22:39:34.986637 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 12 22:39:34.986656 kernel: pnp: PnP ACPI: found 6 devices
Nov 12 22:39:34.986668 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 22:39:34.986684 kernel: NET: Registered PF_INET protocol family
Nov 12 22:39:34.986696 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 22:39:34.986707 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 22:39:34.986719 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 22:39:34.986731 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 22:39:34.986743 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 22:39:34.986754 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 22:39:34.986766 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 22:39:34.986778 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 22:39:34.986794 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 22:39:34.986806 kernel: NET: Registered PF_XDP protocol family
Nov 12 22:39:34.986965 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 22:39:34.987156 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 22:39:34.987316 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 22:39:34.987471 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 12 22:39:34.987745 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 12 22:39:34.987911 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 12 22:39:34.987934 kernel: PCI: CLS 0 bytes, default 64
Nov 12 22:39:34.987947 kernel: Initialise system trusted keyrings
Nov 12 22:39:34.987958 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 22:39:34.987970 kernel: Key type asymmetric registered
Nov 12 22:39:34.987981 kernel: Asymmetric key parser 'x509' registered
Nov 12 22:39:34.987992 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 22:39:34.988004 kernel: io scheduler mq-deadline registered
Nov 12 22:39:34.988014 kernel: io scheduler kyber registered
Nov 12 22:39:34.988025 kernel: io scheduler bfq registered
Nov 12 22:39:34.988040 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 22:39:34.988053 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 12 22:39:34.988064 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 12 22:39:34.988076 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 12 22:39:34.988098 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 22:39:34.988109 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 22:39:34.988134 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 22:39:34.988146 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 22:39:34.988158 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 22:39:34.988357 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 12 22:39:34.988376 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 22:39:34.988535 kernel: rtc_cmos 00:04: registered as rtc0
Nov 12 22:39:34.988697 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T22:39:34 UTC (1731451174)
Nov 12 22:39:34.988854 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 12 22:39:34.988871 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 12 22:39:34.988883 kernel: NET: Registered PF_INET6 protocol family
Nov 12 22:39:34.988895 kernel: Segment Routing with IPv6
Nov 12 22:39:34.988912 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 22:39:34.988923 kernel: NET: Registered PF_PACKET protocol family
Nov 12 22:39:34.988935 kernel: Key type dns_resolver registered
Nov 12 22:39:34.988946 kernel: IPI shorthand broadcast: enabled
Nov 12 22:39:34.988958 kernel: sched_clock: Marking stable (802002888, 110628668)->(939113719, -26482163)
Nov 12 22:39:34.988969 kernel: registered taskstats version 1
Nov 12 22:39:34.988980 kernel: Loading compiled-in X.509 certificates
Nov 12 22:39:34.988991 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: d04cb2ddbd5c3ca82936c51f5645ef0dcbdcd3b4'
Nov 12 22:39:34.989002 kernel: Key type .fscrypt registered
Nov 12 22:39:34.989016 kernel: Key type fscrypt-provisioning registered
Nov 12 22:39:34.989028 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 22:39:34.989039 kernel: ima: Allocated hash algorithm: sha1
Nov 12 22:39:34.989050 kernel: ima: No architecture policies found
Nov 12 22:39:34.989062 kernel: clk: Disabling unused clocks
Nov 12 22:39:34.989074 kernel: Freeing unused kernel image (initmem) memory: 42968K
Nov 12 22:39:34.989097 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 22:39:34.989109 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Nov 12 22:39:34.989136 kernel: Run /init as init process
Nov 12 22:39:34.989152 kernel: with arguments:
Nov 12 22:39:34.989163 kernel: /init
Nov 12 22:39:34.989174 kernel: with environment:
Nov 12 22:39:34.989185 kernel: HOME=/
Nov 12 22:39:34.989196 kernel: TERM=linux
Nov 12 22:39:34.989207 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 22:39:34.989221 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 22:39:34.989236 systemd[1]: Detected virtualization kvm.
Nov 12 22:39:34.989252 systemd[1]: Detected architecture x86-64.
Nov 12 22:39:34.989264 systemd[1]: Running in initrd.
Nov 12 22:39:34.989276 systemd[1]: No hostname configured, using default hostname.
Nov 12 22:39:34.989288 systemd[1]: Hostname set to .
Nov 12 22:39:34.989301 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 22:39:34.989313 systemd[1]: Queued start job for default target initrd.target.
Nov 12 22:39:34.989327 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 22:39:34.989339 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 22:39:34.989357 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 22:39:34.989383 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 22:39:34.989398 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 22:39:34.989411 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 22:39:34.989426 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 22:39:34.989442 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 22:39:34.989454 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 22:39:34.989466 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 22:39:34.989479 systemd[1]: Reached target paths.target - Path Units.
Nov 12 22:39:34.989491 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 22:39:34.989503 systemd[1]: Reached target swap.target - Swaps.
Nov 12 22:39:34.989515 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 22:39:34.989527 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 22:39:34.989543 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 22:39:34.989556 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 22:39:34.989571 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 22:39:34.989584 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 22:39:34.989596 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 22:39:34.989608 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 22:39:34.989621 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 22:39:34.989634 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 22:39:34.989651 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 22:39:34.989664 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 22:39:34.989677 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 22:39:34.989690 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 22:39:34.989703 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 22:39:34.989715 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 22:39:34.989728 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 22:39:34.989742 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 22:39:34.989755 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 22:39:34.989806 systemd-journald[193]: Collecting audit messages is disabled.
Nov 12 22:39:34.989843 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 22:39:34.989859 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 22:39:34.989872 systemd-journald[193]: Journal started
Nov 12 22:39:34.989901 systemd-journald[193]: Runtime Journal (/run/log/journal/a6cfc0c54cc948e9bbdb8250185d156f) is 6.0M, max 48.4M, 42.3M free.
Nov 12 22:39:34.980558 systemd-modules-load[194]: Inserted module 'overlay'
Nov 12 22:39:34.992809 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 22:39:35.011174 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 22:39:35.013525 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 12 22:39:35.033677 kernel: Bridge firewalling registered
Nov 12 22:39:35.030465 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 22:39:35.030941 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 22:39:35.045452 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 22:39:35.048188 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 22:39:35.049278 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 22:39:35.055792 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 22:39:35.061556 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 22:39:35.064490 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 22:39:35.066832 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 22:39:35.068031 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 22:39:35.085529 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 22:39:35.094307 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 22:39:35.102220 dracut-cmdline[227]: dracut-dracut-053
Nov 12 22:39:35.106320 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 12 22:39:35.136496 systemd-resolved[230]: Positive Trust Anchors:
Nov 12 22:39:35.136516 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 22:39:35.136551 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 22:39:35.139108 systemd-resolved[230]: Defaulting to hostname 'linux'.
Nov 12 22:39:35.140488 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 22:39:35.147439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 22:39:35.209200 kernel: SCSI subsystem initialized
Nov 12 22:39:35.219158 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 22:39:35.230172 kernel: iscsi: registered transport (tcp)
Nov 12 22:39:35.252156 kernel: iscsi: registered transport (qla4xxx)
Nov 12 22:39:35.252230 kernel: QLogic iSCSI HBA Driver
Nov 12 22:39:35.305275 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 22:39:35.326399 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 22:39:35.353859 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 22:39:35.353966 kernel: device-mapper: uevent: version 1.0.3
Nov 12 22:39:35.353980 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 22:39:35.397164 kernel: raid6: avx2x4 gen() 28957 MB/s
Nov 12 22:39:35.414168 kernel: raid6: avx2x2 gen() 27109 MB/s
Nov 12 22:39:35.431343 kernel: raid6: avx2x1 gen() 21061 MB/s
Nov 12 22:39:35.431430 kernel: raid6: using algorithm avx2x4 gen() 28957 MB/s
Nov 12 22:39:35.449337 kernel: raid6: .... xor() 7292 MB/s, rmw enabled
Nov 12 22:39:35.449415 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 22:39:35.470152 kernel: xor: automatically using best checksumming function avx
Nov 12 22:39:35.639164 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 22:39:35.654685 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 22:39:35.664500 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 22:39:35.677525 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Nov 12 22:39:35.682380 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 22:39:35.691422 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 22:39:35.708098 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Nov 12 22:39:35.747364 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 22:39:35.758317 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 22:39:35.825455 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 22:39:35.834423 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 22:39:35.851300 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 22:39:35.854262 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 22:39:35.855886 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 22:39:35.856418 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 22:39:35.870228 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 12 22:39:35.888399 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 22:39:35.888602 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 22:39:35.888618 kernel: GPT:9289727 != 19775487
Nov 12 22:39:35.888632 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 22:39:35.888646 kernel: GPT:9289727 != 19775487
Nov 12 22:39:35.888659 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 22:39:35.888680 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 22:39:35.870399 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 22:39:35.890996 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 22:39:35.899103 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 22:39:35.899167 kernel: libata version 3.00 loaded.
Nov 12 22:39:35.908661 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 22:39:35.908709 kernel: AES CTR mode by8 optimization enabled
Nov 12 22:39:35.915342 kernel: ahci 0000:00:1f.2: version 3.0
Nov 12 22:39:35.938822 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 12 22:39:35.938840 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 12 22:39:35.939028 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 12 22:39:35.939220 kernel: scsi host0: ahci
Nov 12 22:39:35.939397 kernel: scsi host1: ahci
Nov 12 22:39:35.939565 kernel: scsi host2: ahci
Nov 12 22:39:35.939761 kernel: scsi host3: ahci
Nov 12 22:39:35.939935 kernel: scsi host4: ahci
Nov 12 22:39:35.940116 kernel: scsi host5: ahci
Nov 12 22:39:35.940331 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Nov 12 22:39:35.940346 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Nov 12 22:39:35.940357 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Nov 12 22:39:35.940372 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Nov 12 22:39:35.940383 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Nov 12 22:39:35.940393 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Nov 12 22:39:35.940403 kernel: BTRFS: device fsid d498af32-b44b-4318-a942-3a646ccb9d0a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (465)
Nov 12 22:39:35.940414 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (470)
Nov 12 22:39:35.930562 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 22:39:35.930686 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 22:39:35.947066 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 22:39:35.948516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 22:39:35.948700 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 22:39:35.950299 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 22:39:35.973481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 22:39:35.983229 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 22:39:35.998492 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 22:39:36.023656 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 22:39:36.038381 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 22:39:36.039994 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 22:39:36.048080 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 22:39:36.060368 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 22:39:36.063580 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 22:39:36.076377 disk-uuid[557]: Primary Header is updated.
Nov 12 22:39:36.076377 disk-uuid[557]: Secondary Entries is updated.
Nov 12 22:39:36.076377 disk-uuid[557]: Secondary Header is updated.
Nov 12 22:39:36.081156 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 22:39:36.087159 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 22:39:36.095484 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 22:39:36.240193 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 12 22:39:36.247168 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 12 22:39:36.247282 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 12 22:39:36.248502 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 12 22:39:36.248530 kernel: ata3.00: applying bridge limits
Nov 12 22:39:36.250167 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 12 22:39:36.250259 kernel: ata3.00: configured for UDMA/100
Nov 12 22:39:36.253158 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 12 22:39:36.256192 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 12 22:39:36.256290 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 12 22:39:36.301175 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 12 22:39:36.315399 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 22:39:36.315429 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 12 22:39:37.096266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 22:39:37.101464 disk-uuid[558]: The operation has completed successfully.
Nov 12 22:39:37.172528 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 22:39:37.174714 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 22:39:37.209358 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 22:39:37.220306 sh[594]: Success
Nov 12 22:39:37.267165 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 12 22:39:37.351977 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 22:39:37.375966 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 22:39:37.387728 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 22:39:37.410615 kernel: BTRFS info (device dm-0): first mount of filesystem d498af32-b44b-4318-a942-3a646ccb9d0a
Nov 12 22:39:37.410702 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 22:39:37.410719 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 22:39:37.411959 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 22:39:37.413245 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 22:39:37.440347 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 22:39:37.442427 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 22:39:37.463652 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 22:39:37.473107 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 22:39:37.502970 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 12 22:39:37.503087 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 22:39:37.503104 kernel: BTRFS info (device vda6): using free space tree
Nov 12 22:39:37.542722 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 22:39:37.560353 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 22:39:37.565411 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 12 22:39:37.593091 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 22:39:37.607261 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 22:39:37.837655 ignition[692]: Ignition 2.20.0
Nov 12 22:39:37.837680 ignition[692]: Stage: fetch-offline
Nov 12 22:39:37.837752 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Nov 12 22:39:37.837772 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:39:37.837903 ignition[692]: parsed url from cmdline: ""
Nov 12 22:39:37.837909 ignition[692]: no config URL provided
Nov 12 22:39:37.837915 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 22:39:37.837927 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Nov 12 22:39:37.837960 ignition[692]: op(1): [started] loading QEMU firmware config module
Nov 12 22:39:37.837965 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 22:39:37.975429 ignition[692]: op(1): [finished] loading QEMU firmware config module
Nov 12 22:39:37.985981 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 22:39:38.012485 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 22:39:38.055875 ignition[692]: parsing config with SHA512: c1a2f37acca4834ba8739e0c5b9a5a75c75fb943fe405d6dc1e3f000aa0b2044290112b61cb362b09f3664a4235c7dc2ccfe11bf4b3cb8129d680ac35d6f3592
Nov 12 22:39:38.074308 systemd-networkd[782]: lo: Link UP
Nov 12 22:39:38.074325 systemd-networkd[782]: lo: Gained carrier
Nov 12 22:39:38.076489 systemd-networkd[782]: Enumeration completed
Nov 12 22:39:38.076674 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 22:39:38.077134 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 22:39:38.077139 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 22:39:38.082317 systemd-networkd[782]: eth0: Link UP
Nov 12 22:39:38.082329 systemd-networkd[782]: eth0: Gained carrier
Nov 12 22:39:38.082359 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 22:39:38.089346 systemd[1]: Reached target network.target - Network.
Nov 12 22:39:38.135810 unknown[692]: fetched base config from "system"
Nov 12 22:39:38.136107 unknown[692]: fetched user config from "qemu"
Nov 12 22:39:38.136812 ignition[692]: fetch-offline: fetch-offline passed
Nov 12 22:39:38.142448 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 22:39:38.136936 ignition[692]: Ignition finished successfully
Nov 12 22:39:38.145887 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 22:39:38.148665 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 22:39:38.163594 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 22:39:38.208838 ignition[785]: Ignition 2.20.0
Nov 12 22:39:38.208850 ignition[785]: Stage: kargs
Nov 12 22:39:38.209143 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Nov 12 22:39:38.209162 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:39:38.214270 ignition[785]: kargs: kargs passed
Nov 12 22:39:38.214357 ignition[785]: Ignition finished successfully
Nov 12 22:39:38.226114 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 22:39:38.242470 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 22:39:38.524675 ignition[794]: Ignition 2.20.0
Nov 12 22:39:38.524696 ignition[794]: Stage: disks
Nov 12 22:39:38.524935 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Nov 12 22:39:38.524949 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:39:38.535265 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 22:39:38.526198 ignition[794]: disks: disks passed
Nov 12 22:39:38.549089 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 22:39:38.526264 ignition[794]: Ignition finished successfully
Nov 12 22:39:38.572465 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 22:39:38.578558 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 22:39:38.583574 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 22:39:38.585093 systemd[1]: Reached target basic.target - Basic System.
Nov 12 22:39:38.621537 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 22:39:38.658695 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 22:39:38.671886 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 22:39:38.690344 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 22:39:38.967183 kernel: EXT4-fs (vda9): mounted filesystem 62325592-ead9-4e81-b706-99baa0cf9fff r/w with ordered data mode. Quota mode: none.
Nov 12 22:39:38.967992 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 22:39:38.969774 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 22:39:38.981443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 22:39:38.985992 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 22:39:38.989731 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 22:39:38.991557 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812)
Nov 12 22:39:38.989810 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 22:39:38.989860 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 22:39:38.995156 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 12 22:39:38.995194 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 22:39:38.995223 kernel: BTRFS info (device vda6): using free space tree
Nov 12 22:39:38.999159 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 22:39:39.004500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 22:39:39.006799 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 22:39:39.012417 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 22:39:39.069983 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 22:39:39.075445 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Nov 12 22:39:39.083928 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 22:39:39.090183 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 22:39:39.238310 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 22:39:39.252378 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 22:39:39.254570 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 22:39:39.264395 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 22:39:39.265980 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 12 22:39:39.304315 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 22:39:39.309223 ignition[924]: INFO : Ignition 2.20.0
Nov 12 22:39:39.309223 ignition[924]: INFO : Stage: mount
Nov 12 22:39:39.311166 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 22:39:39.311166 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 22:39:39.314387 ignition[924]: INFO : mount: mount passed
Nov 12 22:39:39.315244 ignition[924]: INFO : Ignition finished successfully
Nov 12 22:39:39.318070 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 22:39:39.331545 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 22:39:39.340391 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 22:39:39.357034 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
Nov 12 22:39:39.357098 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 12 22:39:39.357114 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 22:39:39.359156 kernel: BTRFS info (device vda6): using free space tree
Nov 12 22:39:39.362155 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 22:39:39.365411 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 22:39:39.394568 ignition[956]: INFO : Ignition 2.20.0 Nov 12 22:39:39.394568 ignition[956]: INFO : Stage: files Nov 12 22:39:39.396573 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:39:39.396573 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:39:39.399602 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Nov 12 22:39:39.401184 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 22:39:39.401184 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 22:39:39.407278 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 22:39:39.409111 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 22:39:39.411175 unknown[956]: wrote ssh authorized keys file for user: core Nov 12 22:39:39.412498 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 22:39:39.414058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 12 22:39:39.414058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 12 22:39:39.414058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 22:39:39.414058 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 22:39:39.427977 systemd-networkd[782]: eth0: Gained IPv6LL Nov 12 22:39:39.482416 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 12 22:39:39.877775 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 22:39:39.877775 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 22:39:39.882898 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 12 22:39:40.368624 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Nov 12 22:39:40.597158 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 22:39:40.597158 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Nov 12 22:39:40.601237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 22:39:40.601237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:39:40.601237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 22:39:40.601237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 22:39:40.601237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Nov 12 22:39:40.601237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:39:40.601237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 22:39:40.601237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:39:40.601237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 22:39:40.601237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 22:39:40.601237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 22:39:40.601237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 22:39:40.601237 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Nov 12 22:39:41.030599 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Nov 12 22:39:41.707951 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 22:39:41.707951 ignition[956]: INFO : files: op(d): [started] processing unit "containerd.service" Nov 12 22:39:41.712498 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 12 22:39:41.715340 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 12 22:39:41.715340 ignition[956]: INFO : files: op(d): [finished] processing unit "containerd.service" Nov 12 22:39:41.715340 ignition[956]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Nov 12 22:39:41.720241 ignition[956]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:39:41.722104 ignition[956]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 22:39:41.722104 ignition[956]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Nov 12 22:39:41.722104 ignition[956]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Nov 12 22:39:41.726518 ignition[956]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 22:39:41.726518 ignition[956]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 22:39:41.726518 ignition[956]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Nov 12 22:39:41.731957 
ignition[956]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 22:39:41.780052 ignition[956]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 22:39:41.790179 ignition[956]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 22:39:41.792324 ignition[956]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 22:39:41.792324 ignition[956]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Nov 12 22:39:41.795517 ignition[956]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 22:39:41.797383 ignition[956]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:39:41.799537 ignition[956]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 22:39:41.801367 ignition[956]: INFO : files: files passed Nov 12 22:39:41.802156 ignition[956]: INFO : Ignition finished successfully Nov 12 22:39:41.805659 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 22:39:41.816347 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 22:39:41.817729 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 22:39:41.826818 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 22:39:41.827017 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 22:39:41.831102 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 22:39:41.833408 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:39:41.835232 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:39:41.836811 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 22:39:41.841374 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:39:41.841741 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 22:39:41.848441 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 22:39:41.878967 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 22:39:41.879114 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 22:39:41.881996 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 22:39:41.884513 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 22:39:41.887001 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 22:39:41.894338 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 22:39:41.910679 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:39:41.913633 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 22:39:41.931372 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
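The files stage above wrote the helm and cilium archives, several manifests under /home/core, /etc/flatcar/update.conf, the kubernetes sysext image plus its /etc/extensions symlink, a containerd drop-in and the prepare-helm unit. The user config that drove this is delivered out of band and never echoed into the journal; as a hedged illustration only, a spec-3.x Ignition config producing this kind of result combines storage.files, storage.links and systemd.units entries roughly as below (contents elided; the version number and exact layout are assumptions, not read from this boot):

  (illustrative sketch; not the actual config used on this boot)
  {
    "ignition": { "version": "3.3.0" },
    "storage": {
      "files": [
        { "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
          "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" } }
      ],
      "links": [
        { "path": "/etc/extensions/kubernetes.raw",
          "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" }
      ]
    },
    "systemd": {
      "units": [
        { "name": "prepare-helm.service", "enabled": true, "contents": "..." },
        { "name": "coreos-metadata.service", "enabled": false },
        { "name": "containerd.service",
          "dropins": [ { "name": "10-use-cgroupfs.conf", "contents": "..." } ] }
      ]
    }
  }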
Nov 12 22:39:41.932884 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:39:41.935811 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 22:39:41.938028 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 22:39:41.938197 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 22:39:41.940777 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 22:39:41.942589 systemd[1]: Stopped target basic.target - Basic System. Nov 12 22:39:41.945009 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 22:39:41.947292 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 22:39:41.949467 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 22:39:41.951999 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 22:39:41.954530 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 22:39:41.957044 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 22:39:41.959401 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 22:39:41.961973 systemd[1]: Stopped target swap.target - Swaps. Nov 12 22:39:41.964062 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 22:39:41.964230 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 22:39:41.966820 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:39:41.968590 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:39:41.971084 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 22:39:41.971255 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:39:41.973525 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 22:39:41.973693 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 22:39:41.976397 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 22:39:41.976555 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 22:39:41.978723 systemd[1]: Stopped target paths.target - Path Units. Nov 12 22:39:41.980768 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 22:39:41.984213 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:39:41.985776 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 22:39:41.987756 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 22:39:41.989847 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 22:39:41.990008 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 22:39:41.991815 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 22:39:41.991950 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 22:39:41.993942 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 22:39:41.994112 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 22:39:41.996702 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 22:39:41.996862 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 22:39:42.006412 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
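The preset operations logged a little earlier (coreos-metadata.service set to disabled, prepare-helm.service set to enabled) have the same effect as a systemd preset file shipped into the target root. Purely as an illustration, such a file (the path and file name here are assumed, not read from this system) would contain:

  # /etc/systemd/system-preset/20-ignition.preset  (hypothetical example)
  enable prepare-helm.service
  disable coreos-metadata.service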
Nov 12 22:39:42.009064 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 22:39:42.010091 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 22:39:42.010338 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:39:42.012848 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 22:39:42.013088 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 22:39:42.022028 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 22:39:42.022236 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 22:39:42.031267 ignition[1010]: INFO : Ignition 2.20.0 Nov 12 22:39:42.031267 ignition[1010]: INFO : Stage: umount Nov 12 22:39:42.033883 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 22:39:42.033883 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 22:39:42.033883 ignition[1010]: INFO : umount: umount passed Nov 12 22:39:42.033883 ignition[1010]: INFO : Ignition finished successfully Nov 12 22:39:42.035242 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 22:39:42.035392 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 22:39:42.045268 systemd[1]: Stopped target network.target - Network. Nov 12 22:39:42.046931 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 22:39:42.047052 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 22:39:42.049382 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 22:39:42.049458 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 22:39:42.052581 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 22:39:42.052657 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 22:39:42.055462 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 22:39:42.055586 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 22:39:42.058311 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 22:39:42.060828 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 22:39:42.064972 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 22:39:42.065899 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 22:39:42.066066 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 22:39:42.066180 systemd-networkd[782]: eth0: DHCPv6 lease lost Nov 12 22:39:42.069282 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 22:39:42.069489 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 22:39:42.072121 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 22:39:42.072382 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 22:39:42.081622 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 22:39:42.081708 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:39:42.084302 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 22:39:42.084429 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 22:39:42.097364 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 22:39:42.100417 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Nov 12 22:39:42.100540 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 22:39:42.103423 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 22:39:42.103514 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:39:42.105207 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 22:39:42.105277 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 22:39:42.107880 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 22:39:42.108015 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:39:42.111508 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:39:42.126066 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 22:39:42.126259 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 22:39:42.131590 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 22:39:42.131882 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:39:42.134429 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 22:39:42.134507 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 22:39:42.136352 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 22:39:42.136406 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:39:42.138408 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 22:39:42.138476 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 22:39:42.140955 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 22:39:42.141028 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 22:39:42.143372 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 22:39:42.143436 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 22:39:42.155610 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 22:39:42.158205 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 22:39:42.158342 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:39:42.160934 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 22:39:42.161006 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:39:42.163494 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 22:39:42.163577 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:39:42.165989 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 22:39:42.166063 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:39:42.169252 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 22:39:42.169415 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 22:39:42.171709 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 22:39:42.186357 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 22:39:42.196071 systemd[1]: Switching root. 
Nov 12 22:39:42.238201 systemd-journald[193]: Journal stopped Nov 12 22:39:43.552654 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Nov 12 22:39:43.552717 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 22:39:43.552739 kernel: SELinux: policy capability open_perms=1 Nov 12 22:39:43.552751 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 22:39:43.552762 kernel: SELinux: policy capability always_check_network=0 Nov 12 22:39:43.552774 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 22:39:43.552785 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 22:39:43.552796 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 22:39:43.552816 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 22:39:43.552831 kernel: audit: type=1403 audit(1731451182.683:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 22:39:43.552857 systemd[1]: Successfully loaded SELinux policy in 43.269ms. Nov 12 22:39:43.552877 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.979ms. Nov 12 22:39:43.552892 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 22:39:43.552904 systemd[1]: Detected virtualization kvm. Nov 12 22:39:43.552917 systemd[1]: Detected architecture x86-64. Nov 12 22:39:43.552930 systemd[1]: Detected first boot. Nov 12 22:39:43.552942 systemd[1]: Initializing machine ID from VM UUID. Nov 12 22:39:43.552954 zram_generator::config[1074]: No configuration found. Nov 12 22:39:43.552970 systemd[1]: Populated /etc with preset unit settings. Nov 12 22:39:43.552985 systemd[1]: Queued start job for default target multi-user.target. Nov 12 22:39:43.553000 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 22:39:43.553016 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 22:39:43.553030 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 22:39:43.553044 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 22:39:43.553058 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 22:39:43.553072 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 22:39:43.553093 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 22:39:43.553107 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 22:39:43.553134 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 22:39:43.553148 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 22:39:43.553162 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 22:39:43.553177 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 22:39:43.553190 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 22:39:43.553227 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Nov 12 22:39:43.553242 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 22:39:43.553259 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 22:39:43.553279 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 22:39:43.553293 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 22:39:43.553306 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 22:39:43.553320 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 22:39:43.553334 systemd[1]: Reached target slices.target - Slice Units. Nov 12 22:39:43.553348 systemd[1]: Reached target swap.target - Swaps. Nov 12 22:39:43.553363 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 22:39:43.553379 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 22:39:43.553393 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 22:39:43.553407 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 22:39:43.553421 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 22:39:43.553435 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 22:39:43.553448 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 22:39:43.553462 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 22:39:43.553476 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 22:39:43.553490 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 22:39:43.553508 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 22:39:43.553523 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:39:43.553552 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 22:39:43.553567 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 22:39:43.553581 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 22:39:43.553594 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 22:39:43.553609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:39:43.553625 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 22:39:43.553642 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 22:39:43.553663 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:39:43.553681 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 22:39:43.553699 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:39:43.553716 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 22:39:43.553733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:39:43.553751 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 22:39:43.553774 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Nov 12 22:39:43.553792 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 12 22:39:43.553813 kernel: fuse: init (API version 7.39) Nov 12 22:39:43.553830 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 22:39:43.553855 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 22:39:43.553869 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 22:39:43.553883 kernel: loop: module loaded Nov 12 22:39:43.553896 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 22:39:43.553913 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 22:39:43.553946 systemd-journald[1156]: Collecting audit messages is disabled. Nov 12 22:39:43.553980 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:39:43.553994 systemd-journald[1156]: Journal started Nov 12 22:39:43.554019 systemd-journald[1156]: Runtime Journal (/run/log/journal/a6cfc0c54cc948e9bbdb8250185d156f) is 6.0M, max 48.4M, 42.3M free. Nov 12 22:39:43.557261 kernel: ACPI: bus type drm_connector registered Nov 12 22:39:43.559876 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 22:39:43.563475 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 22:39:43.564739 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 22:39:43.566091 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 22:39:43.567968 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 22:39:43.569284 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 22:39:43.570704 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 22:39:43.572330 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 22:39:43.574288 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 22:39:43.574532 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 22:39:43.576393 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:39:43.576643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:39:43.578484 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 22:39:43.578713 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 22:39:43.580184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:39:43.580419 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:39:43.582022 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 22:39:43.582363 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 22:39:43.584140 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:39:43.584516 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:39:43.586327 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 22:39:43.588298 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 22:39:43.591459 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Nov 12 22:39:43.598103 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 22:39:43.616115 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 22:39:43.627222 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 22:39:43.631070 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 22:39:43.632595 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 22:39:43.638330 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 22:39:43.645164 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 22:39:43.646565 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 22:39:43.648542 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 22:39:43.650021 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 22:39:43.654345 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:39:43.664482 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 22:39:43.667917 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 22:39:43.669525 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 22:39:43.682640 systemd-journald[1156]: Time spent on flushing to /var/log/journal/a6cfc0c54cc948e9bbdb8250185d156f is 20.093ms for 947 entries. Nov 12 22:39:43.682640 systemd-journald[1156]: System Journal (/var/log/journal/a6cfc0c54cc948e9bbdb8250185d156f) is 8.0M, max 195.6M, 187.6M free. Nov 12 22:39:43.763588 systemd-journald[1156]: Received client request to flush runtime journal. Nov 12 22:39:43.699470 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 22:39:43.702490 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 22:39:43.705027 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 22:39:43.710266 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:39:43.745673 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 22:39:43.760396 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Nov 12 22:39:43.760412 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Nov 12 22:39:43.766946 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 22:39:43.768882 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 22:39:43.783462 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 22:39:43.785096 udevadm[1223]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 12 22:39:43.813323 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 22:39:43.820398 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 22:39:43.904087 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. 
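The journal sizing messages above (runtime journal at 6.0M with a 48.4M cap, persistent journal at 8.0M with a 195.6M cap after the flush) reflect journald's defaults, which are derived from the size of the backing filesystems. If fixed limits were wanted instead, a journald drop-in is the usual mechanism; the values below are placeholders, not settings taken from this machine:

  # /etc/systemd/journald.conf.d/10-size.conf  (illustrative)
  [Journal]
  RuntimeMaxUse=48M
  SystemMaxUse=196M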
Nov 12 22:39:43.904135 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Nov 12 22:39:43.914988 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 22:39:44.575515 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 22:39:44.586649 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 22:39:44.614634 systemd-udevd[1239]: Using default interface naming scheme 'v255'. Nov 12 22:39:44.635467 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 22:39:44.650450 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 22:39:44.679388 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 22:39:44.693006 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1242) Nov 12 22:39:44.691153 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 12 22:39:44.695151 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1242) Nov 12 22:39:44.767189 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1250) Nov 12 22:39:44.810255 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 22:39:44.822150 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 12 22:39:44.834570 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 22:39:44.838151 kernel: ACPI: button: Power Button [PWRF] Nov 12 22:39:44.852197 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 12 22:39:44.872146 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 12 22:39:44.880910 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 12 22:39:44.881241 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 12 22:39:44.896465 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 22:39:44.895269 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 22:39:44.900111 systemd-networkd[1247]: lo: Link UP Nov 12 22:39:44.900146 systemd-networkd[1247]: lo: Gained carrier Nov 12 22:39:44.902411 systemd-networkd[1247]: Enumeration completed Nov 12 22:39:44.902988 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:39:44.902996 systemd-networkd[1247]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 22:39:44.904006 systemd-networkd[1247]: eth0: Link UP Nov 12 22:39:44.904011 systemd-networkd[1247]: eth0: Gained carrier Nov 12 22:39:44.904029 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 22:39:44.938216 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 22:39:44.952977 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
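systemd-networkd matches eth0 against /usr/lib/systemd/network/zz-default.network and brings the link up, which leads to the DHCPv4 lease acquired just below. The shipped catch-all unit is not reproduced in the journal; a minimal .network file with the same observable behaviour (match any interface, enable DHCP) would be:

  # a minimal catch-all .network file  (sketch; the real zz-default.network may set more options)
  [Match]
  Name=*

  [Network]
  DHCP=yes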
Nov 12 22:39:44.978323 systemd-networkd[1247]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 22:39:44.999494 kernel: kvm_amd: TSC scaling supported Nov 12 22:39:44.999596 kernel: kvm_amd: Nested Virtualization enabled Nov 12 22:39:44.999650 kernel: kvm_amd: Nested Paging enabled Nov 12 22:39:45.000896 kernel: kvm_amd: LBR virtualization supported Nov 12 22:39:45.002475 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 12 22:39:45.002513 kernel: kvm_amd: Virtual GIF supported Nov 12 22:39:45.026240 kernel: EDAC MC: Ver: 3.0.0 Nov 12 22:39:45.078174 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 22:39:45.096498 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 22:39:45.098534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 22:39:45.108812 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:39:45.149023 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 22:39:45.150790 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 22:39:45.192771 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 22:39:45.202510 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 22:39:45.250274 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 22:39:45.258706 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 22:39:45.265428 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 22:39:45.269163 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 22:39:45.274652 systemd[1]: Reached target machines.target - Containers. Nov 12 22:39:45.279149 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 22:39:45.305723 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 22:39:45.321655 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 22:39:45.324383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:39:45.327715 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 22:39:45.342710 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 22:39:45.357351 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 22:39:45.368474 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 22:39:45.378171 kernel: loop0: detected capacity change from 0 to 138184 Nov 12 22:39:45.391767 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 22:39:45.440112 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 22:39:45.448231 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Nov 12 22:39:45.471344 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 22:39:45.633169 kernel: loop1: detected capacity change from 0 to 140992 Nov 12 22:39:45.666204 kernel: loop2: detected capacity change from 0 to 211296 Nov 12 22:39:45.753369 kernel: loop3: detected capacity change from 0 to 138184 Nov 12 22:39:45.770188 kernel: loop4: detected capacity change from 0 to 140992 Nov 12 22:39:45.783214 kernel: loop5: detected capacity change from 0 to 211296 Nov 12 22:39:45.791834 (sd-merge)[1309]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 22:39:45.792807 (sd-merge)[1309]: Merged extensions into '/usr'. Nov 12 22:39:45.797793 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 22:39:45.797811 systemd[1]: Reloading... Nov 12 22:39:45.882695 zram_generator::config[1343]: No configuration found. Nov 12 22:39:46.071408 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 22:39:46.189232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:39:46.256013 systemd[1]: Reloading finished in 457 ms. Nov 12 22:39:46.277042 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 22:39:46.279099 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 22:39:46.296321 systemd[1]: Starting ensure-sysext.service... Nov 12 22:39:46.312388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 22:39:46.321245 systemd[1]: Reloading requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)... Nov 12 22:39:46.321263 systemd[1]: Reloading... Nov 12 22:39:46.353653 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 22:39:46.354274 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 22:39:46.355908 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 22:39:46.356433 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Nov 12 22:39:46.356560 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Nov 12 22:39:46.366083 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 22:39:46.366106 systemd-tmpfiles[1382]: Skipping /boot Nov 12 22:39:46.388585 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 22:39:46.388607 systemd-tmpfiles[1382]: Skipping /boot Nov 12 22:39:46.397295 zram_generator::config[1417]: No configuration found. Nov 12 22:39:46.537635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:39:46.613082 systemd[1]: Reloading finished in 291 ms. Nov 12 22:39:46.636107 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 22:39:46.657284 systemd-networkd[1247]: eth0: Gained IPv6LL Nov 12 22:39:46.660536 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
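The (sd-merge) entries above fold the containerd-flatcar, docker-flatcar and kubernetes system extensions into /usr, and the subsequent reload picks up the merged tree. systemd-sysext only merges images that carry an extension-release file whose fields match the host; as a rough illustration (field values assumed, not read from the kubernetes image):

  # usr/lib/extension-release.d/extension-release.kubernetes  (illustrative)
  ID=flatcar
  SYSEXT_LEVEL=1.0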
Nov 12 22:39:46.666895 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 22:39:46.672321 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 22:39:46.678479 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 22:39:46.686324 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 22:39:46.692470 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 22:39:46.706232 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:39:46.706562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:39:46.708898 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:39:46.722978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:39:46.729569 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:39:46.731313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:39:46.731544 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:39:46.736597 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 22:39:46.741486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:39:46.741832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:39:46.745395 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:39:46.745712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:39:46.750384 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:39:46.750780 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:39:46.760015 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 22:39:46.765425 augenrules[1492]: No rules Nov 12 22:39:46.765256 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:39:46.765665 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:39:46.771516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:39:46.774750 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:39:46.779583 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:39:46.781185 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:39:46.787563 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 22:39:46.788964 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:39:46.791452 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:39:46.791945 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
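audit-rules.service comes up with an empty rule set (augenrules reports "No rules"), so no audit watches are active at this point. Rules would normally be dropped under /etc/audit/rules.d/; one illustrative watch rule (hypothetical, not present on this system) looks like:

  # /etc/audit/rules.d/99-example.rules  (hypothetical)
  -w /etc/flatcar/update.conf -p wa -k update-conf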
Nov 12 22:39:46.794284 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 22:39:46.797263 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:39:46.797618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:39:46.800367 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 22:39:46.800696 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:39:46.803202 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:39:46.803533 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:39:46.815402 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 22:39:46.826963 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:39:46.835665 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 12 22:39:46.837187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 22:39:46.839982 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 22:39:46.844287 systemd-resolved[1464]: Positive Trust Anchors: Nov 12 22:39:46.844317 systemd-resolved[1464]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 22:39:46.844357 systemd-resolved[1464]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 22:39:46.845252 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 22:39:46.851246 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 22:39:46.859946 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 22:39:46.860925 systemd-resolved[1464]: Defaulting to hostname 'linux'. Nov 12 22:39:46.861439 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 22:39:46.861604 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 22:39:46.861723 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 22:39:46.864624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 22:39:46.864970 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 22:39:46.866932 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 22:39:46.869355 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 22:39:46.869609 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 22:39:46.871534 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 12 22:39:46.871817 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 22:39:46.873244 augenrules[1519]: /sbin/augenrules: No change Nov 12 22:39:46.873880 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 22:39:46.874244 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 22:39:46.878814 systemd[1]: Finished ensure-sysext.service. Nov 12 22:39:46.882040 augenrules[1546]: No rules Nov 12 22:39:46.883434 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:39:46.883923 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 12 22:39:46.891466 systemd[1]: Reached target network.target - Network. Nov 12 22:39:46.892647 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 22:39:46.893836 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 22:39:46.895155 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 22:39:46.895253 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 22:39:46.908439 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 22:39:46.995743 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 22:39:47.741264 systemd-resolved[1464]: Clock change detected. Flushing caches. Nov 12 22:39:47.741311 systemd-timesyncd[1558]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 22:39:47.741385 systemd-timesyncd[1558]: Initial clock synchronization to Tue 2024-11-12 22:39:47.741023 UTC. Nov 12 22:39:47.742437 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 22:39:47.743975 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 22:39:47.745482 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 22:39:47.746863 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 22:39:47.748311 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 22:39:47.748346 systemd[1]: Reached target paths.target - Path Units. Nov 12 22:39:47.749414 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 22:39:47.751007 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 22:39:47.752490 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 22:39:47.754018 systemd[1]: Reached target timers.target - Timer Units. Nov 12 22:39:47.756557 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 22:39:47.761426 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 22:39:47.767880 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 22:39:47.770613 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 22:39:47.772092 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 22:39:47.773380 systemd[1]: Reached target basic.target - Basic System. Nov 12 22:39:47.774925 systemd[1]: System is tainted: cgroupsv1 Nov 12 22:39:47.774985 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
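systemd-timesyncd reaches the DHCP-provided server at 10.0.0.1:123 and the initial synchronization steps the clock, which is why systemd-resolved logs "Clock change detected" and flushes its caches. Had a specific NTP server needed to be pinned, a timesyncd drop-in would be the place; sketch only, the address below is simply the one seen in this log:

  # /etc/systemd/timesyncd.conf.d/10-ntp.conf  (illustrative)
  [Time]
  NTP=10.0.0.1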
Nov 12 22:39:47.775020 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 22:39:47.777270 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 22:39:47.780747 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 22:39:47.783662 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 22:39:47.789029 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 22:39:47.791812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 22:39:47.795084 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 22:39:47.797724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:39:47.802635 jq[1566]: false Nov 12 22:39:47.806127 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 22:39:47.815117 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 22:39:47.821380 extend-filesystems[1568]: Found loop3 Nov 12 22:39:47.823253 extend-filesystems[1568]: Found loop4 Nov 12 22:39:47.823253 extend-filesystems[1568]: Found loop5 Nov 12 22:39:47.823253 extend-filesystems[1568]: Found sr0 Nov 12 22:39:47.823253 extend-filesystems[1568]: Found vda Nov 12 22:39:47.823253 extend-filesystems[1568]: Found vda1 Nov 12 22:39:47.823253 extend-filesystems[1568]: Found vda2 Nov 12 22:39:47.823253 extend-filesystems[1568]: Found vda3 Nov 12 22:39:47.823253 extend-filesystems[1568]: Found usr Nov 12 22:39:47.823253 extend-filesystems[1568]: Found vda4 Nov 12 22:39:47.823253 extend-filesystems[1568]: Found vda6 Nov 12 22:39:47.823253 extend-filesystems[1568]: Found vda7 Nov 12 22:39:47.823253 extend-filesystems[1568]: Found vda9 Nov 12 22:39:47.823253 extend-filesystems[1568]: Checking size of /dev/vda9 Nov 12 22:39:47.821439 dbus-daemon[1564]: [system] SELinux support is enabled Nov 12 22:39:47.824921 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 22:39:47.831873 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 22:39:47.839100 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 22:39:47.851037 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 22:39:47.852564 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 22:39:47.854177 extend-filesystems[1568]: Resized partition /dev/vda9 Nov 12 22:39:47.864307 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1245) Nov 12 22:39:47.864407 extend-filesystems[1596]: resize2fs 1.47.1 (20-May-2024) Nov 12 22:39:47.872539 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 22:39:47.870107 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 22:39:47.875010 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 22:39:47.884581 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 22:39:47.894185 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 22:39:47.894616 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
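Among the services being started in this batch is prepare-helm.service - Unpack helm to /opt/bin, one of the units written by Ignition during the files stage. Its unit file is not shown in the journal; a plausible shape for such a oneshot unit (the exact command line is an assumption) is:

  # prepare-helm.service  (illustrative reconstruction)
  [Unit]
  Description=Unpack helm to /opt/bin
  ConditionPathExists=/opt/helm-v3.13.2-linux-amd64.tar.gz

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz linux-amd64/helm

  [Install]
  WantedBy=multi-user.target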
Nov 12 22:39:47.896921 jq[1601]: true Nov 12 22:39:47.902684 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 22:39:47.903082 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 22:39:47.906717 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 22:39:47.921016 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 22:39:47.922597 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 22:39:47.922995 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 22:39:47.949740 update_engine[1593]: I20241112 22:39:47.923029 1593 main.cc:92] Flatcar Update Engine starting Nov 12 22:39:47.949740 update_engine[1593]: I20241112 22:39:47.927407 1593 update_check_scheduler.cc:74] Next update check in 11m21s Nov 12 22:39:47.952888 extend-filesystems[1596]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 22:39:47.952888 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 22:39:47.952888 extend-filesystems[1596]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 22:39:47.958870 extend-filesystems[1568]: Resized filesystem in /dev/vda9 Nov 12 22:39:47.966865 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 22:39:47.967117 sshd_keygen[1602]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 22:39:47.967233 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 22:39:47.967745 (ntainerd)[1613]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 22:39:47.971380 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 12 22:39:47.971765 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 22:39:47.971870 systemd-logind[1591]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 22:39:47.971899 systemd-logind[1591]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 22:39:47.975087 systemd-logind[1591]: New seat seat0. Nov 12 22:39:47.979348 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 22:39:47.983945 jq[1612]: true Nov 12 22:39:48.033427 dbus-daemon[1564]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 12 22:39:48.039271 tar[1610]: linux-amd64/helm Nov 12 22:39:48.047188 systemd[1]: Started update-engine.service - Update Engine. Nov 12 22:39:48.051130 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 22:39:48.051353 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 22:39:48.051497 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 22:39:48.053335 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 22:39:48.053453 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 22:39:48.055552 bash[1652]: Updated "/home/core/.ssh/authorized_keys" Nov 12 22:39:48.056078 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Nov 12 22:39:48.098608 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 22:39:48.109953 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 22:39:48.132665 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 22:39:48.176332 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 22:39:48.178342 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 22:39:48.183518 locksmithd[1656]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 22:39:48.189986 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 22:39:48.190507 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 22:39:48.244312 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 22:39:48.304449 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 22:39:48.333039 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 22:39:48.336832 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 22:39:48.339026 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 22:39:48.758272 containerd[1613]: time="2024-11-12T22:39:48.757531841Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 12 22:39:48.794063 containerd[1613]: time="2024-11-12T22:39:48.793955171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:39:48.797370 containerd[1613]: time="2024-11-12T22:39:48.797284355Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:39:48.797370 containerd[1613]: time="2024-11-12T22:39:48.797349267Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 22:39:48.797370 containerd[1613]: time="2024-11-12T22:39:48.797377470Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 22:39:48.797662 containerd[1613]: time="2024-11-12T22:39:48.797634382Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 22:39:48.797662 containerd[1613]: time="2024-11-12T22:39:48.797659559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 22:39:48.797760 containerd[1613]: time="2024-11-12T22:39:48.797732256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:39:48.797760 containerd[1613]: time="2024-11-12T22:39:48.797750410Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:39:48.798066 containerd[1613]: time="2024-11-12T22:39:48.798039311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:39:48.798066 containerd[1613]: time="2024-11-12T22:39:48.798058978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 22:39:48.798123 containerd[1613]: time="2024-11-12T22:39:48.798072574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:39:48.798123 containerd[1613]: time="2024-11-12T22:39:48.798082232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 22:39:48.798196 containerd[1613]: time="2024-11-12T22:39:48.798179304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:39:48.798455 containerd[1613]: time="2024-11-12T22:39:48.798429734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 22:39:48.798637 containerd[1613]: time="2024-11-12T22:39:48.798612316Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 22:39:48.798637 containerd[1613]: time="2024-11-12T22:39:48.798631542Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 22:39:48.802569 containerd[1613]: time="2024-11-12T22:39:48.802499407Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 22:39:48.802638 containerd[1613]: time="2024-11-12T22:39:48.802620755Z" level=info msg="metadata content store policy set" policy=shared Nov 12 22:39:48.873660 containerd[1613]: time="2024-11-12T22:39:48.873577950Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 22:39:48.873792 containerd[1613]: time="2024-11-12T22:39:48.873679791Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 22:39:48.873792 containerd[1613]: time="2024-11-12T22:39:48.873699909Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 22:39:48.873792 containerd[1613]: time="2024-11-12T22:39:48.873718284Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 22:39:48.873792 containerd[1613]: time="2024-11-12T22:39:48.873736748Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 22:39:48.905610 containerd[1613]: time="2024-11-12T22:39:48.905449081Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 22:39:48.906154 containerd[1613]: time="2024-11-12T22:39:48.906049778Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 22:39:48.906544 containerd[1613]: time="2024-11-12T22:39:48.906415434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Nov 12 22:39:48.906544 containerd[1613]: time="2024-11-12T22:39:48.906452784Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 22:39:48.906544 containerd[1613]: time="2024-11-12T22:39:48.906474846Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 22:39:48.906544 containerd[1613]: time="2024-11-12T22:39:48.906495985Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 22:39:48.906544 containerd[1613]: time="2024-11-12T22:39:48.906524809Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 22:39:48.906544 containerd[1613]: time="2024-11-12T22:39:48.906546490Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 22:39:48.906544 containerd[1613]: time="2024-11-12T22:39:48.906566017Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906585112Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906601443Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906617062Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906637941Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906665503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906682876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906698215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906713944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906729193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906744692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906758247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906773125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906787362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Nov 12 22:39:48.906871 containerd[1613]: time="2024-11-12T22:39:48.906806838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.906821526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.906837235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.906856572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.906873614Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.906898410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.906937834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.906953233Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.907030307Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.907055394Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.907069902Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.907085731Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.907098044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.907113584Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 22:39:48.907341 containerd[1613]: time="2024-11-12T22:39:48.907126328Z" level=info msg="NRI interface is disabled by configuration." Nov 12 22:39:48.907684 containerd[1613]: time="2024-11-12T22:39:48.907139021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 12 22:39:48.907714 containerd[1613]: time="2024-11-12T22:39:48.907579267Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 22:39:48.907714 containerd[1613]: time="2024-11-12T22:39:48.907649639Z" level=info msg="Connect containerd service" Nov 12 22:39:48.907714 containerd[1613]: time="2024-11-12T22:39:48.907717587Z" level=info msg="using legacy CRI server" Nov 12 22:39:48.907714 containerd[1613]: time="2024-11-12T22:39:48.907731012Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 22:39:48.908074 containerd[1613]: time="2024-11-12T22:39:48.907939262Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 22:39:48.910464 containerd[1613]: time="2024-11-12T22:39:48.909639702Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 
22:39:48.910464 containerd[1613]: time="2024-11-12T22:39:48.910141082Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 22:39:48.910464 containerd[1613]: time="2024-11-12T22:39:48.910199923Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 22:39:48.910464 containerd[1613]: time="2024-11-12T22:39:48.910263462Z" level=info msg="Start subscribing containerd event" Nov 12 22:39:48.910464 containerd[1613]: time="2024-11-12T22:39:48.910313165Z" level=info msg="Start recovering state" Nov 12 22:39:48.910464 containerd[1613]: time="2024-11-12T22:39:48.910394959Z" level=info msg="Start event monitor" Nov 12 22:39:48.910464 containerd[1613]: time="2024-11-12T22:39:48.910407953Z" level=info msg="Start snapshots syncer" Nov 12 22:39:48.910464 containerd[1613]: time="2024-11-12T22:39:48.910417992Z" level=info msg="Start cni network conf syncer for default" Nov 12 22:39:48.910464 containerd[1613]: time="2024-11-12T22:39:48.910430575Z" level=info msg="Start streaming server" Nov 12 22:39:48.912281 containerd[1613]: time="2024-11-12T22:39:48.911698574Z" level=info msg="containerd successfully booted in 0.156955s" Nov 12 22:39:48.911818 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 22:39:49.039958 tar[1610]: linux-amd64/LICENSE Nov 12 22:39:49.039958 tar[1610]: linux-amd64/README.md Nov 12 22:39:49.062063 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 22:39:49.703529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:39:49.705426 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 22:39:49.710117 systemd[1]: Startup finished in 8.908s (kernel) + 6.325s (userspace) = 15.233s. Nov 12 22:39:49.720701 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:39:49.781386 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 22:39:49.796537 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:51326.service - OpenSSH per-connection server daemon (10.0.0.1:51326). Nov 12 22:39:49.981748 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 51326 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:39:49.982803 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:39:50.008643 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 22:39:50.090566 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 22:39:50.094453 systemd-logind[1591]: New session 1 of user core. Nov 12 22:39:50.133649 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 22:39:50.182640 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 22:39:50.190888 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 22:39:50.350828 systemd[1715]: Queued start job for default target default.target. Nov 12 22:39:50.351474 systemd[1715]: Created slice app.slice - User Application Slice. Nov 12 22:39:50.351507 systemd[1715]: Reached target paths.target - Paths. Nov 12 22:39:50.351524 systemd[1715]: Reached target timers.target - Timers. Nov 12 22:39:50.382222 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 22:39:50.391945 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
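A few entries back, containerd's CRI plugin reported "no network config found in /etc/cni/net.d": no CNI configuration has been installed yet, and the error clears once a network add-on (flannel, Calico, etc.) drops one in. The sketch below writes a minimal bridge conflist to that directory; the file name, network name, subnet and cniVersion are illustrative assumptions, not values taken from this system.

```python
# Hedged sketch: install a minimal CNI bridge configuration so the CRI
# plugin's "no network config found in /etc/cni/net.d" message goes away.
# In a real cluster this file normally comes from the CNI add-on, so the
# name, subnet and cniVersion below are illustrative only.
import json
import os

CNI_DIR = "/etc/cni/net.d"

conflist = {
    "cniVersion": "0.4.0",
    "name": "example-bridge",              # illustrative name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",  # illustrative subnet
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

os.makedirs(CNI_DIR, exist_ok=True)
with open(os.path.join(CNI_DIR, "10-example.conflist"), "w") as f:
    json.dump(conflist, f, indent=2)
```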
Nov 12 22:39:50.392052 systemd[1715]: Reached target sockets.target - Sockets. Nov 12 22:39:50.392074 systemd[1715]: Reached target basic.target - Basic System. Nov 12 22:39:50.392132 systemd[1715]: Reached target default.target - Main User Target. Nov 12 22:39:50.392176 systemd[1715]: Startup finished in 189ms. Nov 12 22:39:50.392589 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 22:39:50.395202 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 22:39:50.458363 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:51328.service - OpenSSH per-connection server daemon (10.0.0.1:51328). Nov 12 22:39:50.515489 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 51328 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:39:50.517628 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:39:50.523515 systemd-logind[1591]: New session 2 of user core. Nov 12 22:39:50.534550 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 22:39:50.597380 sshd[1730]: Connection closed by 10.0.0.1 port 51328 Nov 12 22:39:50.598002 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Nov 12 22:39:50.609335 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:51330.service - OpenSSH per-connection server daemon (10.0.0.1:51330). Nov 12 22:39:50.610645 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:51328.service: Deactivated successfully. Nov 12 22:39:50.613790 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 22:39:50.614867 systemd-logind[1591]: Session 2 logged out. Waiting for processes to exit. Nov 12 22:39:50.617164 systemd-logind[1591]: Removed session 2. Nov 12 22:39:50.665574 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 51330 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:39:50.667571 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:39:50.673179 systemd-logind[1591]: New session 3 of user core. Nov 12 22:39:50.682406 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 22:39:50.733824 sshd[1738]: Connection closed by 10.0.0.1 port 51330 Nov 12 22:39:50.735820 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Nov 12 22:39:50.749391 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:51336.service - OpenSSH per-connection server daemon (10.0.0.1:51336). Nov 12 22:39:50.750142 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:51330.service: Deactivated successfully. Nov 12 22:39:50.754699 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 22:39:50.755772 systemd-logind[1591]: Session 3 logged out. Waiting for processes to exit. Nov 12 22:39:50.757485 systemd-logind[1591]: Removed session 3. Nov 12 22:39:50.791167 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 51336 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:39:50.793470 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:39:50.798792 systemd-logind[1591]: New session 4 of user core. Nov 12 22:39:50.809298 systemd[1]: Started session-4.scope - Session 4 of User core. 
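Each accepted login above is tagged with the key's fingerprint (SHA256:wlg8ILG…). OpenSSH computes that as the unpadded base64 of the SHA-256 digest of the raw key blob, so it can be reproduced from the authorized_keys file the earlier "Updated /home/core/.ssh/authorized_keys" entry refers to. A small sketch:

```python
# Reproduce the OpenSSH-style fingerprint shown in the sshd log lines
# ("SHA256:..."): unpadded base64 of the SHA-256 digest of the raw key blob.
# The path below is the authorized_keys file the log says was updated.
import base64
import hashlib

def ssh_sha256_fingerprint(pubkey_line: str) -> str:
    # authorized_keys format: "<type> <base64-blob> [comment]"
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    with open("/home/core/.ssh/authorized_keys") as f:
        for line in f:
            if line.strip() and not line.startswith("#"):
                print(ssh_sha256_fingerprint(line))
```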
Nov 12 22:39:50.854628 kubelet[1698]: E1112 22:39:50.854504 1698 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:39:50.860525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:39:50.861578 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 22:39:50.871036 sshd[1747]: Connection closed by 10.0.0.1 port 51336 Nov 12 22:39:50.871397 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Nov 12 22:39:50.882202 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:51340.service - OpenSSH per-connection server daemon (10.0.0.1:51340). Nov 12 22:39:50.882740 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:51336.service: Deactivated successfully. Nov 12 22:39:50.886005 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 22:39:50.886254 systemd-logind[1591]: Session 4 logged out. Waiting for processes to exit. Nov 12 22:39:50.888256 systemd-logind[1591]: Removed session 4. Nov 12 22:39:50.917626 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 51340 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:39:50.919860 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:39:50.924832 systemd-logind[1591]: New session 5 of user core. Nov 12 22:39:50.935368 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 22:39:50.995624 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 22:39:50.996019 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:39:51.015545 sudo[1758]: pam_unix(sudo:session): session closed for user root Nov 12 22:39:51.017745 sshd[1757]: Connection closed by 10.0.0.1 port 51340 Nov 12 22:39:51.018292 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Nov 12 22:39:51.031389 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:51354.service - OpenSSH per-connection server daemon (10.0.0.1:51354). Nov 12 22:39:51.032279 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:51340.service: Deactivated successfully. Nov 12 22:39:51.035874 systemd-logind[1591]: Session 5 logged out. Waiting for processes to exit. Nov 12 22:39:51.036715 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 22:39:51.039125 systemd-logind[1591]: Removed session 5. Nov 12 22:39:51.071528 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 51354 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:39:51.073523 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:39:51.081583 systemd-logind[1591]: New session 6 of user core. Nov 12 22:39:51.096518 systemd[1]: Started session-6.scope - Session 6 of User core. 
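The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written by `kubeadm init` / `kubeadm join`, and systemd keeps restarting the unit until it appears. The sketch below shows what a minimal KubeletConfiguration at that path might look like, emitted as JSON (a subset of YAML, which the kubelet parses); the field values are illustrative assumptions, not this cluster's real settings.

```python
# Hedged sketch of a minimal KubeletConfiguration at the path the failing
# unit is looking for. On a kubeadm-managed node this file is generated by
# `kubeadm init` / `kubeadm join`; the values below are illustrative.
import json
import os

config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "authentication": {"anonymous": {"enabled": False}},
    "cgroupDriver": "cgroupfs",     # matches the CgroupDriver this node's kubelet later reports
    "staticPodPath": "/etc/kubernetes/manifests",
}

os.makedirs("/var/lib/kubelet", exist_ok=True)
with open("/var/lib/kubelet/config.yaml", "w") as f:
    json.dump(config, f, indent=2)
```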
Nov 12 22:39:51.156569 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 22:39:51.157091 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:39:51.162037 sudo[1768]: pam_unix(sudo:session): session closed for user root Nov 12 22:39:51.169708 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 12 22:39:51.170174 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:39:51.196543 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 12 22:39:51.237649 augenrules[1790]: No rules Nov 12 22:39:51.238878 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 22:39:51.239417 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 12 22:39:51.242098 sudo[1767]: pam_unix(sudo:session): session closed for user root Nov 12 22:39:51.250737 sshd[1766]: Connection closed by 10.0.0.1 port 51354 Nov 12 22:39:51.248993 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Nov 12 22:39:51.260493 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:51364.service - OpenSSH per-connection server daemon (10.0.0.1:51364). Nov 12 22:39:51.261539 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:51354.service: Deactivated successfully. Nov 12 22:39:51.264893 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 22:39:51.267000 systemd-logind[1591]: Session 6 logged out. Waiting for processes to exit. Nov 12 22:39:51.270307 systemd-logind[1591]: Removed session 6. Nov 12 22:39:51.303037 sshd[1796]: Accepted publickey for core from 10.0.0.1 port 51364 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:39:51.304644 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:39:51.309565 systemd-logind[1591]: New session 7 of user core. Nov 12 22:39:51.319189 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 22:39:51.374056 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 22:39:51.374408 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 22:39:51.924404 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 22:39:51.924827 (dockerd)[1823]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 22:39:53.185072 dockerd[1823]: time="2024-11-12T22:39:53.184950503Z" level=info msg="Starting up" Nov 12 22:39:54.060997 dockerd[1823]: time="2024-11-12T22:39:54.060918949Z" level=info msg="Loading containers: start." Nov 12 22:39:54.423941 kernel: Initializing XFRM netlink socket Nov 12 22:39:54.527298 systemd-networkd[1247]: docker0: Link UP Nov 12 22:39:54.573664 dockerd[1823]: time="2024-11-12T22:39:54.573597899Z" level=info msg="Loading containers: done." Nov 12 22:39:54.609318 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2903892338-merged.mount: Deactivated successfully. 
Nov 12 22:39:54.611590 dockerd[1823]: time="2024-11-12T22:39:54.611534336Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 22:39:54.611714 dockerd[1823]: time="2024-11-12T22:39:54.611681693Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Nov 12 22:39:54.611872 dockerd[1823]: time="2024-11-12T22:39:54.611843797Z" level=info msg="Daemon has completed initialization" Nov 12 22:39:54.666658 dockerd[1823]: time="2024-11-12T22:39:54.666570849Z" level=info msg="API listen on /run/docker.sock" Nov 12 22:39:54.666981 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 22:39:56.173767 containerd[1613]: time="2024-11-12T22:39:56.173715862Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 22:39:58.180059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754208048.mount: Deactivated successfully. Nov 12 22:40:00.289116 containerd[1613]: time="2024-11-12T22:40:00.289025026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:00.319185 containerd[1613]: time="2024-11-12T22:40:00.319103194Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799" Nov 12 22:40:00.348648 containerd[1613]: time="2024-11-12T22:40:00.348569024Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:00.392039 containerd[1613]: time="2024-11-12T22:40:00.391962848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:00.393707 containerd[1613]: time="2024-11-12T22:40:00.393498078Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 4.219734646s" Nov 12 22:40:00.393707 containerd[1613]: time="2024-11-12T22:40:00.393554103Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 22:40:00.423167 containerd[1613]: time="2024-11-12T22:40:00.423113688Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 22:40:00.988611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 22:40:00.998199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:40:01.244698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
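dockerd reports "API listen on /run/docker.sock"; the Engine API is plain HTTP over that unix socket, so a quick version check needs nothing beyond the standard library. A rough sketch, assuming the default socket path from the log (in practice you would use the docker CLI or an SDK):

```python
# Quick check against the socket the daemon says it is listening on
# ("API listen on /run/docker.sock"). HTTP/1.0 is used so the response body
# arrives unchunked. Sketch only.
import json
import socket

def docker_version(sock_path="/run/docker.sock"):
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        raw = b""
        while chunk := s.recv(4096):
            raw += chunk
    _, _, body = raw.partition(b"\r\n\r\n")
    return json.loads(body)

if __name__ == "__main__":
    info = docker_version()
    print(info.get("Version"), info.get("ApiVersion"))
```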
Nov 12 22:40:01.252658 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:40:01.407361 kubelet[2099]: E1112 22:40:01.407203 2099 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:40:01.416287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:40:01.416656 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 22:40:03.526257 containerd[1613]: time="2024-11-12T22:40:03.526169784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:03.539920 containerd[1613]: time="2024-11-12T22:40:03.539841288Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299" Nov 12 22:40:03.551947 containerd[1613]: time="2024-11-12T22:40:03.551889858Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:03.581797 containerd[1613]: time="2024-11-12T22:40:03.581729729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:03.583185 containerd[1613]: time="2024-11-12T22:40:03.583132410Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 3.159960683s" Nov 12 22:40:03.583185 containerd[1613]: time="2024-11-12T22:40:03.583182855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 22:40:03.609526 containerd[1613]: time="2024-11-12T22:40:03.609468168Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 22:40:05.913723 containerd[1613]: time="2024-11-12T22:40:05.913615370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:05.914473 containerd[1613]: time="2024-11-12T22:40:05.914333337Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660" Nov 12 22:40:05.915682 containerd[1613]: time="2024-11-12T22:40:05.915649165Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:05.919201 containerd[1613]: time="2024-11-12T22:40:05.919155662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Nov 12 22:40:05.920307 containerd[1613]: time="2024-11-12T22:40:05.920256337Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 2.310748093s" Nov 12 22:40:05.920307 containerd[1613]: time="2024-11-12T22:40:05.920292975Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 12 22:40:05.944370 containerd[1613]: time="2024-11-12T22:40:05.944320875Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 22:40:07.666768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702167810.mount: Deactivated successfully. Nov 12 22:40:08.067584 containerd[1613]: time="2024-11-12T22:40:08.067475286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:08.068763 containerd[1613]: time="2024-11-12T22:40:08.068646342Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816" Nov 12 22:40:08.070394 containerd[1613]: time="2024-11-12T22:40:08.070340911Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:08.073934 containerd[1613]: time="2024-11-12T22:40:08.073852547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:08.074742 containerd[1613]: time="2024-11-12T22:40:08.074658209Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 2.130291458s" Nov 12 22:40:08.074742 containerd[1613]: time="2024-11-12T22:40:08.074697993Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 12 22:40:08.108459 containerd[1613]: time="2024-11-12T22:40:08.108085619Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 22:40:08.666733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2018806725.mount: Deactivated successfully. 
Nov 12 22:40:09.774460 containerd[1613]: time="2024-11-12T22:40:09.774202658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:09.777632 containerd[1613]: time="2024-11-12T22:40:09.777580013Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 22:40:09.779171 containerd[1613]: time="2024-11-12T22:40:09.779114141Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:09.783239 containerd[1613]: time="2024-11-12T22:40:09.783166241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:09.784916 containerd[1613]: time="2024-11-12T22:40:09.784828018Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.676680734s" Nov 12 22:40:09.784993 containerd[1613]: time="2024-11-12T22:40:09.784898040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 22:40:09.809819 containerd[1613]: time="2024-11-12T22:40:09.809768850Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 22:40:10.430891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1056980966.mount: Deactivated successfully. 
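Each "Pulled image … size … in …" entry records both the bytes transferred and the wall-clock time, so the effective pull throughput falls out directly; for kube-apiserver, 35,137,599 bytes in 4.2197 s is roughly 8.3 MB/s. The snippet below just repeats that arithmetic for the pulls logged so far, using the numbers copied from the entries above.

```python
# Effective pull throughput, computed from the sizes and durations reported
# in the containerd "Pulled image" log entries in this section.
pulls = {
    "kube-apiserver:v1.29.10":          (35137599, 4.219734646),
    "kube-controller-manager:v1.29.10": (33663665, 3.159960683),
    "kube-scheduler:v1.29.10":          (18778044, 2.310748093),
    "kube-proxy:v1.29.10":              (28615835, 2.130291458),
    "coredns:v1.11.1":                  (18182961, 1.676680734),
}

for image, (size_bytes, seconds) in pulls.items():
    rate_mb_s = size_bytes / seconds / 1e6
    print(f"{image:<35} {rate_mb_s:5.1f} MB/s")
# Roughly 8-13 MB/s per image against registry.k8s.io during this boot.
```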
Nov 12 22:40:10.436661 containerd[1613]: time="2024-11-12T22:40:10.436584227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:10.437400 containerd[1613]: time="2024-11-12T22:40:10.437336137Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 12 22:40:10.438770 containerd[1613]: time="2024-11-12T22:40:10.438704233Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:10.441635 containerd[1613]: time="2024-11-12T22:40:10.441587401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:10.442578 containerd[1613]: time="2024-11-12T22:40:10.442514640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 632.698261ms" Nov 12 22:40:10.442578 containerd[1613]: time="2024-11-12T22:40:10.442561458Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 22:40:10.482706 containerd[1613]: time="2024-11-12T22:40:10.482658288Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 22:40:11.106516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1113836838.mount: Deactivated successfully. Nov 12 22:40:11.488821 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 22:40:11.499124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:40:11.651898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:40:11.661955 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 22:40:11.746426 kubelet[2234]: E1112 22:40:11.746126 2234 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 22:40:11.751985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 22:40:11.752358 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 22:40:14.603780 containerd[1613]: time="2024-11-12T22:40:14.603721435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:14.604474 containerd[1613]: time="2024-11-12T22:40:14.604411650Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Nov 12 22:40:14.605560 containerd[1613]: time="2024-11-12T22:40:14.605517324Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:14.608948 containerd[1613]: time="2024-11-12T22:40:14.608904717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:14.610092 containerd[1613]: time="2024-11-12T22:40:14.610057409Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.127352885s" Nov 12 22:40:14.610154 containerd[1613]: time="2024-11-12T22:40:14.610093347Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 22:40:17.284627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:40:17.302306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:40:17.324642 systemd[1]: Reloading requested from client PID 2357 ('systemctl') (unit session-7.scope)... Nov 12 22:40:17.324661 systemd[1]: Reloading... Nov 12 22:40:17.450987 zram_generator::config[2399]: No configuration found. Nov 12 22:40:17.714173 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:40:17.813975 systemd[1]: Reloading finished in 488 ms. Nov 12 22:40:17.869724 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 22:40:17.869853 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 22:40:17.870336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:40:17.884302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:40:18.025543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:40:18.032592 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:40:18.078988 kubelet[2457]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:40:18.078988 kubelet[2457]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Nov 12 22:40:18.078988 kubelet[2457]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:40:18.080477 kubelet[2457]: I1112 22:40:18.080397 2457 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 22:40:18.517294 kubelet[2457]: I1112 22:40:18.517231 2457 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 22:40:18.517294 kubelet[2457]: I1112 22:40:18.517279 2457 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 22:40:18.517619 kubelet[2457]: I1112 22:40:18.517587 2457 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 22:40:19.061230 kubelet[2457]: I1112 22:40:19.061166 2457 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:40:19.073701 kubelet[2457]: E1112 22:40:19.073642 2457 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:19.106373 kubelet[2457]: I1112 22:40:19.106331 2457 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 22:40:19.107697 kubelet[2457]: I1112 22:40:19.107657 2457 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 22:40:19.107925 kubelet[2457]: I1112 22:40:19.107888 2457 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 22:40:19.108058 kubelet[2457]: I1112 22:40:19.107941 2457 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 22:40:19.108058 kubelet[2457]: I1112 22:40:19.107953 2457 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 
22:40:19.108110 kubelet[2457]: I1112 22:40:19.108099 2457 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:40:19.108241 kubelet[2457]: I1112 22:40:19.108213 2457 kubelet.go:396] "Attempting to sync node with API server" Nov 12 22:40:19.108241 kubelet[2457]: I1112 22:40:19.108233 2457 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 22:40:19.108317 kubelet[2457]: I1112 22:40:19.108266 2457 kubelet.go:312] "Adding apiserver pod source" Nov 12 22:40:19.108317 kubelet[2457]: I1112 22:40:19.108281 2457 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 22:40:19.109583 kubelet[2457]: W1112 22:40:19.109470 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:19.109583 kubelet[2457]: E1112 22:40:19.109532 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:19.110439 kubelet[2457]: I1112 22:40:19.110135 2457 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 12 22:40:19.110788 kubelet[2457]: W1112 22:40:19.110708 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:19.110788 kubelet[2457]: E1112 22:40:19.110779 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:19.113592 kubelet[2457]: I1112 22:40:19.113529 2457 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 22:40:19.115970 kubelet[2457]: W1112 22:40:19.115927 2457 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
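Every reflector error in this stretch reduces to "dial tcp 10.0.0.16:6443: connect: connection refused": nothing is listening on the API server port yet, because the control-plane static pods have not come up. A trivial TCP probe reproduces the same check; address and port are taken from the log lines above.

```python
# Reproduce the reachability check behind the repeated
# "dial tcp 10.0.0.16:6443: connect: connection refused" errors.
import socket

def port_open(host="10.0.0.16", port=6443, timeout=1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("kube-apiserver reachable:", port_open())
```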
Nov 12 22:40:19.116972 kubelet[2457]: I1112 22:40:19.116768 2457 server.go:1256] "Started kubelet" Nov 12 22:40:19.116972 kubelet[2457]: I1112 22:40:19.116923 2457 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 22:40:19.117115 kubelet[2457]: I1112 22:40:19.117032 2457 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 22:40:19.117604 kubelet[2457]: I1112 22:40:19.117569 2457 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 22:40:19.119014 kubelet[2457]: I1112 22:40:19.118414 2457 server.go:461] "Adding debug handlers to kubelet server" Nov 12 22:40:19.119768 kubelet[2457]: I1112 22:40:19.119730 2457 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 22:40:19.122242 kubelet[2457]: E1112 22:40:19.121512 2457 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 22:40:19.122242 kubelet[2457]: I1112 22:40:19.121577 2457 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 22:40:19.122242 kubelet[2457]: I1112 22:40:19.121716 2457 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 22:40:19.122242 kubelet[2457]: I1112 22:40:19.121795 2457 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 22:40:19.122364 kubelet[2457]: W1112 22:40:19.122233 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:19.122364 kubelet[2457]: E1112 22:40:19.122277 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:19.122571 kubelet[2457]: E1112 22:40:19.122526 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms" Nov 12 22:40:19.123502 kubelet[2457]: E1112 22:40:19.123479 2457 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.180759b5cd5eb2cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 22:40:19.116733135 +0000 UTC m=+1.079046654,LastTimestamp:2024-11-12 22:40:19.116733135 +0000 UTC m=+1.079046654,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 22:40:19.124052 kubelet[2457]: I1112 22:40:19.124034 2457 factory.go:221] Registration of the systemd container factory successfully Nov 12 22:40:19.124360 kubelet[2457]: I1112 22:40:19.124337 2457 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": 
dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 22:40:19.125141 kubelet[2457]: E1112 22:40:19.125108 2457 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 22:40:19.125756 kubelet[2457]: I1112 22:40:19.125458 2457 factory.go:221] Registration of the containerd container factory successfully Nov 12 22:40:19.149781 kubelet[2457]: I1112 22:40:19.149050 2457 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 22:40:19.151806 kubelet[2457]: I1112 22:40:19.151756 2457 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 22:40:19.151806 kubelet[2457]: I1112 22:40:19.151808 2457 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:40:19.151933 kubelet[2457]: I1112 22:40:19.151838 2457 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 22:40:19.151974 kubelet[2457]: E1112 22:40:19.151953 2457 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:40:19.154879 kubelet[2457]: W1112 22:40:19.153731 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:19.154879 kubelet[2457]: E1112 22:40:19.153795 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:19.165499 kubelet[2457]: I1112 22:40:19.165127 2457 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 22:40:19.165499 kubelet[2457]: I1112 22:40:19.165156 2457 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 22:40:19.165499 kubelet[2457]: I1112 22:40:19.165183 2457 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:40:19.223927 kubelet[2457]: I1112 22:40:19.223853 2457 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:40:19.224306 kubelet[2457]: E1112 22:40:19.224282 2457 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Nov 12 22:40:19.252942 kubelet[2457]: E1112 22:40:19.252836 2457 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 22:40:19.324771 kubelet[2457]: E1112 22:40:19.324132 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms" Nov 12 22:40:19.426418 kubelet[2457]: I1112 22:40:19.426362 2457 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:40:19.426741 kubelet[2457]: E1112 22:40:19.426722 2457 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Nov 12 22:40:19.434740 kubelet[2457]: I1112 
22:40:19.434684 2457 policy_none.go:49] "None policy: Start" Nov 12 22:40:19.435378 kubelet[2457]: I1112 22:40:19.435353 2457 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:40:19.435438 kubelet[2457]: I1112 22:40:19.435384 2457 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:40:19.444763 kubelet[2457]: I1112 22:40:19.444718 2457 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 22:40:19.445110 kubelet[2457]: I1112 22:40:19.445065 2457 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 22:40:19.446756 kubelet[2457]: E1112 22:40:19.446731 2457 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 22:40:19.453960 kubelet[2457]: I1112 22:40:19.453934 2457 topology_manager.go:215] "Topology Admit Handler" podUID="9a668c9f9a51e006233ae02f7f7de2b9" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 22:40:19.455054 kubelet[2457]: I1112 22:40:19.455034 2457 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 22:40:19.456022 kubelet[2457]: I1112 22:40:19.455988 2457 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 22:40:19.523580 kubelet[2457]: I1112 22:40:19.523533 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 22:40:19.523580 kubelet[2457]: I1112 22:40:19.523595 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:40:19.523817 kubelet[2457]: I1112 22:40:19.523624 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:40:19.523817 kubelet[2457]: I1112 22:40:19.523648 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a668c9f9a51e006233ae02f7f7de2b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9a668c9f9a51e006233ae02f7f7de2b9\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:40:19.523817 kubelet[2457]: I1112 22:40:19.523690 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a668c9f9a51e006233ae02f7f7de2b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9a668c9f9a51e006233ae02f7f7de2b9\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:40:19.523817 kubelet[2457]: I1112 22:40:19.523750 2457 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a668c9f9a51e006233ae02f7f7de2b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9a668c9f9a51e006233ae02f7f7de2b9\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:40:19.523817 kubelet[2457]: I1112 22:40:19.523809 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:40:19.523989 kubelet[2457]: I1112 22:40:19.523847 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:40:19.523989 kubelet[2457]: I1112 22:40:19.523872 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:40:19.725536 kubelet[2457]: E1112 22:40:19.725367 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" Nov 12 22:40:19.760768 kubelet[2457]: E1112 22:40:19.760688 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:19.761374 kubelet[2457]: E1112 22:40:19.761351 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:19.761826 containerd[1613]: time="2024-11-12T22:40:19.761730586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9a668c9f9a51e006233ae02f7f7de2b9,Namespace:kube-system,Attempt:0,}" Nov 12 22:40:19.762341 containerd[1613]: time="2024-11-12T22:40:19.761757416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}" Nov 12 22:40:19.763940 kubelet[2457]: E1112 22:40:19.763897 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:19.764549 containerd[1613]: time="2024-11-12T22:40:19.764492707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}" Nov 12 22:40:19.828750 kubelet[2457]: I1112 22:40:19.828698 2457 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:40:19.829134 kubelet[2457]: E1112 22:40:19.829112 2457 kubelet_node_status.go:96] "Unable to register node with API server" 
err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Nov 12 22:40:19.920020 kubelet[2457]: W1112 22:40:19.919849 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:19.920020 kubelet[2457]: E1112 22:40:19.919968 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:20.084840 kubelet[2457]: W1112 22:40:20.084741 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:20.084840 kubelet[2457]: E1112 22:40:20.084842 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:20.109224 kubelet[2457]: W1112 22:40:20.109141 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:20.109224 kubelet[2457]: E1112 22:40:20.109218 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:20.109771 kubelet[2457]: W1112 22:40:20.109517 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:20.109771 kubelet[2457]: E1112 22:40:20.109562 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Nov 12 22:40:20.292864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3409066201.mount: Deactivated successfully. 
Nov 12 22:40:20.297939 containerd[1613]: time="2024-11-12T22:40:20.297863064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:40:20.300753 containerd[1613]: time="2024-11-12T22:40:20.300679799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 22:40:20.302746 containerd[1613]: time="2024-11-12T22:40:20.302681099Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:40:20.304218 containerd[1613]: time="2024-11-12T22:40:20.304159302Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 22:40:20.304981 containerd[1613]: time="2024-11-12T22:40:20.304928817Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:40:20.307127 containerd[1613]: time="2024-11-12T22:40:20.307072323Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:40:20.307592 containerd[1613]: time="2024-11-12T22:40:20.307545256Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 22:40:20.309533 containerd[1613]: time="2024-11-12T22:40:20.309484879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 22:40:20.313648 containerd[1613]: time="2024-11-12T22:40:20.313577104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 551.705112ms" Nov 12 22:40:20.315440 containerd[1613]: time="2024-11-12T22:40:20.315387678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 553.136234ms" Nov 12 22:40:20.315732 containerd[1613]: time="2024-11-12T22:40:20.315678019Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 551.040639ms" Nov 12 22:40:20.447637 containerd[1613]: time="2024-11-12T22:40:20.447432435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:40:20.447637 containerd[1613]: time="2024-11-12T22:40:20.447507580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:40:20.447637 containerd[1613]: time="2024-11-12T22:40:20.447522008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:40:20.449448 containerd[1613]: time="2024-11-12T22:40:20.447648362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:40:20.450537 containerd[1613]: time="2024-11-12T22:40:20.450300660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:40:20.450537 containerd[1613]: time="2024-11-12T22:40:20.450364303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:40:20.450537 containerd[1613]: time="2024-11-12T22:40:20.450385333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:40:20.450652 containerd[1613]: time="2024-11-12T22:40:20.450552386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:40:20.455927 containerd[1613]: time="2024-11-12T22:40:20.455106232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:40:20.455927 containerd[1613]: time="2024-11-12T22:40:20.455162400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:40:20.455927 containerd[1613]: time="2024-11-12T22:40:20.455177049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:40:20.455927 containerd[1613]: time="2024-11-12T22:40:20.455274967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:40:20.512185 containerd[1613]: time="2024-11-12T22:40:20.512136876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aa165e2a444e13dac71a7742087c206edc24bd1990baae2157d08066a651cf9\"" Nov 12 22:40:20.513326 kubelet[2457]: E1112 22:40:20.513305 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:20.518014 containerd[1613]: time="2024-11-12T22:40:20.517975842Z" level=info msg="CreateContainer within sandbox \"5aa165e2a444e13dac71a7742087c206edc24bd1990baae2157d08066a651cf9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 22:40:20.526184 containerd[1613]: time="2024-11-12T22:40:20.526127441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"48363fa99e9d359aa9a55272ca573eb606f9bff03608e9a1aeb13d47088b1e02\"" Nov 12 22:40:20.527347 kubelet[2457]: E1112 22:40:20.527046 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="1.6s" Nov 12 22:40:20.527686 kubelet[2457]: E1112 22:40:20.527533 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:20.528397 containerd[1613]: time="2024-11-12T22:40:20.528353316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9a668c9f9a51e006233ae02f7f7de2b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c623246bd7dec07d9f63060a389f83482fa3a00886ddf6e30673e3b6548bdee7\"" Nov 12 22:40:20.529164 kubelet[2457]: E1112 22:40:20.529081 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:20.532350 containerd[1613]: time="2024-11-12T22:40:20.532145374Z" level=info msg="CreateContainer within sandbox \"48363fa99e9d359aa9a55272ca573eb606f9bff03608e9a1aeb13d47088b1e02\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 22:40:20.532350 containerd[1613]: time="2024-11-12T22:40:20.532211671Z" level=info msg="CreateContainer within sandbox \"c623246bd7dec07d9f63060a389f83482fa3a00886ddf6e30673e3b6548bdee7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 22:40:20.547858 containerd[1613]: time="2024-11-12T22:40:20.547791293Z" level=info msg="CreateContainer within sandbox \"5aa165e2a444e13dac71a7742087c206edc24bd1990baae2157d08066a651cf9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e4def0917ee5f153dcff0c5edab3782ec8626d72b2b88c9abcf9048acd450049\"" Nov 12 22:40:20.548638 containerd[1613]: time="2024-11-12T22:40:20.548603199Z" level=info msg="StartContainer for \"e4def0917ee5f153dcff0c5edab3782ec8626d72b2b88c9abcf9048acd450049\"" Nov 12 22:40:20.558979 containerd[1613]: time="2024-11-12T22:40:20.558950015Z" level=info msg="CreateContainer within sandbox 
\"c623246bd7dec07d9f63060a389f83482fa3a00886ddf6e30673e3b6548bdee7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4d8cf3286ac7016a73c2a31c6bdb6bd7d56397d844e99a2f4f0293cf6d6ed8e7\"" Nov 12 22:40:20.559518 containerd[1613]: time="2024-11-12T22:40:20.559497190Z" level=info msg="StartContainer for \"4d8cf3286ac7016a73c2a31c6bdb6bd7d56397d844e99a2f4f0293cf6d6ed8e7\"" Nov 12 22:40:20.564440 containerd[1613]: time="2024-11-12T22:40:20.564346077Z" level=info msg="CreateContainer within sandbox \"48363fa99e9d359aa9a55272ca573eb606f9bff03608e9a1aeb13d47088b1e02\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6911e58b7e7932f91a60d8ba72c85a845f398391be60b43330ef7dd66f7d34e5\"" Nov 12 22:40:20.564857 containerd[1613]: time="2024-11-12T22:40:20.564830561Z" level=info msg="StartContainer for \"6911e58b7e7932f91a60d8ba72c85a845f398391be60b43330ef7dd66f7d34e5\"" Nov 12 22:40:20.633008 kubelet[2457]: I1112 22:40:20.632967 2457 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:40:20.633688 kubelet[2457]: E1112 22:40:20.633644 2457 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Nov 12 22:40:20.638298 containerd[1613]: time="2024-11-12T22:40:20.638239217Z" level=info msg="StartContainer for \"e4def0917ee5f153dcff0c5edab3782ec8626d72b2b88c9abcf9048acd450049\" returns successfully" Nov 12 22:40:20.672084 containerd[1613]: time="2024-11-12T22:40:20.671866623Z" level=info msg="StartContainer for \"6911e58b7e7932f91a60d8ba72c85a845f398391be60b43330ef7dd66f7d34e5\" returns successfully" Nov 12 22:40:20.672084 containerd[1613]: time="2024-11-12T22:40:20.671871051Z" level=info msg="StartContainer for \"4d8cf3286ac7016a73c2a31c6bdb6bd7d56397d844e99a2f4f0293cf6d6ed8e7\" returns successfully" Nov 12 22:40:21.162490 kubelet[2457]: E1112 22:40:21.162408 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:21.166898 kubelet[2457]: E1112 22:40:21.166385 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:21.167891 kubelet[2457]: E1112 22:40:21.167802 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:22.170037 kubelet[2457]: E1112 22:40:22.169996 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:22.236476 kubelet[2457]: I1112 22:40:22.236436 2457 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:40:22.675051 kubelet[2457]: E1112 22:40:22.674997 2457 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 22:40:22.979409 kubelet[2457]: I1112 22:40:22.979083 2457 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 22:40:23.110182 kubelet[2457]: I1112 22:40:23.110091 2457 apiserver.go:52] "Watching apiserver" Nov 12 22:40:23.122181 kubelet[2457]: I1112 22:40:23.122124 2457 
desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 22:40:23.176952 kubelet[2457]: E1112 22:40:23.176871 2457 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 12 22:40:23.177778 kubelet[2457]: E1112 22:40:23.177649 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:25.647572 systemd[1]: Reloading requested from client PID 2734 ('systemctl') (unit session-7.scope)... Nov 12 22:40:25.647590 systemd[1]: Reloading... Nov 12 22:40:25.707986 zram_generator::config[2773]: No configuration found. Nov 12 22:40:25.832249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 22:40:25.911773 systemd[1]: Reloading finished in 263 ms. Nov 12 22:40:25.949423 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:40:25.968223 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 22:40:25.968620 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:40:25.976110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 22:40:26.126247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 22:40:26.132243 (kubelet)[2828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 22:40:26.186548 kubelet[2828]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:40:26.186548 kubelet[2828]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 22:40:26.186548 kubelet[2828]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 22:40:26.186548 kubelet[2828]: I1112 22:40:26.186358 2828 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 22:40:26.192303 kubelet[2828]: I1112 22:40:26.192266 2828 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 22:40:26.192303 kubelet[2828]: I1112 22:40:26.192297 2828 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 22:40:26.192681 kubelet[2828]: I1112 22:40:26.192636 2828 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 22:40:26.194572 kubelet[2828]: I1112 22:40:26.194550 2828 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
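The recurring dns.go:153 "Nameserver limits exceeded" warnings come from the kubelet keeping only the first three nameservers when it builds pod resolv.conf; the applied line in the log is "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch of that truncation, assuming a resolv.conf-style input; the limit of 3 matches the glibc resolver behaviour the warning refers to, the parsing is simplified, and the fourth nameserver in the example is made up, since the journal only shows the three entries that were kept.

MAX_NAMESERVERS = 3  # the resolver only uses the first three nameserver lines

def applied_nameservers(resolv_conf_text, limit=MAX_NAMESERVERS):
    """Return the nameservers that would actually be applied, mirroring the
    kubelet's dns.go warning when more than `limit` are configured."""
    servers = [line.split()[1]
               for line in resolv_conf_text.splitlines()
               if line.strip().startswith("nameserver") and len(line.split()) > 1]
    if len(servers) > limit:
        print(f"Nameserver limits exceeded: keeping {servers[:limit]}, dropping {servers[limit:]}")
    return servers[:limit]

# The applied line in the log: "1.1.1.1 1.0.0.1 8.8.8.8" (fourth entry is hypothetical)
example = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
print(" ".join(applied_nameservers(example)))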
Nov 12 22:40:26.197557 kubelet[2828]: I1112 22:40:26.197136 2828 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 22:40:26.208509 kubelet[2828]: I1112 22:40:26.208480 2828 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 22:40:26.209177 kubelet[2828]: I1112 22:40:26.209141 2828 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 22:40:26.209381 kubelet[2828]: I1112 22:40:26.209339 2828 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 22:40:26.209494 kubelet[2828]: I1112 22:40:26.209385 2828 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 22:40:26.209494 kubelet[2828]: I1112 22:40:26.209400 2828 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 22:40:26.209494 kubelet[2828]: I1112 22:40:26.209445 2828 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:40:26.209587 kubelet[2828]: I1112 22:40:26.209566 2828 kubelet.go:396] "Attempting to sync node with API server" Nov 12 22:40:26.209587 kubelet[2828]: I1112 22:40:26.209586 2828 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 22:40:26.209649 kubelet[2828]: I1112 22:40:26.209619 2828 kubelet.go:312] "Adding apiserver pod source" Nov 12 22:40:26.209649 kubelet[2828]: I1112 22:40:26.209641 2828 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 22:40:26.211940 kubelet[2828]: I1112 22:40:26.211084 2828 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 12 22:40:26.211940 kubelet[2828]: I1112 22:40:26.211403 2828 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 22:40:26.211198 sudo[2843]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 12 22:40:26.212516 kubelet[2828]: I1112 22:40:26.212006 2828 server.go:1256] "Started kubelet" Nov 12 22:40:26.211673 sudo[2843]: pam_unix(sudo:session): session 
opened for user root(uid=0) by core(uid=0) Nov 12 22:40:26.214967 kubelet[2828]: I1112 22:40:26.213374 2828 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 22:40:26.216101 kubelet[2828]: I1112 22:40:26.216014 2828 server.go:461] "Adding debug handlers to kubelet server" Nov 12 22:40:26.217074 kubelet[2828]: I1112 22:40:26.217020 2828 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 22:40:26.217453 kubelet[2828]: I1112 22:40:26.217429 2828 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 22:40:26.222548 kubelet[2828]: I1112 22:40:26.222508 2828 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 22:40:26.222704 kubelet[2828]: E1112 22:40:26.222685 2828 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 22:40:26.222991 kubelet[2828]: I1112 22:40:26.222976 2828 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 22:40:26.223220 kubelet[2828]: I1112 22:40:26.223202 2828 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 22:40:26.223597 kubelet[2828]: I1112 22:40:26.223481 2828 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 22:40:26.225145 kubelet[2828]: I1112 22:40:26.225124 2828 factory.go:221] Registration of the systemd container factory successfully Nov 12 22:40:26.225389 kubelet[2828]: I1112 22:40:26.225352 2828 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 22:40:26.228113 kubelet[2828]: I1112 22:40:26.228078 2828 factory.go:221] Registration of the containerd container factory successfully Nov 12 22:40:26.237367 kubelet[2828]: I1112 22:40:26.237326 2828 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 22:40:26.240138 kubelet[2828]: I1112 22:40:26.239025 2828 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 22:40:26.240138 kubelet[2828]: I1112 22:40:26.239073 2828 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 22:40:26.240138 kubelet[2828]: I1112 22:40:26.239098 2828 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 22:40:26.240138 kubelet[2828]: E1112 22:40:26.239158 2828 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 22:40:26.288142 kubelet[2828]: I1112 22:40:26.288107 2828 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 22:40:26.288142 kubelet[2828]: I1112 22:40:26.288137 2828 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 22:40:26.288142 kubelet[2828]: I1112 22:40:26.288156 2828 state_mem.go:36] "Initialized new in-memory state store" Nov 12 22:40:26.288732 kubelet[2828]: I1112 22:40:26.288465 2828 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 22:40:26.288732 kubelet[2828]: I1112 22:40:26.288497 2828 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 22:40:26.288732 kubelet[2828]: I1112 22:40:26.288508 2828 policy_none.go:49] "None policy: Start" Nov 12 22:40:26.291746 kubelet[2828]: I1112 22:40:26.291616 2828 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 22:40:26.291746 kubelet[2828]: I1112 22:40:26.291674 2828 state_mem.go:35] "Initializing new in-memory state store" Nov 12 22:40:26.292254 kubelet[2828]: I1112 22:40:26.292089 2828 state_mem.go:75] "Updated machine memory state" Nov 12 22:40:26.294899 kubelet[2828]: I1112 22:40:26.294277 2828 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 22:40:26.294899 kubelet[2828]: I1112 22:40:26.294571 2828 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 22:40:26.328438 kubelet[2828]: I1112 22:40:26.328398 2828 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 22:40:26.335925 kubelet[2828]: I1112 22:40:26.335877 2828 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 22:40:26.336033 kubelet[2828]: I1112 22:40:26.336001 2828 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 22:40:26.340109 kubelet[2828]: I1112 22:40:26.340081 2828 topology_manager.go:215] "Topology Admit Handler" podUID="9a668c9f9a51e006233ae02f7f7de2b9" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 22:40:26.340227 kubelet[2828]: I1112 22:40:26.340178 2828 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 22:40:26.340227 kubelet[2828]: I1112 22:40:26.340221 2828 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 22:40:26.524810 kubelet[2828]: I1112 22:40:26.524756 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:40:26.524810 kubelet[2828]: I1112 22:40:26.524821 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:40:26.525024 kubelet[2828]: I1112 22:40:26.524851 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:40:26.525024 kubelet[2828]: I1112 22:40:26.524877 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a668c9f9a51e006233ae02f7f7de2b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9a668c9f9a51e006233ae02f7f7de2b9\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:40:26.525024 kubelet[2828]: I1112 22:40:26.524927 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a668c9f9a51e006233ae02f7f7de2b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9a668c9f9a51e006233ae02f7f7de2b9\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:40:26.525024 kubelet[2828]: I1112 22:40:26.524955 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:40:26.525024 kubelet[2828]: I1112 22:40:26.524978 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 22:40:26.525168 kubelet[2828]: I1112 22:40:26.525006 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 22:40:26.525168 kubelet[2828]: I1112 22:40:26.525030 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a668c9f9a51e006233ae02f7f7de2b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9a668c9f9a51e006233ae02f7f7de2b9\") " pod="kube-system/kube-apiserver-localhost" Nov 12 22:40:26.648453 kubelet[2828]: E1112 22:40:26.648407 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:26.648986 kubelet[2828]: E1112 22:40:26.648952 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:26.649384 kubelet[2828]: E1112 22:40:26.649363 
2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:26.767728 sudo[2843]: pam_unix(sudo:session): session closed for user root Nov 12 22:40:27.210577 kubelet[2828]: I1112 22:40:27.210522 2828 apiserver.go:52] "Watching apiserver" Nov 12 22:40:27.224538 kubelet[2828]: I1112 22:40:27.224295 2828 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 22:40:27.256426 kubelet[2828]: E1112 22:40:27.256377 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:27.257638 kubelet[2828]: E1112 22:40:27.257585 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:27.258206 kubelet[2828]: E1112 22:40:27.258124 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:27.274261 kubelet[2828]: I1112 22:40:27.274202 2828 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.274141502 podStartE2EDuration="1.274141502s" podCreationTimestamp="2024-11-12 22:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:40:27.27413553 +0000 UTC m=+1.136934990" watchObservedRunningTime="2024-11-12 22:40:27.274141502 +0000 UTC m=+1.136940962" Nov 12 22:40:27.290623 kubelet[2828]: I1112 22:40:27.290560 2828 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.290515636 podStartE2EDuration="1.290515636s" podCreationTimestamp="2024-11-12 22:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:40:27.282693656 +0000 UTC m=+1.145493116" watchObservedRunningTime="2024-11-12 22:40:27.290515636 +0000 UTC m=+1.153315096" Nov 12 22:40:27.290812 kubelet[2828]: I1112 22:40:27.290699 2828 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.290681011 podStartE2EDuration="1.290681011s" podCreationTimestamp="2024-11-12 22:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:40:27.29050254 +0000 UTC m=+1.153302011" watchObservedRunningTime="2024-11-12 22:40:27.290681011 +0000 UTC m=+1.153480471" Nov 12 22:40:28.108587 sudo[1803]: pam_unix(sudo:session): session closed for user root Nov 12 22:40:28.110628 sshd[1802]: Connection closed by 10.0.0.1 port 51364 Nov 12 22:40:28.111595 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Nov 12 22:40:28.116053 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:51364.service: Deactivated successfully. Nov 12 22:40:28.118631 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 22:40:28.119318 systemd-logind[1591]: Session 7 logged out. Waiting for processes to exit. Nov 12 22:40:28.120445 systemd-logind[1591]: Removed session 7. 
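The pod_startup_latency_tracker entries above record how long each static pod took from creation to observed-running (podStartSLOduration of roughly 1.27s to 1.29s for the scheduler, controller-manager, and apiserver). A small sketch that extracts the pod name and podStartSLOduration from such journal lines; the regex is an assumption about the line shape, not a kubelet API, and the sample line is abbreviated.

import re

# Pattern keyed to the fields visible in the pod_startup_latency_tracker entries:
#   pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.274141502
PATTERN = re.compile(r'pod="(?P<pod>[^"]+)".*?podStartSLOduration=(?P<slo>[0-9.]+)')

def pod_startup_durations(journal_lines):
    """Yield (pod, seconds) pairs from 'Observed pod startup duration' entries."""
    for line in journal_lines:
        if "Observed pod startup duration" not in line:
            continue
        m = PATTERN.search(line)
        if m:
            yield m.group("pod"), float(m.group("slo"))

sample = ['... "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.274141502 podStartE2EDuration="1.274141502s" ...']
for pod, seconds in pod_startup_durations(sample):
    print(f"{pod}: {seconds:.3f}s")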
Nov 12 22:40:28.258073 kubelet[2828]: E1112 22:40:28.258006 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:29.259636 kubelet[2828]: E1112 22:40:29.259595 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:31.317848 kubelet[2828]: E1112 22:40:31.317802 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:32.180973 kubelet[2828]: E1112 22:40:32.180900 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:32.263463 kubelet[2828]: E1112 22:40:32.263415 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:32.263608 kubelet[2828]: E1112 22:40:32.263557 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:33.483419 update_engine[1593]: I20241112 22:40:33.483302 1593 update_attempter.cc:509] Updating boot flags... Nov 12 22:40:33.515634 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2912) Nov 12 22:40:33.547026 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2915) Nov 12 22:40:33.584069 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2915) Nov 12 22:40:38.519598 kubelet[2828]: E1112 22:40:38.519529 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:39.271703 kubelet[2828]: E1112 22:40:39.271676 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:39.558962 kubelet[2828]: I1112 22:40:39.558819 2828 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 22:40:39.559394 containerd[1613]: time="2024-11-12T22:40:39.559209924Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
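The kuberuntime_manager entry above shows the runtime config being updated with pod CIDR 192.168.0.0/24 once the node object exists, while containerd notes it is still waiting for a CNI config to be dropped in. A quick illustrative sketch with the standard-library ipaddress module showing what that per-node allocation provides; the printed labels are arbitrary.

import ipaddress

# Pod CIDR pushed to the runtime in the kuberuntime_manager entry above.
pod_cidr = ipaddress.ip_network("192.168.0.0/24")

print(f"network:        {pod_cidr.network_address}")
print(f"broadcast:      {pod_cidr.broadcast_address}")
print(f"usable pod IPs: {pod_cidr.num_addresses - 2}")   # 254 addresses available to pods on this node
print(f"first few:      {[str(ip) for ip in list(pod_cidr.hosts())[:3]]}")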
Nov 12 22:40:39.559654 kubelet[2828]: I1112 22:40:39.559570 2828 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 22:40:40.432280 kubelet[2828]: I1112 22:40:40.432216 2828 topology_manager.go:215] "Topology Admit Handler" podUID="ceb3fbe8-0f18-4231-9558-82b9c527fcbf" podNamespace="kube-system" podName="cilium-operator-5cc964979-svncl" Nov 12 22:40:40.506462 kubelet[2828]: I1112 22:40:40.506407 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ceb3fbe8-0f18-4231-9558-82b9c527fcbf-cilium-config-path\") pod \"cilium-operator-5cc964979-svncl\" (UID: \"ceb3fbe8-0f18-4231-9558-82b9c527fcbf\") " pod="kube-system/cilium-operator-5cc964979-svncl" Nov 12 22:40:40.506462 kubelet[2828]: I1112 22:40:40.506461 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swtls\" (UniqueName: \"kubernetes.io/projected/ceb3fbe8-0f18-4231-9558-82b9c527fcbf-kube-api-access-swtls\") pod \"cilium-operator-5cc964979-svncl\" (UID: \"ceb3fbe8-0f18-4231-9558-82b9c527fcbf\") " pod="kube-system/cilium-operator-5cc964979-svncl" Nov 12 22:40:40.587784 kubelet[2828]: I1112 22:40:40.587743 2828 topology_manager.go:215] "Topology Admit Handler" podUID="87ae3f59-1e0f-4254-a5ee-695c4bf0790a" podNamespace="kube-system" podName="kube-proxy-gb67r" Nov 12 22:40:40.590313 kubelet[2828]: I1112 22:40:40.590269 2828 topology_manager.go:215] "Topology Admit Handler" podUID="fa30da42-2631-483e-9b6a-287561cfd681" podNamespace="kube-system" podName="cilium-ckcpv" Nov 12 22:40:40.606997 kubelet[2828]: I1112 22:40:40.606942 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-cilium-cgroup\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.606997 kubelet[2828]: I1112 22:40:40.606989 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-hostproc\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.606997 kubelet[2828]: I1112 22:40:40.607011 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87ae3f59-1e0f-4254-a5ee-695c4bf0790a-lib-modules\") pod \"kube-proxy-gb67r\" (UID: \"87ae3f59-1e0f-4254-a5ee-695c4bf0790a\") " pod="kube-system/kube-proxy-gb67r" Nov 12 22:40:40.607193 kubelet[2828]: I1112 22:40:40.607032 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-host-proc-sys-net\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.607193 kubelet[2828]: I1112 22:40:40.607051 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-cilium-run\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.607193 
kubelet[2828]: I1112 22:40:40.607072 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-host-proc-sys-kernel\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.607193 kubelet[2828]: I1112 22:40:40.607105 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87ae3f59-1e0f-4254-a5ee-695c4bf0790a-xtables-lock\") pod \"kube-proxy-gb67r\" (UID: \"87ae3f59-1e0f-4254-a5ee-695c4bf0790a\") " pod="kube-system/kube-proxy-gb67r" Nov 12 22:40:40.607193 kubelet[2828]: I1112 22:40:40.607123 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-etc-cni-netd\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.607193 kubelet[2828]: I1112 22:40:40.607149 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa30da42-2631-483e-9b6a-287561cfd681-hubble-tls\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.607385 kubelet[2828]: I1112 22:40:40.607200 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-cni-path\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.607385 kubelet[2828]: I1112 22:40:40.607220 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-lib-modules\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.607385 kubelet[2828]: I1112 22:40:40.607238 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-xtables-lock\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.607385 kubelet[2828]: I1112 22:40:40.607260 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa30da42-2631-483e-9b6a-287561cfd681-clustermesh-secrets\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.607385 kubelet[2828]: I1112 22:40:40.607281 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ntmt\" (UniqueName: \"kubernetes.io/projected/87ae3f59-1e0f-4254-a5ee-695c4bf0790a-kube-api-access-4ntmt\") pod \"kube-proxy-gb67r\" (UID: \"87ae3f59-1e0f-4254-a5ee-695c4bf0790a\") " pod="kube-system/kube-proxy-gb67r" Nov 12 22:40:40.607385 kubelet[2828]: I1112 22:40:40.607297 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-bpf-maps\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.607548 kubelet[2828]: I1112 22:40:40.607327 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzbtf\" (UniqueName: \"kubernetes.io/projected/fa30da42-2631-483e-9b6a-287561cfd681-kube-api-access-bzbtf\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.607548 kubelet[2828]: I1112 22:40:40.607344 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/87ae3f59-1e0f-4254-a5ee-695c4bf0790a-kube-proxy\") pod \"kube-proxy-gb67r\" (UID: \"87ae3f59-1e0f-4254-a5ee-695c4bf0790a\") " pod="kube-system/kube-proxy-gb67r" Nov 12 22:40:40.607548 kubelet[2828]: I1112 22:40:40.607363 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa30da42-2631-483e-9b6a-287561cfd681-cilium-config-path\") pod \"cilium-ckcpv\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " pod="kube-system/cilium-ckcpv" Nov 12 22:40:40.744399 kubelet[2828]: E1112 22:40:40.744350 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:40.745015 containerd[1613]: time="2024-11-12T22:40:40.744981019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-svncl,Uid:ceb3fbe8-0f18-4231-9558-82b9c527fcbf,Namespace:kube-system,Attempt:0,}" Nov 12 22:40:40.770765 containerd[1613]: time="2024-11-12T22:40:40.770666909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:40:40.770765 containerd[1613]: time="2024-11-12T22:40:40.770739416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:40:40.770765 containerd[1613]: time="2024-11-12T22:40:40.770757991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:40:40.771605 containerd[1613]: time="2024-11-12T22:40:40.771542775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:40:40.834655 containerd[1613]: time="2024-11-12T22:40:40.834598694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-svncl,Uid:ceb3fbe8-0f18-4231-9558-82b9c527fcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\"" Nov 12 22:40:40.835317 kubelet[2828]: E1112 22:40:40.835294 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:40.836543 containerd[1613]: time="2024-11-12T22:40:40.836388798Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 12 22:40:40.896775 kubelet[2828]: E1112 22:40:40.896736 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:40.897181 containerd[1613]: time="2024-11-12T22:40:40.897140220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gb67r,Uid:87ae3f59-1e0f-4254-a5ee-695c4bf0790a,Namespace:kube-system,Attempt:0,}" Nov 12 22:40:40.902011 kubelet[2828]: E1112 22:40:40.901988 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:40.902696 containerd[1613]: time="2024-11-12T22:40:40.902665756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ckcpv,Uid:fa30da42-2631-483e-9b6a-287561cfd681,Namespace:kube-system,Attempt:0,}" Nov 12 22:40:40.927224 containerd[1613]: time="2024-11-12T22:40:40.927094749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:40:40.927224 containerd[1613]: time="2024-11-12T22:40:40.927184379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:40:40.927412 containerd[1613]: time="2024-11-12T22:40:40.927246316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:40:40.927458 containerd[1613]: time="2024-11-12T22:40:40.927404185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:40:40.932399 containerd[1613]: time="2024-11-12T22:40:40.932268761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:40:40.932399 containerd[1613]: time="2024-11-12T22:40:40.932336349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:40:40.932399 containerd[1613]: time="2024-11-12T22:40:40.932349734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:40:40.932569 containerd[1613]: time="2024-11-12T22:40:40.932443130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:40:40.980167 containerd[1613]: time="2024-11-12T22:40:40.980080480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ckcpv,Uid:fa30da42-2631-483e-9b6a-287561cfd681,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\"" Nov 12 22:40:40.980783 kubelet[2828]: E1112 22:40:40.980747 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:40.984230 containerd[1613]: time="2024-11-12T22:40:40.984066205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gb67r,Uid:87ae3f59-1e0f-4254-a5ee-695c4bf0790a,Namespace:kube-system,Attempt:0,} returns sandbox id \"df98ef5ddc4f41bf8af2f3ed413c382e96ef054a314e45659547d77a94f423ad\"" Nov 12 22:40:40.984946 kubelet[2828]: E1112 22:40:40.984893 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:40.986623 containerd[1613]: time="2024-11-12T22:40:40.986588103Z" level=info msg="CreateContainer within sandbox \"df98ef5ddc4f41bf8af2f3ed413c382e96ef054a314e45659547d77a94f423ad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 22:40:41.006498 containerd[1613]: time="2024-11-12T22:40:41.006388481Z" level=info msg="CreateContainer within sandbox \"df98ef5ddc4f41bf8af2f3ed413c382e96ef054a314e45659547d77a94f423ad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"139918da66cd5d1e0b614f5f36d4681418b9c452c6072b996f52797f694cdf9f\"" Nov 12 22:40:41.007094 containerd[1613]: time="2024-11-12T22:40:41.007073395Z" level=info msg="StartContainer for \"139918da66cd5d1e0b614f5f36d4681418b9c452c6072b996f52797f694cdf9f\"" Nov 12 22:40:41.072054 containerd[1613]: time="2024-11-12T22:40:41.071990683Z" level=info msg="StartContainer for \"139918da66cd5d1e0b614f5f36d4681418b9c452c6072b996f52797f694cdf9f\" returns successfully" Nov 12 22:40:41.276987 kubelet[2828]: E1112 22:40:41.276591 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:41.289152 kubelet[2828]: I1112 22:40:41.289109 2828 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gb67r" podStartSLOduration=1.289064625 podStartE2EDuration="1.289064625s" podCreationTimestamp="2024-11-12 22:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:40:41.288900074 +0000 UTC m=+15.151699534" watchObservedRunningTime="2024-11-12 22:40:41.289064625 +0000 UTC m=+15.151864085" Nov 12 22:40:42.724625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1085567725.mount: Deactivated successfully. 
Nov 12 22:40:43.193218 containerd[1613]: time="2024-11-12T22:40:43.193150722Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:43.194190 containerd[1613]: time="2024-11-12T22:40:43.194143005Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907225" Nov 12 22:40:43.195408 containerd[1613]: time="2024-11-12T22:40:43.195346247Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:43.197387 containerd[1613]: time="2024-11-12T22:40:43.197351232Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.360906258s" Nov 12 22:40:43.197456 containerd[1613]: time="2024-11-12T22:40:43.197388973Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 12 22:40:43.198136 containerd[1613]: time="2024-11-12T22:40:43.198100756Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 12 22:40:43.199424 containerd[1613]: time="2024-11-12T22:40:43.199380473Z" level=info msg="CreateContainer within sandbox \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 12 22:40:43.213346 containerd[1613]: time="2024-11-12T22:40:43.213298108Z" level=info msg="CreateContainer within sandbox \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\"" Nov 12 22:40:43.214175 containerd[1613]: time="2024-11-12T22:40:43.214121693Z" level=info msg="StartContainer for \"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\"" Nov 12 22:40:43.272675 containerd[1613]: time="2024-11-12T22:40:43.272505517Z" level=info msg="StartContainer for \"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\" returns successfully" Nov 12 22:40:43.284937 kubelet[2828]: E1112 22:40:43.284887 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:43.295962 kubelet[2828]: I1112 22:40:43.295887 2828 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-svncl" podStartSLOduration=0.933736741 podStartE2EDuration="3.295800656s" podCreationTimestamp="2024-11-12 22:40:40 +0000 UTC" firstStartedPulling="2024-11-12 22:40:40.835806156 +0000 UTC m=+14.698605616" lastFinishedPulling="2024-11-12 22:40:43.197870071 +0000 UTC m=+17.060669531" observedRunningTime="2024-11-12 
22:40:43.295486794 +0000 UTC m=+17.158286254" watchObservedRunningTime="2024-11-12 22:40:43.295800656 +0000 UTC m=+17.158600116" Nov 12 22:40:44.287217 kubelet[2828]: E1112 22:40:44.287183 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:48.065746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924300543.mount: Deactivated successfully. Nov 12 22:40:50.132579 containerd[1613]: time="2024-11-12T22:40:50.132513619Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:50.133349 containerd[1613]: time="2024-11-12T22:40:50.133308237Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735355" Nov 12 22:40:50.134470 containerd[1613]: time="2024-11-12T22:40:50.134441049Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 22:40:50.136228 containerd[1613]: time="2024-11-12T22:40:50.136200413Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.938065342s" Nov 12 22:40:50.136291 containerd[1613]: time="2024-11-12T22:40:50.136229227Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 12 22:40:50.137953 containerd[1613]: time="2024-11-12T22:40:50.137901277Z" level=info msg="CreateContainer within sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 22:40:50.149450 containerd[1613]: time="2024-11-12T22:40:50.149402810Z" level=info msg="CreateContainer within sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a\"" Nov 12 22:40:50.150038 containerd[1613]: time="2024-11-12T22:40:50.149992540Z" level=info msg="StartContainer for \"1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a\"" Nov 12 22:40:50.208501 containerd[1613]: time="2024-11-12T22:40:50.208446117Z" level=info msg="StartContainer for \"1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a\" returns successfully" Nov 12 22:40:50.297414 kubelet[2828]: E1112 22:40:50.297373 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:50.380268 containerd[1613]: time="2024-11-12T22:40:50.380192699Z" level=info msg="shim disconnected" id=1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a namespace=k8s.io Nov 12 22:40:50.380268 containerd[1613]: time="2024-11-12T22:40:50.380258542Z" 
level=warning msg="cleaning up after shim disconnected" id=1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a namespace=k8s.io Nov 12 22:40:50.380268 containerd[1613]: time="2024-11-12T22:40:50.380267339Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:40:51.146696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a-rootfs.mount: Deactivated successfully. Nov 12 22:40:51.300359 kubelet[2828]: E1112 22:40:51.300314 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:51.303470 containerd[1613]: time="2024-11-12T22:40:51.303421244Z" level=info msg="CreateContainer within sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 22:40:51.319604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097086376.mount: Deactivated successfully. Nov 12 22:40:51.323413 containerd[1613]: time="2024-11-12T22:40:51.323368305Z" level=info msg="CreateContainer within sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621\"" Nov 12 22:40:51.324002 containerd[1613]: time="2024-11-12T22:40:51.323971591Z" level=info msg="StartContainer for \"e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621\"" Nov 12 22:40:51.378160 containerd[1613]: time="2024-11-12T22:40:51.378042806Z" level=info msg="StartContainer for \"e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621\" returns successfully" Nov 12 22:40:51.390617 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 22:40:51.391097 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:40:51.391317 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:40:51.398621 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 22:40:51.421227 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 22:40:51.421692 containerd[1613]: time="2024-11-12T22:40:51.421630271Z" level=info msg="shim disconnected" id=e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621 namespace=k8s.io Nov 12 22:40:51.421692 containerd[1613]: time="2024-11-12T22:40:51.421690124Z" level=warning msg="cleaning up after shim disconnected" id=e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621 namespace=k8s.io Nov 12 22:40:51.421815 containerd[1613]: time="2024-11-12T22:40:51.421698320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:40:52.146241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621-rootfs.mount: Deactivated successfully. 
Nov 12 22:40:52.303316 kubelet[2828]: E1112 22:40:52.303285 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:52.305739 containerd[1613]: time="2024-11-12T22:40:52.305660358Z" level=info msg="CreateContainer within sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 22:40:52.361772 containerd[1613]: time="2024-11-12T22:40:52.361708450Z" level=info msg="CreateContainer within sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1\"" Nov 12 22:40:52.362403 containerd[1613]: time="2024-11-12T22:40:52.362328909Z" level=info msg="StartContainer for \"4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1\"" Nov 12 22:40:52.433962 containerd[1613]: time="2024-11-12T22:40:52.433807932Z" level=info msg="StartContainer for \"4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1\" returns successfully" Nov 12 22:40:52.456390 containerd[1613]: time="2024-11-12T22:40:52.456324429Z" level=info msg="shim disconnected" id=4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1 namespace=k8s.io Nov 12 22:40:52.456390 containerd[1613]: time="2024-11-12T22:40:52.456382509Z" level=warning msg="cleaning up after shim disconnected" id=4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1 namespace=k8s.io Nov 12 22:40:52.456390 containerd[1613]: time="2024-11-12T22:40:52.456395032Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:40:53.146383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1-rootfs.mount: Deactivated successfully. 
Nov 12 22:40:53.306021 kubelet[2828]: E1112 22:40:53.305960 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:53.307830 containerd[1613]: time="2024-11-12T22:40:53.307786400Z" level=info msg="CreateContainer within sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 22:40:53.393224 containerd[1613]: time="2024-11-12T22:40:53.393178649Z" level=info msg="CreateContainer within sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f\"" Nov 12 22:40:53.394209 containerd[1613]: time="2024-11-12T22:40:53.394076829Z" level=info msg="StartContainer for \"0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f\"" Nov 12 22:40:53.456024 containerd[1613]: time="2024-11-12T22:40:53.455818774Z" level=info msg="StartContainer for \"0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f\" returns successfully" Nov 12 22:40:53.477986 containerd[1613]: time="2024-11-12T22:40:53.477920633Z" level=info msg="shim disconnected" id=0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f namespace=k8s.io Nov 12 22:40:53.477986 containerd[1613]: time="2024-11-12T22:40:53.477977950Z" level=warning msg="cleaning up after shim disconnected" id=0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f namespace=k8s.io Nov 12 22:40:53.477986 containerd[1613]: time="2024-11-12T22:40:53.477987428Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:40:54.146156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f-rootfs.mount: Deactivated successfully. Nov 12 22:40:54.220100 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:36502.service - OpenSSH per-connection server daemon (10.0.0.1:36502). Nov 12 22:40:54.260756 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 36502 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:40:54.262212 sshd-session[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:40:54.266204 systemd-logind[1591]: New session 8 of user core. Nov 12 22:40:54.273221 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 12 22:40:54.309445 kubelet[2828]: E1112 22:40:54.309286 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:54.311553 containerd[1613]: time="2024-11-12T22:40:54.311511541Z" level=info msg="CreateContainer within sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 22:40:54.328572 containerd[1613]: time="2024-11-12T22:40:54.328518243Z" level=info msg="CreateContainer within sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\"" Nov 12 22:40:54.329571 containerd[1613]: time="2024-11-12T22:40:54.329536940Z" level=info msg="StartContainer for \"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\"" Nov 12 22:40:54.405425 containerd[1613]: time="2024-11-12T22:40:54.405126458Z" level=info msg="StartContainer for \"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\" returns successfully" Nov 12 22:40:54.448530 sshd[3531]: Connection closed by 10.0.0.1 port 36502 Nov 12 22:40:54.450140 sshd-session[3528]: pam_unix(sshd:session): session closed for user core Nov 12 22:40:54.454847 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:36502.service: Deactivated successfully. Nov 12 22:40:54.459857 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 22:40:54.459968 systemd-logind[1591]: Session 8 logged out. Waiting for processes to exit. Nov 12 22:40:54.462336 systemd-logind[1591]: Removed session 8. Nov 12 22:40:54.535309 kubelet[2828]: I1112 22:40:54.535281 2828 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 22:40:54.552400 kubelet[2828]: I1112 22:40:54.552343 2828 topology_manager.go:215] "Topology Admit Handler" podUID="d60e7dc2-9251-4795-8dd2-5c0f8a44291f" podNamespace="kube-system" podName="coredns-76f75df574-8njfx" Nov 12 22:40:54.552792 kubelet[2828]: I1112 22:40:54.552738 2828 topology_manager.go:215] "Topology Admit Handler" podUID="6d1e75e7-4d86-461c-9fd5-18566d82128b" podNamespace="kube-system" podName="coredns-76f75df574-vdhhb" Nov 12 22:40:54.599688 kubelet[2828]: I1112 22:40:54.599638 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42zzp\" (UniqueName: \"kubernetes.io/projected/d60e7dc2-9251-4795-8dd2-5c0f8a44291f-kube-api-access-42zzp\") pod \"coredns-76f75df574-8njfx\" (UID: \"d60e7dc2-9251-4795-8dd2-5c0f8a44291f\") " pod="kube-system/coredns-76f75df574-8njfx" Nov 12 22:40:54.599688 kubelet[2828]: I1112 22:40:54.599679 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkf4q\" (UniqueName: \"kubernetes.io/projected/6d1e75e7-4d86-461c-9fd5-18566d82128b-kube-api-access-lkf4q\") pod \"coredns-76f75df574-vdhhb\" (UID: \"6d1e75e7-4d86-461c-9fd5-18566d82128b\") " pod="kube-system/coredns-76f75df574-vdhhb" Nov 12 22:40:54.599854 kubelet[2828]: I1112 22:40:54.599743 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d60e7dc2-9251-4795-8dd2-5c0f8a44291f-config-volume\") pod \"coredns-76f75df574-8njfx\" (UID: \"d60e7dc2-9251-4795-8dd2-5c0f8a44291f\") " pod="kube-system/coredns-76f75df574-8njfx" 
Nov 12 22:40:54.599854 kubelet[2828]: I1112 22:40:54.599776 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d1e75e7-4d86-461c-9fd5-18566d82128b-config-volume\") pod \"coredns-76f75df574-vdhhb\" (UID: \"6d1e75e7-4d86-461c-9fd5-18566d82128b\") " pod="kube-system/coredns-76f75df574-vdhhb" Nov 12 22:40:54.862435 kubelet[2828]: E1112 22:40:54.862388 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:54.862645 kubelet[2828]: E1112 22:40:54.862455 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:54.863071 containerd[1613]: time="2024-11-12T22:40:54.863028187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vdhhb,Uid:6d1e75e7-4d86-461c-9fd5-18566d82128b,Namespace:kube-system,Attempt:0,}" Nov 12 22:40:54.863403 containerd[1613]: time="2024-11-12T22:40:54.863030542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8njfx,Uid:d60e7dc2-9251-4795-8dd2-5c0f8a44291f,Namespace:kube-system,Attempt:0,}" Nov 12 22:40:55.314492 kubelet[2828]: E1112 22:40:55.314214 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:55.326922 kubelet[2828]: I1112 22:40:55.326877 2828 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-ckcpv" podStartSLOduration=6.171861713 podStartE2EDuration="15.326840668s" podCreationTimestamp="2024-11-12 22:40:40 +0000 UTC" firstStartedPulling="2024-11-12 22:40:40.981447875 +0000 UTC m=+14.844247335" lastFinishedPulling="2024-11-12 22:40:50.13642683 +0000 UTC m=+23.999226290" observedRunningTime="2024-11-12 22:40:55.326706766 +0000 UTC m=+29.189506226" watchObservedRunningTime="2024-11-12 22:40:55.326840668 +0000 UTC m=+29.189640118" Nov 12 22:40:56.315200 kubelet[2828]: E1112 22:40:56.315152 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:56.557479 systemd-networkd[1247]: cilium_host: Link UP Nov 12 22:40:56.557654 systemd-networkd[1247]: cilium_net: Link UP Nov 12 22:40:56.557658 systemd-networkd[1247]: cilium_net: Gained carrier Nov 12 22:40:56.557847 systemd-networkd[1247]: cilium_host: Gained carrier Nov 12 22:40:56.558601 systemd-networkd[1247]: cilium_host: Gained IPv6LL Nov 12 22:40:56.659887 systemd-networkd[1247]: cilium_vxlan: Link UP Nov 12 22:40:56.659898 systemd-networkd[1247]: cilium_vxlan: Gained carrier Nov 12 22:40:56.882935 kernel: NET: Registered PF_ALG protocol family Nov 12 22:40:57.317075 kubelet[2828]: E1112 22:40:57.316972 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:57.351152 systemd-networkd[1247]: cilium_net: Gained IPv6LL Nov 12 22:40:57.609550 systemd-networkd[1247]: lxc_health: Link UP Nov 12 22:40:57.618699 systemd-networkd[1247]: lxc_health: Gained carrier Nov 12 22:40:57.914677 systemd-networkd[1247]: lxc5b42d0438dcd: Link UP Nov 12 22:40:57.921947 kernel: 
eth0: renamed from tmp77c4a Nov 12 22:40:57.926303 systemd-networkd[1247]: lxc5b42d0438dcd: Gained carrier Nov 12 22:40:57.929700 systemd-networkd[1247]: lxccba1c05e1586: Link UP Nov 12 22:40:57.937705 kernel: eth0: renamed from tmp3032a Nov 12 22:40:57.944622 systemd-networkd[1247]: lxccba1c05e1586: Gained carrier Nov 12 22:40:58.311115 systemd-networkd[1247]: cilium_vxlan: Gained IPv6LL Nov 12 22:40:58.904346 kubelet[2828]: E1112 22:40:58.904262 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:59.207160 systemd-networkd[1247]: lxc_health: Gained IPv6LL Nov 12 22:40:59.271124 systemd-networkd[1247]: lxccba1c05e1586: Gained IPv6LL Nov 12 22:40:59.385813 kubelet[2828]: I1112 22:40:59.385767 2828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 22:40:59.386991 kubelet[2828]: E1112 22:40:59.386664 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:40:59.457541 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:43018.service - OpenSSH per-connection server daemon (10.0.0.1:43018). Nov 12 22:40:59.501237 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 43018 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:40:59.502983 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:40:59.507347 systemd-logind[1591]: New session 9 of user core. Nov 12 22:40:59.518219 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 22:40:59.592065 systemd-networkd[1247]: lxc5b42d0438dcd: Gained IPv6LL Nov 12 22:40:59.702717 sshd[4063]: Connection closed by 10.0.0.1 port 43018 Nov 12 22:40:59.703128 sshd-session[4060]: pam_unix(sshd:session): session closed for user core Nov 12 22:40:59.707426 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:43018.service: Deactivated successfully. Nov 12 22:40:59.710649 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 22:40:59.710975 systemd-logind[1591]: Session 9 logged out. Waiting for processes to exit. Nov 12 22:40:59.713265 systemd-logind[1591]: Removed session 9. Nov 12 22:41:00.321924 kubelet[2828]: E1112 22:41:00.321877 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:01.781878 containerd[1613]: time="2024-11-12T22:41:01.781792938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:41:01.781878 containerd[1613]: time="2024-11-12T22:41:01.781851448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:41:01.781878 containerd[1613]: time="2024-11-12T22:41:01.781862218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:01.782690 containerd[1613]: time="2024-11-12T22:41:01.781986071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:01.783609 containerd[1613]: time="2024-11-12T22:41:01.783069988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:41:01.783730 containerd[1613]: time="2024-11-12T22:41:01.783423983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:41:01.783730 containerd[1613]: time="2024-11-12T22:41:01.783448359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:01.783730 containerd[1613]: time="2024-11-12T22:41:01.783567994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:41:01.814013 systemd-resolved[1464]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:41:01.814970 systemd-resolved[1464]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 22:41:01.846831 containerd[1613]: time="2024-11-12T22:41:01.846784381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8njfx,Uid:d60e7dc2-9251-4795-8dd2-5c0f8a44291f,Namespace:kube-system,Attempt:0,} returns sandbox id \"77c4aa9d51898b4ee9caeb74e022ea763ca85f17bdfa5667f9716ba7f577f597\"" Nov 12 22:41:01.848217 kubelet[2828]: E1112 22:41:01.848132 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:01.851424 containerd[1613]: time="2024-11-12T22:41:01.851379117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vdhhb,Uid:6d1e75e7-4d86-461c-9fd5-18566d82128b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3032a8713c99eb3b59c164605d983e9d0d561057b2d90e37c6913055f16a1fbd\"" Nov 12 22:41:01.852191 kubelet[2828]: E1112 22:41:01.852154 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:01.853069 containerd[1613]: time="2024-11-12T22:41:01.853038666Z" level=info msg="CreateContainer within sandbox \"77c4aa9d51898b4ee9caeb74e022ea763ca85f17bdfa5667f9716ba7f577f597\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:41:01.855228 containerd[1613]: time="2024-11-12T22:41:01.855199719Z" level=info msg="CreateContainer within sandbox \"3032a8713c99eb3b59c164605d983e9d0d561057b2d90e37c6913055f16a1fbd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 22:41:01.885584 containerd[1613]: time="2024-11-12T22:41:01.885542399Z" level=info msg="CreateContainer within sandbox \"77c4aa9d51898b4ee9caeb74e022ea763ca85f17bdfa5667f9716ba7f577f597\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3cdbbe21700a7b26c8b8e6900691261d05ad6f1331c1a2dc0bb24bda4edac8d0\"" Nov 12 22:41:01.886098 containerd[1613]: time="2024-11-12T22:41:01.886063678Z" level=info msg="StartContainer for \"3cdbbe21700a7b26c8b8e6900691261d05ad6f1331c1a2dc0bb24bda4edac8d0\"" Nov 12 22:41:01.887245 containerd[1613]: time="2024-11-12T22:41:01.887218929Z" level=info msg="CreateContainer within sandbox \"3032a8713c99eb3b59c164605d983e9d0d561057b2d90e37c6913055f16a1fbd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"529d9a570d3daef1ae36ea1a825e88d82c20a6d2922297ef6ea144e1d06ebc6e\"" Nov 12 22:41:01.887648 containerd[1613]: time="2024-11-12T22:41:01.887625855Z" level=info 
msg="StartContainer for \"529d9a570d3daef1ae36ea1a825e88d82c20a6d2922297ef6ea144e1d06ebc6e\"" Nov 12 22:41:01.957344 containerd[1613]: time="2024-11-12T22:41:01.957286884Z" level=info msg="StartContainer for \"529d9a570d3daef1ae36ea1a825e88d82c20a6d2922297ef6ea144e1d06ebc6e\" returns successfully" Nov 12 22:41:01.957486 containerd[1613]: time="2024-11-12T22:41:01.957286914Z" level=info msg="StartContainer for \"3cdbbe21700a7b26c8b8e6900691261d05ad6f1331c1a2dc0bb24bda4edac8d0\" returns successfully" Nov 12 22:41:02.327102 kubelet[2828]: E1112 22:41:02.327051 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:02.329705 kubelet[2828]: E1112 22:41:02.329677 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:02.342979 kubelet[2828]: I1112 22:41:02.341246 2828 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vdhhb" podStartSLOduration=22.341202425 podStartE2EDuration="22.341202425s" podCreationTimestamp="2024-11-12 22:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:41:02.338610674 +0000 UTC m=+36.201410144" watchObservedRunningTime="2024-11-12 22:41:02.341202425 +0000 UTC m=+36.204001906" Nov 12 22:41:02.351247 kubelet[2828]: I1112 22:41:02.350927 2828 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-8njfx" podStartSLOduration=22.350851749 podStartE2EDuration="22.350851749s" podCreationTimestamp="2024-11-12 22:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:41:02.349856669 +0000 UTC m=+36.212656159" watchObservedRunningTime="2024-11-12 22:41:02.350851749 +0000 UTC m=+36.213651219" Nov 12 22:41:02.789088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923311106.mount: Deactivated successfully. Nov 12 22:41:03.332139 kubelet[2828]: E1112 22:41:03.332098 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:03.332640 kubelet[2828]: E1112 22:41:03.332232 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:04.333871 kubelet[2828]: E1112 22:41:04.333822 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:04.333871 kubelet[2828]: E1112 22:41:04.333867 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:04.712136 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:43024.service - OpenSSH per-connection server daemon (10.0.0.1:43024). 
Nov 12 22:41:04.789799 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 43024 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:04.791463 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:04.795297 systemd-logind[1591]: New session 10 of user core. Nov 12 22:41:04.805164 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 22:41:05.002753 sshd[4257]: Connection closed by 10.0.0.1 port 43024 Nov 12 22:41:05.003094 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:05.007158 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:43024.service: Deactivated successfully. Nov 12 22:41:05.009774 systemd-logind[1591]: Session 10 logged out. Waiting for processes to exit. Nov 12 22:41:05.009927 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 22:41:05.011136 systemd-logind[1591]: Removed session 10. Nov 12 22:41:10.017350 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:39286.service - OpenSSH per-connection server daemon (10.0.0.1:39286). Nov 12 22:41:10.056798 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 39286 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:10.058993 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:10.063879 systemd-logind[1591]: New session 11 of user core. Nov 12 22:41:10.075273 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 22:41:10.234417 sshd[4273]: Connection closed by 10.0.0.1 port 39286 Nov 12 22:41:10.234841 sshd-session[4270]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:10.241242 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:39286.service: Deactivated successfully. Nov 12 22:41:10.246645 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 22:41:10.248151 systemd-logind[1591]: Session 11 logged out. Waiting for processes to exit. Nov 12 22:41:10.249422 systemd-logind[1591]: Removed session 11. Nov 12 22:41:15.249163 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:39302.service - OpenSSH per-connection server daemon (10.0.0.1:39302). Nov 12 22:41:15.283815 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 39302 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:15.285408 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:15.289826 systemd-logind[1591]: New session 12 of user core. Nov 12 22:41:15.300447 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 22:41:15.489564 sshd[4291]: Connection closed by 10.0.0.1 port 39302 Nov 12 22:41:15.487771 sshd-session[4288]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:15.494296 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:39302.service: Deactivated successfully. Nov 12 22:41:15.500133 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 22:41:15.510190 systemd-logind[1591]: Session 12 logged out. Waiting for processes to exit. Nov 12 22:41:15.518461 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:39310.service - OpenSSH per-connection server daemon (10.0.0.1:39310). Nov 12 22:41:15.521463 systemd-logind[1591]: Removed session 12. 
Nov 12 22:41:15.579752 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 39310 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:15.582723 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:15.593116 systemd-logind[1591]: New session 13 of user core. Nov 12 22:41:15.607573 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 22:41:15.982945 sshd[4307]: Connection closed by 10.0.0.1 port 39310 Nov 12 22:41:15.984991 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:16.017441 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:39312.service - OpenSSH per-connection server daemon (10.0.0.1:39312). Nov 12 22:41:16.018242 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:39310.service: Deactivated successfully. Nov 12 22:41:16.023377 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 22:41:16.031382 systemd-logind[1591]: Session 13 logged out. Waiting for processes to exit. Nov 12 22:41:16.033293 systemd-logind[1591]: Removed session 13. Nov 12 22:41:16.098528 sshd[4315]: Accepted publickey for core from 10.0.0.1 port 39312 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:16.105658 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:16.123702 systemd-logind[1591]: New session 14 of user core. Nov 12 22:41:16.135610 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 22:41:16.347471 sshd[4320]: Connection closed by 10.0.0.1 port 39312 Nov 12 22:41:16.349284 sshd-session[4315]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:16.354637 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:39312.service: Deactivated successfully. Nov 12 22:41:16.359283 systemd-logind[1591]: Session 14 logged out. Waiting for processes to exit. Nov 12 22:41:16.360157 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 22:41:16.361381 systemd-logind[1591]: Removed session 14. Nov 12 22:41:21.368443 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:42650.service - OpenSSH per-connection server daemon (10.0.0.1:42650). Nov 12 22:41:21.467530 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 42650 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:21.470351 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:21.488227 systemd-logind[1591]: New session 15 of user core. Nov 12 22:41:21.497709 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 22:41:21.709643 sshd[4336]: Connection closed by 10.0.0.1 port 42650 Nov 12 22:41:21.708582 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:21.732433 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:42650.service: Deactivated successfully. Nov 12 22:41:21.755266 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 22:41:21.756263 systemd-logind[1591]: Session 15 logged out. Waiting for processes to exit. Nov 12 22:41:21.758437 systemd-logind[1591]: Removed session 15. Nov 12 22:41:26.727441 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:42658.service - OpenSSH per-connection server daemon (10.0.0.1:42658). 
Nov 12 22:41:26.794046 sshd[4350]: Accepted publickey for core from 10.0.0.1 port 42658 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:26.803051 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:26.842870 systemd-logind[1591]: New session 16 of user core. Nov 12 22:41:26.849190 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 22:41:27.076166 sshd[4353]: Connection closed by 10.0.0.1 port 42658 Nov 12 22:41:27.077024 sshd-session[4350]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:27.086022 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:42658.service: Deactivated successfully. Nov 12 22:41:27.093238 systemd-logind[1591]: Session 16 logged out. Waiting for processes to exit. Nov 12 22:41:27.093244 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 22:41:27.094777 systemd-logind[1591]: Removed session 16. Nov 12 22:41:32.097313 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:39304.service - OpenSSH per-connection server daemon (10.0.0.1:39304). Nov 12 22:41:32.171848 sshd[4365]: Accepted publickey for core from 10.0.0.1 port 39304 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:32.181007 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:32.215575 systemd-logind[1591]: New session 17 of user core. Nov 12 22:41:32.231850 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 22:41:32.506738 sshd[4368]: Connection closed by 10.0.0.1 port 39304 Nov 12 22:41:32.507518 sshd-session[4365]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:32.515714 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:39304.service: Deactivated successfully. Nov 12 22:41:32.523746 systemd-logind[1591]: Session 17 logged out. Waiting for processes to exit. Nov 12 22:41:32.523962 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 22:41:32.526143 systemd-logind[1591]: Removed session 17. Nov 12 22:41:37.544783 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:39306.service - OpenSSH per-connection server daemon (10.0.0.1:39306). Nov 12 22:41:37.630858 sshd[4380]: Accepted publickey for core from 10.0.0.1 port 39306 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:37.632527 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:37.645884 systemd-logind[1591]: New session 18 of user core. Nov 12 22:41:37.656878 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 22:41:37.934976 sshd[4383]: Connection closed by 10.0.0.1 port 39306 Nov 12 22:41:37.932286 sshd-session[4380]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:37.949669 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:39322.service - OpenSSH per-connection server daemon (10.0.0.1:39322). Nov 12 22:41:37.950538 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:39306.service: Deactivated successfully. Nov 12 22:41:37.958586 systemd-logind[1591]: Session 18 logged out. Waiting for processes to exit. Nov 12 22:41:37.959863 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 22:41:37.962139 systemd-logind[1591]: Removed session 18. 
Nov 12 22:41:38.033632 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 39322 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:38.035189 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:38.055037 systemd-logind[1591]: New session 19 of user core. Nov 12 22:41:38.063586 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 22:41:38.660759 sshd[4398]: Connection closed by 10.0.0.1 port 39322 Nov 12 22:41:38.659218 sshd-session[4392]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:38.667561 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:39322.service: Deactivated successfully. Nov 12 22:41:38.673598 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 22:41:38.681939 systemd-logind[1591]: Session 19 logged out. Waiting for processes to exit. Nov 12 22:41:38.701440 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:39324.service - OpenSSH per-connection server daemon (10.0.0.1:39324). Nov 12 22:41:38.704426 systemd-logind[1591]: Removed session 19. Nov 12 22:41:38.769529 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 39324 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:38.771826 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:38.782169 systemd-logind[1591]: New session 20 of user core. Nov 12 22:41:38.790413 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 22:41:41.429304 sshd[4412]: Connection closed by 10.0.0.1 port 39324 Nov 12 22:41:41.430538 sshd-session[4409]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:41.451637 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:43660.service - OpenSSH per-connection server daemon (10.0.0.1:43660). Nov 12 22:41:41.452438 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:39324.service: Deactivated successfully. Nov 12 22:41:41.458272 systemd-logind[1591]: Session 20 logged out. Waiting for processes to exit. Nov 12 22:41:41.462063 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 22:41:41.468209 systemd-logind[1591]: Removed session 20. Nov 12 22:41:41.537400 sshd[4436]: Accepted publickey for core from 10.0.0.1 port 43660 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:41.545524 sshd-session[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:41.558883 systemd-logind[1591]: New session 21 of user core. Nov 12 22:41:41.564606 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 22:41:42.181575 sshd[4442]: Connection closed by 10.0.0.1 port 43660 Nov 12 22:41:42.183929 sshd-session[4436]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:42.202568 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:43670.service - OpenSSH per-connection server daemon (10.0.0.1:43670). Nov 12 22:41:42.204002 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:43660.service: Deactivated successfully. Nov 12 22:41:42.208821 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 22:41:42.215262 systemd-logind[1591]: Session 21 logged out. Waiting for processes to exit. Nov 12 22:41:42.218051 systemd-logind[1591]: Removed session 21. 
Nov 12 22:41:42.242231 kubelet[2828]: E1112 22:41:42.242156 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:42.267405 sshd[4449]: Accepted publickey for core from 10.0.0.1 port 43670 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:42.269821 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:42.283811 systemd-logind[1591]: New session 22 of user core. Nov 12 22:41:42.295581 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 22:41:42.527388 sshd[4455]: Connection closed by 10.0.0.1 port 43670 Nov 12 22:41:42.528404 sshd-session[4449]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:42.541347 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:43670.service: Deactivated successfully. Nov 12 22:41:42.551404 systemd-logind[1591]: Session 22 logged out. Waiting for processes to exit. Nov 12 22:41:42.562318 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 22:41:42.568863 systemd-logind[1591]: Removed session 22. Nov 12 22:41:47.242154 kubelet[2828]: E1112 22:41:47.242061 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:47.539501 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:43682.service - OpenSSH per-connection server daemon (10.0.0.1:43682). Nov 12 22:41:47.581445 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 43682 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:47.583795 sshd-session[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:47.589792 systemd-logind[1591]: New session 23 of user core. Nov 12 22:41:47.597530 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 12 22:41:47.722514 sshd[4470]: Connection closed by 10.0.0.1 port 43682 Nov 12 22:41:47.723004 sshd-session[4467]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:47.728673 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:43682.service: Deactivated successfully. Nov 12 22:41:47.732549 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 22:41:47.733394 systemd-logind[1591]: Session 23 logged out. Waiting for processes to exit. Nov 12 22:41:47.734861 systemd-logind[1591]: Removed session 23. Nov 12 22:41:50.240657 kubelet[2828]: E1112 22:41:50.240570 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:41:52.748554 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:38078.service - OpenSSH per-connection server daemon (10.0.0.1:38078). Nov 12 22:41:52.806749 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 38078 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:52.810173 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:52.826308 systemd-logind[1591]: New session 24 of user core. Nov 12 22:41:52.837647 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 12 22:41:53.076102 sshd[4489]: Connection closed by 10.0.0.1 port 38078 Nov 12 22:41:53.076546 sshd-session[4486]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:53.084494 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:38078.service: Deactivated successfully. Nov 12 22:41:53.094503 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 22:41:53.096784 systemd-logind[1591]: Session 24 logged out. Waiting for processes to exit. Nov 12 22:41:53.099139 systemd-logind[1591]: Removed session 24. Nov 12 22:41:58.087225 systemd[1]: Started sshd@24-10.0.0.16:22-10.0.0.1:38092.service - OpenSSH per-connection server daemon (10.0.0.1:38092). Nov 12 22:41:58.129316 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 38092 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:41:58.131018 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:41:58.136157 systemd-logind[1591]: New session 25 of user core. Nov 12 22:41:58.150187 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 12 22:41:58.332695 sshd[4504]: Connection closed by 10.0.0.1 port 38092 Nov 12 22:41:58.333114 sshd-session[4501]: pam_unix(sshd:session): session closed for user core Nov 12 22:41:58.337761 systemd[1]: sshd@24-10.0.0.16:22-10.0.0.1:38092.service: Deactivated successfully. Nov 12 22:41:58.340739 systemd-logind[1591]: Session 25 logged out. Waiting for processes to exit. Nov 12 22:41:58.340803 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 22:41:58.341934 systemd-logind[1591]: Removed session 25. Nov 12 22:42:00.240136 kubelet[2828]: E1112 22:42:00.240077 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:03.343143 systemd[1]: Started sshd@25-10.0.0.16:22-10.0.0.1:39030.service - OpenSSH per-connection server daemon (10.0.0.1:39030). Nov 12 22:42:03.382696 sshd[4517]: Accepted publickey for core from 10.0.0.1 port 39030 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:42:03.384390 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:03.388848 systemd-logind[1591]: New session 26 of user core. Nov 12 22:42:03.397259 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 12 22:42:03.508846 sshd[4520]: Connection closed by 10.0.0.1 port 39030 Nov 12 22:42:03.509263 sshd-session[4517]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:03.513305 systemd[1]: sshd@25-10.0.0.16:22-10.0.0.1:39030.service: Deactivated successfully. Nov 12 22:42:03.516041 systemd-logind[1591]: Session 26 logged out. Waiting for processes to exit. Nov 12 22:42:03.516294 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 22:42:03.517757 systemd-logind[1591]: Removed session 26. Nov 12 22:42:06.241087 kubelet[2828]: E1112 22:42:06.240588 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:08.523321 systemd[1]: Started sshd@26-10.0.0.16:22-10.0.0.1:39046.service - OpenSSH per-connection server daemon (10.0.0.1:39046). 
Nov 12 22:42:08.572040 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 39046 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:42:08.574387 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:08.582377 systemd-logind[1591]: New session 27 of user core. Nov 12 22:42:08.590411 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 12 22:42:08.722207 sshd[4535]: Connection closed by 10.0.0.1 port 39046 Nov 12 22:42:08.722991 sshd-session[4532]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:08.735523 systemd[1]: Started sshd@27-10.0.0.16:22-10.0.0.1:50032.service - OpenSSH per-connection server daemon (10.0.0.1:50032). Nov 12 22:42:08.736371 systemd[1]: sshd@26-10.0.0.16:22-10.0.0.1:39046.service: Deactivated successfully. Nov 12 22:42:08.744139 systemd[1]: session-27.scope: Deactivated successfully. Nov 12 22:42:08.745900 systemd-logind[1591]: Session 27 logged out. Waiting for processes to exit. Nov 12 22:42:08.748413 systemd-logind[1591]: Removed session 27. Nov 12 22:42:08.782790 sshd[4544]: Accepted publickey for core from 10.0.0.1 port 50032 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:42:08.785168 sshd-session[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:08.791634 systemd-logind[1591]: New session 28 of user core. Nov 12 22:42:08.801540 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 12 22:42:09.240615 kubelet[2828]: E1112 22:42:09.240572 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:10.417988 containerd[1613]: time="2024-11-12T22:42:10.417713579Z" level=info msg="StopContainer for \"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\" with timeout 30 (s)" Nov 12 22:42:10.418848 containerd[1613]: time="2024-11-12T22:42:10.418561619Z" level=info msg="Stop container \"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\" with signal terminated" Nov 12 22:42:10.518823 containerd[1613]: time="2024-11-12T22:42:10.518744675Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 22:42:10.525928 containerd[1613]: time="2024-11-12T22:42:10.525856260Z" level=info msg="StopContainer for \"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\" with timeout 2 (s)" Nov 12 22:42:10.526351 containerd[1613]: time="2024-11-12T22:42:10.526305248Z" level=info msg="Stop container \"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\" with signal terminated" Nov 12 22:42:10.539404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada-rootfs.mount: Deactivated successfully. Nov 12 22:42:10.551114 systemd-networkd[1247]: lxc_health: Link DOWN Nov 12 22:42:10.551124 systemd-networkd[1247]: lxc_health: Lost carrier Nov 12 22:42:10.642668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc-rootfs.mount: Deactivated successfully. 
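The StopContainer entries above show the runtime sending SIGTERM and then waiting out a grace period (30 s for the operator container, 2 s for the cilium-agent container) before it would escalate. Below is a rough sketch of that stop sequence using the containerd Go client, assuming the k8s.io namespace and the default socket path; it illustrates the pattern and is not the CRI plugin's implementation.

package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the k8s.io namespace,
	// matching the namespace=k8s.io fields in the log above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Container ID taken from the StopContainer entry above.
	const id = "c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada"

	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exited, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// "Stop container ... with signal terminated": try SIGTERM first.
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}

	select {
	case <-exited:
		// Exited within the grace period.
	case <-time.After(30 * time.Second): // "with timeout 30 (s)"
		// Grace period expired; escalate to SIGKILL and wait for the exit event.
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exited
	}
}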
Nov 12 22:42:10.661807 containerd[1613]: time="2024-11-12T22:42:10.661690879Z" level=info msg="shim disconnected" id=45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc namespace=k8s.io Nov 12 22:42:10.661807 containerd[1613]: time="2024-11-12T22:42:10.661786429Z" level=warning msg="cleaning up after shim disconnected" id=45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc namespace=k8s.io Nov 12 22:42:10.661807 containerd[1613]: time="2024-11-12T22:42:10.661799043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:42:10.669687 containerd[1613]: time="2024-11-12T22:42:10.669380876Z" level=info msg="shim disconnected" id=c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada namespace=k8s.io Nov 12 22:42:10.669687 containerd[1613]: time="2024-11-12T22:42:10.669495502Z" level=warning msg="cleaning up after shim disconnected" id=c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada namespace=k8s.io Nov 12 22:42:10.669687 containerd[1613]: time="2024-11-12T22:42:10.669508707Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:42:10.706277 containerd[1613]: time="2024-11-12T22:42:10.706202378Z" level=info msg="StopContainer for \"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\" returns successfully" Nov 12 22:42:10.712540 containerd[1613]: time="2024-11-12T22:42:10.712329344Z" level=info msg="StopPodSandbox for \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\"" Nov 12 22:42:10.723513 containerd[1613]: time="2024-11-12T22:42:10.720537198Z" level=info msg="StopContainer for \"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\" returns successfully" Nov 12 22:42:10.723513 containerd[1613]: time="2024-11-12T22:42:10.722624159Z" level=info msg="StopPodSandbox for \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\"" Nov 12 22:42:10.735799 containerd[1613]: time="2024-11-12T22:42:10.722715551Z" level=info msg="Container to stop \"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:42:10.738288 containerd[1613]: time="2024-11-12T22:42:10.712424263Z" level=info msg="Container to stop \"e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:42:10.738288 containerd[1613]: time="2024-11-12T22:42:10.737887081Z" level=info msg="Container to stop \"4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:42:10.738288 containerd[1613]: time="2024-11-12T22:42:10.737927598Z" level=info msg="Container to stop \"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:42:10.738288 containerd[1613]: time="2024-11-12T22:42:10.737942236Z" level=info msg="Container to stop \"1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:42:10.738288 containerd[1613]: time="2024-11-12T22:42:10.737955170Z" level=info msg="Container to stop \"0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 22:42:10.741048 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f-shm.mount: Deactivated successfully. Nov 12 22:42:10.741309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288-shm.mount: Deactivated successfully. Nov 12 22:42:10.806283 containerd[1613]: time="2024-11-12T22:42:10.806144719Z" level=info msg="shim disconnected" id=e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f namespace=k8s.io Nov 12 22:42:10.806283 containerd[1613]: time="2024-11-12T22:42:10.806231001Z" level=warning msg="cleaning up after shim disconnected" id=e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f namespace=k8s.io Nov 12 22:42:10.806283 containerd[1613]: time="2024-11-12T22:42:10.806243335Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:42:10.812094 containerd[1613]: time="2024-11-12T22:42:10.811771751Z" level=info msg="shim disconnected" id=1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288 namespace=k8s.io Nov 12 22:42:10.812094 containerd[1613]: time="2024-11-12T22:42:10.811869876Z" level=warning msg="cleaning up after shim disconnected" id=1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288 namespace=k8s.io Nov 12 22:42:10.812094 containerd[1613]: time="2024-11-12T22:42:10.811882009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:42:10.833687 containerd[1613]: time="2024-11-12T22:42:10.833552780Z" level=info msg="TearDown network for sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" successfully" Nov 12 22:42:10.833687 containerd[1613]: time="2024-11-12T22:42:10.833629244Z" level=info msg="StopPodSandbox for \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" returns successfully" Nov 12 22:42:10.838253 containerd[1613]: time="2024-11-12T22:42:10.838057473Z" level=warning msg="cleanup warnings time=\"2024-11-12T22:42:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 12 22:42:10.840108 containerd[1613]: time="2024-11-12T22:42:10.839900453Z" level=info msg="TearDown network for sandbox \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\" successfully" Nov 12 22:42:10.840108 containerd[1613]: time="2024-11-12T22:42:10.839981837Z" level=info msg="StopPodSandbox for \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\" returns successfully" Nov 12 22:42:10.959480 kubelet[2828]: I1112 22:42:10.959117 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swtls\" (UniqueName: \"kubernetes.io/projected/ceb3fbe8-0f18-4231-9558-82b9c527fcbf-kube-api-access-swtls\") pod \"ceb3fbe8-0f18-4231-9558-82b9c527fcbf\" (UID: \"ceb3fbe8-0f18-4231-9558-82b9c527fcbf\") " Nov 12 22:42:10.959480 kubelet[2828]: I1112 22:42:10.959205 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa30da42-2631-483e-9b6a-287561cfd681-cilium-config-path\") pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.959480 kubelet[2828]: I1112 22:42:10.959239 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-hostproc\") 
pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.959480 kubelet[2828]: I1112 22:42:10.959264 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-xtables-lock\") pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.959480 kubelet[2828]: I1112 22:42:10.959296 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-etc-cni-netd\") pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.959480 kubelet[2828]: I1112 22:42:10.959322 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-host-proc-sys-kernel\") pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.960321 kubelet[2828]: I1112 22:42:10.959345 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-cilium-cgroup\") pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.960321 kubelet[2828]: I1112 22:42:10.959367 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-cilium-run\") pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.960321 kubelet[2828]: I1112 22:42:10.959394 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-cni-path\") pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.960321 kubelet[2828]: I1112 22:42:10.959430 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-lib-modules\") pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.960321 kubelet[2828]: I1112 22:42:10.959477 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa30da42-2631-483e-9b6a-287561cfd681-hubble-tls\") pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.960321 kubelet[2828]: I1112 22:42:10.959510 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-host-proc-sys-net\") pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.960544 kubelet[2828]: I1112 22:42:10.959544 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ceb3fbe8-0f18-4231-9558-82b9c527fcbf-cilium-config-path\") pod \"ceb3fbe8-0f18-4231-9558-82b9c527fcbf\" (UID: 
\"ceb3fbe8-0f18-4231-9558-82b9c527fcbf\") " Nov 12 22:42:10.960544 kubelet[2828]: I1112 22:42:10.959578 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa30da42-2631-483e-9b6a-287561cfd681-clustermesh-secrets\") pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.960544 kubelet[2828]: I1112 22:42:10.959603 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-bpf-maps\") pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.960544 kubelet[2828]: I1112 22:42:10.959634 2828 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzbtf\" (UniqueName: \"kubernetes.io/projected/fa30da42-2631-483e-9b6a-287561cfd681-kube-api-access-bzbtf\") pod \"fa30da42-2631-483e-9b6a-287561cfd681\" (UID: \"fa30da42-2631-483e-9b6a-287561cfd681\") " Nov 12 22:42:10.960544 kubelet[2828]: I1112 22:42:10.959986 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:42:10.960837 kubelet[2828]: I1112 22:42:10.960734 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-cni-path" (OuterVolumeSpecName: "cni-path") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:42:10.960837 kubelet[2828]: I1112 22:42:10.960838 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:42:10.960979 kubelet[2828]: I1112 22:42:10.960896 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:42:10.960979 kubelet[2828]: I1112 22:42:10.960956 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-hostproc" (OuterVolumeSpecName: "hostproc") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:42:10.961058 kubelet[2828]: I1112 22:42:10.960979 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:42:10.961058 kubelet[2828]: I1112 22:42:10.961004 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:42:10.961058 kubelet[2828]: I1112 22:42:10.961029 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:42:10.965431 kubelet[2828]: I1112 22:42:10.965170 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:42:10.965431 kubelet[2828]: I1112 22:42:10.965336 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa30da42-2631-483e-9b6a-287561cfd681-kube-api-access-bzbtf" (OuterVolumeSpecName: "kube-api-access-bzbtf") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "kube-api-access-bzbtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:42:10.965431 kubelet[2828]: I1112 22:42:10.965397 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 22:42:10.966320 kubelet[2828]: I1112 22:42:10.966286 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb3fbe8-0f18-4231-9558-82b9c527fcbf-kube-api-access-swtls" (OuterVolumeSpecName: "kube-api-access-swtls") pod "ceb3fbe8-0f18-4231-9558-82b9c527fcbf" (UID: "ceb3fbe8-0f18-4231-9558-82b9c527fcbf"). InnerVolumeSpecName "kube-api-access-swtls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:42:10.966964 kubelet[2828]: I1112 22:42:10.966876 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa30da42-2631-483e-9b6a-287561cfd681-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 22:42:10.968003 kubelet[2828]: I1112 22:42:10.967973 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa30da42-2631-483e-9b6a-287561cfd681-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:42:10.968100 kubelet[2828]: I1112 22:42:10.968065 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceb3fbe8-0f18-4231-9558-82b9c527fcbf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ceb3fbe8-0f18-4231-9558-82b9c527fcbf" (UID: "ceb3fbe8-0f18-4231-9558-82b9c527fcbf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 22:42:10.968827 kubelet[2828]: I1112 22:42:10.968770 2828 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa30da42-2631-483e-9b6a-287561cfd681-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fa30da42-2631-483e-9b6a-287561cfd681" (UID: "fa30da42-2631-483e-9b6a-287561cfd681"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 22:42:11.060261 kubelet[2828]: I1112 22:42:11.060178 2828 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060261 kubelet[2828]: I1112 22:42:11.060234 2828 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060261 kubelet[2828]: I1112 22:42:11.060248 2828 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060261 kubelet[2828]: I1112 22:42:11.060263 2828 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060261 kubelet[2828]: I1112 22:42:11.060275 2828 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa30da42-2631-483e-9b6a-287561cfd681-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060566 kubelet[2828]: I1112 22:42:11.060289 2828 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060566 kubelet[2828]: I1112 22:42:11.060303 2828 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ceb3fbe8-0f18-4231-9558-82b9c527fcbf-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060566 kubelet[2828]: I1112 22:42:11.060315 2828 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa30da42-2631-483e-9b6a-287561cfd681-clustermesh-secrets\") on node \"localhost\" DevicePath 
\"\"" Nov 12 22:42:11.060566 kubelet[2828]: I1112 22:42:11.060326 2828 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060566 kubelet[2828]: I1112 22:42:11.060338 2828 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bzbtf\" (UniqueName: \"kubernetes.io/projected/fa30da42-2631-483e-9b6a-287561cfd681-kube-api-access-bzbtf\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060566 kubelet[2828]: I1112 22:42:11.060352 2828 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060566 kubelet[2828]: I1112 22:42:11.060363 2828 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-swtls\" (UniqueName: \"kubernetes.io/projected/ceb3fbe8-0f18-4231-9558-82b9c527fcbf-kube-api-access-swtls\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060566 kubelet[2828]: I1112 22:42:11.060375 2828 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa30da42-2631-483e-9b6a-287561cfd681-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060780 kubelet[2828]: I1112 22:42:11.060386 2828 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060780 kubelet[2828]: I1112 22:42:11.060397 2828 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.060780 kubelet[2828]: I1112 22:42:11.060409 2828 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa30da42-2631-483e-9b6a-287561cfd681-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 12 22:42:11.344125 kubelet[2828]: E1112 22:42:11.344080 2828 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 22:42:11.461430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f-rootfs.mount: Deactivated successfully. Nov 12 22:42:11.461665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288-rootfs.mount: Deactivated successfully. Nov 12 22:42:11.461867 systemd[1]: var-lib-kubelet-pods-fa30da42\x2d2631\x2d483e\x2d9b6a\x2d287561cfd681-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbzbtf.mount: Deactivated successfully. Nov 12 22:42:11.462063 systemd[1]: var-lib-kubelet-pods-fa30da42\x2d2631\x2d483e\x2d9b6a\x2d287561cfd681-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 12 22:42:11.462223 systemd[1]: var-lib-kubelet-pods-fa30da42\x2d2631\x2d483e\x2d9b6a\x2d287561cfd681-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Nov 12 22:42:11.462413 systemd[1]: var-lib-kubelet-pods-ceb3fbe8\x2d0f18\x2d4231\x2d9558\x2d82b9c527fcbf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dswtls.mount: Deactivated successfully. Nov 12 22:42:11.685245 kubelet[2828]: I1112 22:42:11.684527 2828 scope.go:117] "RemoveContainer" containerID="45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc" Nov 12 22:42:11.693660 containerd[1613]: time="2024-11-12T22:42:11.692793873Z" level=info msg="RemoveContainer for \"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\"" Nov 12 22:42:11.697407 containerd[1613]: time="2024-11-12T22:42:11.697363468Z" level=info msg="RemoveContainer for \"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\" returns successfully" Nov 12 22:42:11.697731 kubelet[2828]: I1112 22:42:11.697694 2828 scope.go:117] "RemoveContainer" containerID="0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f" Nov 12 22:42:11.698825 containerd[1613]: time="2024-11-12T22:42:11.698799890Z" level=info msg="RemoveContainer for \"0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f\"" Nov 12 22:42:11.702582 containerd[1613]: time="2024-11-12T22:42:11.702503550Z" level=info msg="RemoveContainer for \"0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f\" returns successfully" Nov 12 22:42:11.702719 kubelet[2828]: I1112 22:42:11.702676 2828 scope.go:117] "RemoveContainer" containerID="4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1" Nov 12 22:42:11.703677 containerd[1613]: time="2024-11-12T22:42:11.703645176Z" level=info msg="RemoveContainer for \"4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1\"" Nov 12 22:42:11.707312 containerd[1613]: time="2024-11-12T22:42:11.707269567Z" level=info msg="RemoveContainer for \"4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1\" returns successfully" Nov 12 22:42:11.707510 kubelet[2828]: I1112 22:42:11.707482 2828 scope.go:117] "RemoveContainer" containerID="e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621" Nov 12 22:42:11.708622 containerd[1613]: time="2024-11-12T22:42:11.708556887Z" level=info msg="RemoveContainer for \"e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621\"" Nov 12 22:42:11.712471 containerd[1613]: time="2024-11-12T22:42:11.712406193Z" level=info msg="RemoveContainer for \"e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621\" returns successfully" Nov 12 22:42:11.712685 kubelet[2828]: I1112 22:42:11.712644 2828 scope.go:117] "RemoveContainer" containerID="1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a" Nov 12 22:42:11.713776 containerd[1613]: time="2024-11-12T22:42:11.713747735Z" level=info msg="RemoveContainer for \"1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a\"" Nov 12 22:42:11.717050 containerd[1613]: time="2024-11-12T22:42:11.717016144Z" level=info msg="RemoveContainer for \"1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a\" returns successfully" Nov 12 22:42:11.717261 kubelet[2828]: I1112 22:42:11.717180 2828 scope.go:117] "RemoveContainer" containerID="45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc" Nov 12 22:42:11.717408 containerd[1613]: time="2024-11-12T22:42:11.717363740Z" level=error msg="ContainerStatus for \"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\": not found" Nov 12 22:42:11.725327 kubelet[2828]: E1112 22:42:11.725294 2828 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\": not found" containerID="45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc" Nov 12 22:42:11.725422 kubelet[2828]: I1112 22:42:11.725408 2828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc"} err="failed to get container status \"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"45098f1fabe73d444b2b86bc83b557c02bdb472686b70596763cd83638dd27cc\": not found" Nov 12 22:42:11.725473 kubelet[2828]: I1112 22:42:11.725426 2828 scope.go:117] "RemoveContainer" containerID="0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f" Nov 12 22:42:11.725693 containerd[1613]: time="2024-11-12T22:42:11.725650051Z" level=error msg="ContainerStatus for \"0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f\": not found" Nov 12 22:42:11.725821 kubelet[2828]: E1112 22:42:11.725799 2828 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f\": not found" containerID="0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f" Nov 12 22:42:11.725883 kubelet[2828]: I1112 22:42:11.725839 2828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f"} err="failed to get container status \"0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b40c3e509645bc4da67746a589d71b6171b0302083eb7d095fedcad2311e87f\": not found" Nov 12 22:42:11.725883 kubelet[2828]: I1112 22:42:11.725853 2828 scope.go:117] "RemoveContainer" containerID="4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1" Nov 12 22:42:11.726055 containerd[1613]: time="2024-11-12T22:42:11.726020400Z" level=error msg="ContainerStatus for \"4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1\": not found" Nov 12 22:42:11.726172 kubelet[2828]: E1112 22:42:11.726148 2828 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1\": not found" containerID="4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1" Nov 12 22:42:11.726226 kubelet[2828]: I1112 22:42:11.726180 2828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1"} err="failed to get container status 
\"4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1\": rpc error: code = NotFound desc = an error occurred when try to find container \"4cdaf46c6dc777782ee91447a8a41061b8a0ecd5a4b8ed5effd054f0ffcf8fa1\": not found" Nov 12 22:42:11.726226 kubelet[2828]: I1112 22:42:11.726191 2828 scope.go:117] "RemoveContainer" containerID="e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621" Nov 12 22:42:11.726499 containerd[1613]: time="2024-11-12T22:42:11.726440994Z" level=error msg="ContainerStatus for \"e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621\": not found" Nov 12 22:42:11.726670 kubelet[2828]: E1112 22:42:11.726638 2828 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621\": not found" containerID="e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621" Nov 12 22:42:11.726725 kubelet[2828]: I1112 22:42:11.726689 2828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621"} err="failed to get container status \"e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5b81c6acb22a1b28804aaf182ad312454fe5377108d2cf9fcb502588be90621\": not found" Nov 12 22:42:11.726725 kubelet[2828]: I1112 22:42:11.726710 2828 scope.go:117] "RemoveContainer" containerID="1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a" Nov 12 22:42:11.726970 containerd[1613]: time="2024-11-12T22:42:11.726935077Z" level=error msg="ContainerStatus for \"1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a\": not found" Nov 12 22:42:11.727222 kubelet[2828]: E1112 22:42:11.727195 2828 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a\": not found" containerID="1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a" Nov 12 22:42:11.727222 kubelet[2828]: I1112 22:42:11.727228 2828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a"} err="failed to get container status \"1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"1bd14f88ca6781599728ed7047266cdc73dc1d68436973ecac417b2554f34f9a\": not found" Nov 12 22:42:11.727427 kubelet[2828]: I1112 22:42:11.727240 2828 scope.go:117] "RemoveContainer" containerID="c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada" Nov 12 22:42:11.728575 containerd[1613]: time="2024-11-12T22:42:11.728205715Z" level=info msg="RemoveContainer for \"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\"" Nov 12 22:42:11.732114 containerd[1613]: time="2024-11-12T22:42:11.732082723Z" level=info msg="RemoveContainer for 
\"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\" returns successfully" Nov 12 22:42:11.732332 kubelet[2828]: I1112 22:42:11.732234 2828 scope.go:117] "RemoveContainer" containerID="c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada" Nov 12 22:42:11.732432 containerd[1613]: time="2024-11-12T22:42:11.732394351Z" level=error msg="ContainerStatus for \"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\": not found" Nov 12 22:42:11.732552 kubelet[2828]: E1112 22:42:11.732526 2828 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\": not found" containerID="c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada" Nov 12 22:42:11.732593 kubelet[2828]: I1112 22:42:11.732554 2828 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada"} err="failed to get container status \"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9ab82626debb8fa7b0a5d688df7bb7cae60bab88d15283c8434a536ada50ada\": not found" Nov 12 22:42:12.206933 sshd[4550]: Connection closed by 10.0.0.1 port 50032 Nov 12 22:42:12.209916 sshd-session[4544]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:12.237615 systemd[1]: Started sshd@28-10.0.0.16:22-10.0.0.1:50038.service - OpenSSH per-connection server daemon (10.0.0.1:50038). Nov 12 22:42:12.238452 systemd[1]: sshd@27-10.0.0.16:22-10.0.0.1:50032.service: Deactivated successfully. Nov 12 22:42:12.259096 kubelet[2828]: I1112 22:42:12.256938 2828 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ceb3fbe8-0f18-4231-9558-82b9c527fcbf" path="/var/lib/kubelet/pods/ceb3fbe8-0f18-4231-9558-82b9c527fcbf/volumes" Nov 12 22:42:12.259096 kubelet[2828]: I1112 22:42:12.257722 2828 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fa30da42-2631-483e-9b6a-287561cfd681" path="/var/lib/kubelet/pods/fa30da42-2631-483e-9b6a-287561cfd681/volumes" Nov 12 22:42:12.261969 systemd[1]: session-28.scope: Deactivated successfully. Nov 12 22:42:12.265026 systemd-logind[1591]: Session 28 logged out. Waiting for processes to exit. Nov 12 22:42:12.271876 systemd-logind[1591]: Removed session 28. Nov 12 22:42:12.372235 sshd[4715]: Accepted publickey for core from 10.0.0.1 port 50038 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:42:12.376111 sshd-session[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:12.386513 systemd-logind[1591]: New session 29 of user core. Nov 12 22:42:12.395544 systemd[1]: Started session-29.scope - Session 29 of User core. Nov 12 22:42:13.302756 sshd[4723]: Connection closed by 10.0.0.1 port 50038 Nov 12 22:42:13.304141 sshd-session[4715]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:13.331465 systemd[1]: Started sshd@29-10.0.0.16:22-10.0.0.1:50046.service - OpenSSH per-connection server daemon (10.0.0.1:50046). Nov 12 22:42:13.332326 systemd[1]: sshd@28-10.0.0.16:22-10.0.0.1:50038.service: Deactivated successfully. 
Nov 12 22:42:13.358665 systemd[1]: session-29.scope: Deactivated successfully. Nov 12 22:42:13.362441 systemd-logind[1591]: Session 29 logged out. Waiting for processes to exit. Nov 12 22:42:13.364137 systemd-logind[1591]: Removed session 29. Nov 12 22:42:13.399525 kubelet[2828]: I1112 22:42:13.395642 2828 topology_manager.go:215] "Topology Admit Handler" podUID="1da3fc80-47de-465b-803e-4274ba780057" podNamespace="kube-system" podName="cilium-bmwkh" Nov 12 22:42:13.399525 kubelet[2828]: E1112 22:42:13.395783 2828 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fa30da42-2631-483e-9b6a-287561cfd681" containerName="mount-bpf-fs" Nov 12 22:42:13.399525 kubelet[2828]: E1112 22:42:13.395800 2828 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fa30da42-2631-483e-9b6a-287561cfd681" containerName="mount-cgroup" Nov 12 22:42:13.399525 kubelet[2828]: E1112 22:42:13.395811 2828 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fa30da42-2631-483e-9b6a-287561cfd681" containerName="apply-sysctl-overwrites" Nov 12 22:42:13.399525 kubelet[2828]: E1112 22:42:13.395821 2828 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fa30da42-2631-483e-9b6a-287561cfd681" containerName="clean-cilium-state" Nov 12 22:42:13.399525 kubelet[2828]: E1112 22:42:13.395831 2828 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fa30da42-2631-483e-9b6a-287561cfd681" containerName="cilium-agent" Nov 12 22:42:13.399525 kubelet[2828]: E1112 22:42:13.395845 2828 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ceb3fbe8-0f18-4231-9558-82b9c527fcbf" containerName="cilium-operator" Nov 12 22:42:13.399525 kubelet[2828]: I1112 22:42:13.395881 2828 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb3fbe8-0f18-4231-9558-82b9c527fcbf" containerName="cilium-operator" Nov 12 22:42:13.399525 kubelet[2828]: I1112 22:42:13.395891 2828 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa30da42-2631-483e-9b6a-287561cfd681" containerName="cilium-agent" Nov 12 22:42:13.441842 sshd[4731]: Accepted publickey for core from 10.0.0.1 port 50046 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:42:13.451953 sshd-session[4731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:13.468884 systemd-logind[1591]: New session 30 of user core. Nov 12 22:42:13.485607 systemd[1]: Started session-30.scope - Session 30 of User core. 
Nov 12 22:42:13.496352 kubelet[2828]: I1112 22:42:13.492259 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1da3fc80-47de-465b-803e-4274ba780057-clustermesh-secrets\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496352 kubelet[2828]: I1112 22:42:13.492342 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p47qw\" (UniqueName: \"kubernetes.io/projected/1da3fc80-47de-465b-803e-4274ba780057-kube-api-access-p47qw\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496352 kubelet[2828]: I1112 22:42:13.492373 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1da3fc80-47de-465b-803e-4274ba780057-hubble-tls\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496352 kubelet[2828]: I1112 22:42:13.492404 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1da3fc80-47de-465b-803e-4274ba780057-etc-cni-netd\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496352 kubelet[2828]: I1112 22:42:13.493413 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1da3fc80-47de-465b-803e-4274ba780057-xtables-lock\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496352 kubelet[2828]: I1112 22:42:13.493460 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1da3fc80-47de-465b-803e-4274ba780057-cilium-ipsec-secrets\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496704 kubelet[2828]: I1112 22:42:13.493506 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1da3fc80-47de-465b-803e-4274ba780057-cilium-cgroup\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496704 kubelet[2828]: I1112 22:42:13.493537 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1da3fc80-47de-465b-803e-4274ba780057-host-proc-sys-kernel\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496704 kubelet[2828]: I1112 22:42:13.493568 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1da3fc80-47de-465b-803e-4274ba780057-lib-modules\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496704 kubelet[2828]: I1112 22:42:13.495815 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1da3fc80-47de-465b-803e-4274ba780057-cilium-run\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496704 kubelet[2828]: I1112 22:42:13.495965 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1da3fc80-47de-465b-803e-4274ba780057-hostproc\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496704 kubelet[2828]: I1112 22:42:13.496009 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1da3fc80-47de-465b-803e-4274ba780057-host-proc-sys-net\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496887 kubelet[2828]: I1112 22:42:13.496039 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1da3fc80-47de-465b-803e-4274ba780057-bpf-maps\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496887 kubelet[2828]: I1112 22:42:13.496064 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1da3fc80-47de-465b-803e-4274ba780057-cni-path\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.496887 kubelet[2828]: I1112 22:42:13.496089 2828 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1da3fc80-47de-465b-803e-4274ba780057-cilium-config-path\") pod \"cilium-bmwkh\" (UID: \"1da3fc80-47de-465b-803e-4274ba780057\") " pod="kube-system/cilium-bmwkh" Nov 12 22:42:13.585819 sshd[4738]: Connection closed by 10.0.0.1 port 50046 Nov 12 22:42:13.584084 sshd-session[4731]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:13.603669 systemd[1]: Started sshd@30-10.0.0.16:22-10.0.0.1:50050.service - OpenSSH per-connection server daemon (10.0.0.1:50050). Nov 12 22:42:13.662844 systemd[1]: sshd@29-10.0.0.16:22-10.0.0.1:50046.service: Deactivated successfully. Nov 12 22:42:13.670802 systemd[1]: session-30.scope: Deactivated successfully. Nov 12 22:42:13.678601 systemd-logind[1591]: Session 30 logged out. Waiting for processes to exit. Nov 12 22:42:13.682651 systemd-logind[1591]: Removed session 30. Nov 12 22:42:13.732435 kubelet[2828]: E1112 22:42:13.732322 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:13.732983 sshd[4741]: Accepted publickey for core from 10.0.0.1 port 50050 ssh2: RSA SHA256:wlg8ILGVh6TDxoojdC0IwjgTaDXVbHk8WAYDF0osSAA Nov 12 22:42:13.733576 containerd[1613]: time="2024-11-12T22:42:13.733274837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmwkh,Uid:1da3fc80-47de-465b-803e-4274ba780057,Namespace:kube-system,Attempt:0,}" Nov 12 22:42:13.745015 sshd-session[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 22:42:13.767262 systemd-logind[1591]: New session 31 of user core. 
Nov 12 22:42:13.779795 systemd[1]: Started session-31.scope - Session 31 of User core. Nov 12 22:42:13.849462 containerd[1613]: time="2024-11-12T22:42:13.849060574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 22:42:13.849462 containerd[1613]: time="2024-11-12T22:42:13.849156896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 22:42:13.849462 containerd[1613]: time="2024-11-12T22:42:13.849173828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:42:13.852122 containerd[1613]: time="2024-11-12T22:42:13.849303232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 22:42:14.021555 containerd[1613]: time="2024-11-12T22:42:14.021466283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmwkh,Uid:1da3fc80-47de-465b-803e-4274ba780057,Namespace:kube-system,Attempt:0,} returns sandbox id \"deb3bce2f981fe0e684d00d12d7b373082b29639a1ef196166b8174a5a04a500\"" Nov 12 22:42:14.048447 kubelet[2828]: E1112 22:42:14.047556 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:14.070769 containerd[1613]: time="2024-11-12T22:42:14.068459491Z" level=info msg="CreateContainer within sandbox \"deb3bce2f981fe0e684d00d12d7b373082b29639a1ef196166b8174a5a04a500\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 22:42:14.129646 containerd[1613]: time="2024-11-12T22:42:14.126840007Z" level=info msg="CreateContainer within sandbox \"deb3bce2f981fe0e684d00d12d7b373082b29639a1ef196166b8174a5a04a500\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a24d822366141831cb37f4d2caa78941985898374db3b5c5fef84595c841efea\"" Nov 12 22:42:14.131782 containerd[1613]: time="2024-11-12T22:42:14.131691762Z" level=info msg="StartContainer for \"a24d822366141831cb37f4d2caa78941985898374db3b5c5fef84595c841efea\"" Nov 12 22:42:14.257196 containerd[1613]: time="2024-11-12T22:42:14.257106581Z" level=info msg="StartContainer for \"a24d822366141831cb37f4d2caa78941985898374db3b5c5fef84595c841efea\" returns successfully" Nov 12 22:42:14.324704 containerd[1613]: time="2024-11-12T22:42:14.324598769Z" level=info msg="shim disconnected" id=a24d822366141831cb37f4d2caa78941985898374db3b5c5fef84595c841efea namespace=k8s.io Nov 12 22:42:14.324704 containerd[1613]: time="2024-11-12T22:42:14.324696083Z" level=warning msg="cleaning up after shim disconnected" id=a24d822366141831cb37f4d2caa78941985898374db3b5c5fef84595c841efea namespace=k8s.io Nov 12 22:42:14.324704 containerd[1613]: time="2024-11-12T22:42:14.324707003Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:42:14.345502 containerd[1613]: time="2024-11-12T22:42:14.345415898Z" level=warning msg="cleanup warnings time=\"2024-11-12T22:42:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 12 22:42:14.708016 kubelet[2828]: E1112 22:42:14.707986 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 12 22:42:14.710137 containerd[1613]: time="2024-11-12T22:42:14.710097888Z" level=info msg="CreateContainer within sandbox \"deb3bce2f981fe0e684d00d12d7b373082b29639a1ef196166b8174a5a04a500\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 22:42:14.724090 containerd[1613]: time="2024-11-12T22:42:14.724033443Z" level=info msg="CreateContainer within sandbox \"deb3bce2f981fe0e684d00d12d7b373082b29639a1ef196166b8174a5a04a500\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0d4d1e1028dad8f28a71384e8965cd3014413eca157ba7ec9719521f3a9d9783\"" Nov 12 22:42:14.725538 containerd[1613]: time="2024-11-12T22:42:14.725505761Z" level=info msg="StartContainer for \"0d4d1e1028dad8f28a71384e8965cd3014413eca157ba7ec9719521f3a9d9783\"" Nov 12 22:42:14.796217 containerd[1613]: time="2024-11-12T22:42:14.796162921Z" level=info msg="StartContainer for \"0d4d1e1028dad8f28a71384e8965cd3014413eca157ba7ec9719521f3a9d9783\" returns successfully" Nov 12 22:42:14.830566 containerd[1613]: time="2024-11-12T22:42:14.830491243Z" level=info msg="shim disconnected" id=0d4d1e1028dad8f28a71384e8965cd3014413eca157ba7ec9719521f3a9d9783 namespace=k8s.io Nov 12 22:42:14.830566 containerd[1613]: time="2024-11-12T22:42:14.830560865Z" level=warning msg="cleaning up after shim disconnected" id=0d4d1e1028dad8f28a71384e8965cd3014413eca157ba7ec9719521f3a9d9783 namespace=k8s.io Nov 12 22:42:14.830566 containerd[1613]: time="2024-11-12T22:42:14.830572617Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:42:15.610052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d4d1e1028dad8f28a71384e8965cd3014413eca157ba7ec9719521f3a9d9783-rootfs.mount: Deactivated successfully. Nov 12 22:42:15.710776 kubelet[2828]: E1112 22:42:15.710743 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:15.713280 containerd[1613]: time="2024-11-12T22:42:15.713236789Z" level=info msg="CreateContainer within sandbox \"deb3bce2f981fe0e684d00d12d7b373082b29639a1ef196166b8174a5a04a500\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 12 22:42:15.732638 containerd[1613]: time="2024-11-12T22:42:15.732584581Z" level=info msg="CreateContainer within sandbox \"deb3bce2f981fe0e684d00d12d7b373082b29639a1ef196166b8174a5a04a500\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e62fe60e2e6869079690bf9443a6d1db32bf17663cd96e161660f9e5607adfd\"" Nov 12 22:42:15.732759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1531268772.mount: Deactivated successfully. 
Nov 12 22:42:15.734461 containerd[1613]: time="2024-11-12T22:42:15.733418134Z" level=info msg="StartContainer for \"9e62fe60e2e6869079690bf9443a6d1db32bf17663cd96e161660f9e5607adfd\"" Nov 12 22:42:15.805182 containerd[1613]: time="2024-11-12T22:42:15.805127142Z" level=info msg="StartContainer for \"9e62fe60e2e6869079690bf9443a6d1db32bf17663cd96e161660f9e5607adfd\" returns successfully" Nov 12 22:42:15.832582 containerd[1613]: time="2024-11-12T22:42:15.832516687Z" level=info msg="shim disconnected" id=9e62fe60e2e6869079690bf9443a6d1db32bf17663cd96e161660f9e5607adfd namespace=k8s.io Nov 12 22:42:15.832582 containerd[1613]: time="2024-11-12T22:42:15.832580728Z" level=warning msg="cleaning up after shim disconnected" id=9e62fe60e2e6869079690bf9443a6d1db32bf17663cd96e161660f9e5607adfd namespace=k8s.io Nov 12 22:42:15.832582 containerd[1613]: time="2024-11-12T22:42:15.832591658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:42:16.346000 kubelet[2828]: E1112 22:42:16.345960 2828 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 22:42:16.610303 systemd[1]: run-containerd-runc-k8s.io-9e62fe60e2e6869079690bf9443a6d1db32bf17663cd96e161660f9e5607adfd-runc.EsMyWl.mount: Deactivated successfully. Nov 12 22:42:16.610546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e62fe60e2e6869079690bf9443a6d1db32bf17663cd96e161660f9e5607adfd-rootfs.mount: Deactivated successfully. Nov 12 22:42:16.714438 kubelet[2828]: E1112 22:42:16.714404 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:16.716572 containerd[1613]: time="2024-11-12T22:42:16.716038828Z" level=info msg="CreateContainer within sandbox \"deb3bce2f981fe0e684d00d12d7b373082b29639a1ef196166b8174a5a04a500\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 12 22:42:16.754951 containerd[1613]: time="2024-11-12T22:42:16.754130906Z" level=info msg="CreateContainer within sandbox \"deb3bce2f981fe0e684d00d12d7b373082b29639a1ef196166b8174a5a04a500\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8e64ea9b9797603f5528e9fc1bd5fff05adce488190e386dae9aa82647a40aa2\"" Nov 12 22:42:16.756085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167534498.mount: Deactivated successfully. 
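The mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state containers in the entries above are Cilium's init containers: each runs once and exits, which is why containerd logs "shim disconnected" and "cleaning up dead shim" right after every StartContainer. The mount-bpf-fs step that just completed is the one responsible for ensuring a BPF filesystem is mounted at /sys/fs/bpf. Below is a hypothetical stdlib-only Go check for that mount, offered as an illustration rather than Cilium's implementation.

```go
// bpffs_check.go - a small stdlib-only sketch (not Cilium's code) verifying
// what the mount-bpf-fs init container above is responsible for: a filesystem
// of type "bpf" mounted at /sys/fs/bpf.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	mounted := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/self/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
			mounted = true
			break
		}
	}

	if mounted {
		fmt.Println("bpffs is mounted at /sys/fs/bpf")
	} else {
		fmt.Println("bpffs is NOT mounted; Cilium's mount-bpf-fs init container would mount it")
	}
}
```

Once these init steps finish and the cilium-agent container (started further down) is running, the CNI can initialize, which lines up with the lxc_health interface coming up and the "Container runtime network not ready" errors tapering off later in the log.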
Nov 12 22:42:16.759934 containerd[1613]: time="2024-11-12T22:42:16.758561183Z" level=info msg="StartContainer for \"8e64ea9b9797603f5528e9fc1bd5fff05adce488190e386dae9aa82647a40aa2\"" Nov 12 22:42:16.914479 containerd[1613]: time="2024-11-12T22:42:16.914295023Z" level=info msg="StartContainer for \"8e64ea9b9797603f5528e9fc1bd5fff05adce488190e386dae9aa82647a40aa2\" returns successfully" Nov 12 22:42:16.941506 containerd[1613]: time="2024-11-12T22:42:16.941429490Z" level=info msg="shim disconnected" id=8e64ea9b9797603f5528e9fc1bd5fff05adce488190e386dae9aa82647a40aa2 namespace=k8s.io Nov 12 22:42:16.941506 containerd[1613]: time="2024-11-12T22:42:16.941492148Z" level=warning msg="cleaning up after shim disconnected" id=8e64ea9b9797603f5528e9fc1bd5fff05adce488190e386dae9aa82647a40aa2 namespace=k8s.io Nov 12 22:42:16.941506 containerd[1613]: time="2024-11-12T22:42:16.941501296Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 22:42:17.610147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e64ea9b9797603f5528e9fc1bd5fff05adce488190e386dae9aa82647a40aa2-rootfs.mount: Deactivated successfully. Nov 12 22:42:17.718880 kubelet[2828]: E1112 22:42:17.718831 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:17.722159 containerd[1613]: time="2024-11-12T22:42:17.721760556Z" level=info msg="CreateContainer within sandbox \"deb3bce2f981fe0e684d00d12d7b373082b29639a1ef196166b8174a5a04a500\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 22:42:17.739085 containerd[1613]: time="2024-11-12T22:42:17.739034929Z" level=info msg="CreateContainer within sandbox \"deb3bce2f981fe0e684d00d12d7b373082b29639a1ef196166b8174a5a04a500\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d42a801cacb7eb7e9d2281af8be751681b6b5c760db4904b6c0141eb250e3249\"" Nov 12 22:42:17.739552 containerd[1613]: time="2024-11-12T22:42:17.739530574Z" level=info msg="StartContainer for \"d42a801cacb7eb7e9d2281af8be751681b6b5c760db4904b6c0141eb250e3249\"" Nov 12 22:42:17.802608 containerd[1613]: time="2024-11-12T22:42:17.802560181Z" level=info msg="StartContainer for \"d42a801cacb7eb7e9d2281af8be751681b6b5c760db4904b6c0141eb250e3249\" returns successfully" Nov 12 22:42:18.222945 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 12 22:42:18.239668 kubelet[2828]: E1112 22:42:18.239631 2828 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-8njfx" podUID="d60e7dc2-9251-4795-8dd2-5c0f8a44291f" Nov 12 22:42:18.724247 kubelet[2828]: E1112 22:42:18.724211 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:18.757473 kubelet[2828]: I1112 22:42:18.757407 2828 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-12T22:42:18Z","lastTransitionTime":"2024-11-12T22:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 12 22:42:19.734090 
kubelet[2828]: E1112 22:42:19.734055 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:20.240477 kubelet[2828]: E1112 22:42:20.240424 2828 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-8njfx" podUID="d60e7dc2-9251-4795-8dd2-5c0f8a44291f" Nov 12 22:42:21.466111 systemd-networkd[1247]: lxc_health: Link UP Nov 12 22:42:21.472636 systemd-networkd[1247]: lxc_health: Gained carrier Nov 12 22:42:21.736929 kubelet[2828]: E1112 22:42:21.735259 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:21.751525 kubelet[2828]: I1112 22:42:21.751480 2828 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bmwkh" podStartSLOduration=8.751431192 podStartE2EDuration="8.751431192s" podCreationTimestamp="2024-11-12 22:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 22:42:18.735059201 +0000 UTC m=+112.597858661" watchObservedRunningTime="2024-11-12 22:42:21.751431192 +0000 UTC m=+115.614230652" Nov 12 22:42:22.242751 kubelet[2828]: E1112 22:42:22.242237 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:22.731946 kubelet[2828]: E1112 22:42:22.731779 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:23.495131 systemd-networkd[1247]: lxc_health: Gained IPv6LL Nov 12 22:42:23.734643 kubelet[2828]: E1112 22:42:23.734445 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 22:42:26.243183 containerd[1613]: time="2024-11-12T22:42:26.242990084Z" level=info msg="StopPodSandbox for \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\"" Nov 12 22:42:26.243183 containerd[1613]: time="2024-11-12T22:42:26.243109218Z" level=info msg="TearDown network for sandbox \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\" successfully" Nov 12 22:42:26.243183 containerd[1613]: time="2024-11-12T22:42:26.243121952Z" level=info msg="StopPodSandbox for \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\" returns successfully" Nov 12 22:42:26.243760 containerd[1613]: time="2024-11-12T22:42:26.243461752Z" level=info msg="RemovePodSandbox for \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\"" Nov 12 22:42:26.243760 containerd[1613]: time="2024-11-12T22:42:26.243496348Z" level=info msg="Forcibly stopping sandbox \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\"" Nov 12 22:42:26.243760 containerd[1613]: time="2024-11-12T22:42:26.243546332Z" level=info msg="TearDown network for sandbox \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\" successfully" Nov 12 22:42:26.247045 containerd[1613]: 
time="2024-11-12T22:42:26.247018476Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 22:42:26.247141 containerd[1613]: time="2024-11-12T22:42:26.247065745Z" level=info msg="RemovePodSandbox \"1fd4545000157c40e84c8eaef0824770b048f3a568538c7a3b566a2b7a21b288\" returns successfully" Nov 12 22:42:26.247408 containerd[1613]: time="2024-11-12T22:42:26.247369938Z" level=info msg="StopPodSandbox for \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\"" Nov 12 22:42:26.247479 containerd[1613]: time="2024-11-12T22:42:26.247430823Z" level=info msg="TearDown network for sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" successfully" Nov 12 22:42:26.247479 containerd[1613]: time="2024-11-12T22:42:26.247440631Z" level=info msg="StopPodSandbox for \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" returns successfully" Nov 12 22:42:26.247783 containerd[1613]: time="2024-11-12T22:42:26.247673951Z" level=info msg="RemovePodSandbox for \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\"" Nov 12 22:42:26.247783 containerd[1613]: time="2024-11-12T22:42:26.247703156Z" level=info msg="Forcibly stopping sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\"" Nov 12 22:42:26.247783 containerd[1613]: time="2024-11-12T22:42:26.247749273Z" level=info msg="TearDown network for sandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" successfully" Nov 12 22:42:26.251252 containerd[1613]: time="2024-11-12T22:42:26.251188825Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 22:42:26.251322 containerd[1613]: time="2024-11-12T22:42:26.251257585Z" level=info msg="RemovePodSandbox \"e5bcb9900e01dc975ebc61a33ea8698023506640abf11371f920fc325e52e02f\" returns successfully" Nov 12 22:42:28.895482 sshd[4755]: Connection closed by 10.0.0.1 port 50050 Nov 12 22:42:28.896036 sshd-session[4741]: pam_unix(sshd:session): session closed for user core Nov 12 22:42:28.901046 systemd[1]: sshd@30-10.0.0.16:22-10.0.0.1:50050.service: Deactivated successfully. Nov 12 22:42:28.904211 systemd-logind[1591]: Session 31 logged out. Waiting for processes to exit. Nov 12 22:42:28.904214 systemd[1]: session-31.scope: Deactivated successfully. Nov 12 22:42:28.905721 systemd-logind[1591]: Removed session 31.