Nov 12 20:43:43.920963 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024 Nov 12 20:43:43.921010 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:43:43.921026 kernel: BIOS-provided physical RAM map: Nov 12 20:43:43.921047 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 12 20:43:43.921055 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Nov 12 20:43:43.921064 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Nov 12 20:43:43.921074 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Nov 12 20:43:43.921082 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Nov 12 20:43:43.921091 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Nov 12 20:43:43.921099 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Nov 12 20:43:43.921111 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Nov 12 20:43:43.921119 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Nov 12 20:43:43.921128 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Nov 12 20:43:43.921136 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Nov 12 20:43:43.921147 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Nov 12 20:43:43.921157 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Nov 12 20:43:43.921168 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Nov 12 
20:43:43.921178 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Nov 12 20:43:43.921187 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Nov 12 20:43:43.921196 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 12 20:43:43.921205 kernel: NX (Execute Disable) protection: active Nov 12 20:43:43.921214 kernel: APIC: Static calls initialized Nov 12 20:43:43.921223 kernel: efi: EFI v2.7 by EDK II Nov 12 20:43:43.921232 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Nov 12 20:43:43.921241 kernel: SMBIOS 2.8 present. Nov 12 20:43:43.921250 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Nov 12 20:43:43.921259 kernel: Hypervisor detected: KVM Nov 12 20:43:43.921271 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 12 20:43:43.921281 kernel: kvm-clock: using sched offset of 4641487216 cycles Nov 12 20:43:43.921290 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 12 20:43:43.921300 kernel: tsc: Detected 2794.744 MHz processor Nov 12 20:43:43.921310 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 12 20:43:43.921322 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 12 20:43:43.921333 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Nov 12 20:43:43.921343 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 12 20:43:43.921353 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 12 20:43:43.921365 kernel: Using GB pages for direct mapping Nov 12 20:43:43.921374 kernel: Secure boot disabled Nov 12 20:43:43.921384 kernel: ACPI: Early table checksum verification disabled Nov 12 20:43:43.921393 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Nov 12 20:43:43.921407 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Nov 12 20:43:43.921418 kernel: 
ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:43:43.921428 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:43:43.921440 kernel: ACPI: FACS 0x000000009CBDD000 000040 Nov 12 20:43:43.921450 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:43:43.921470 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:43:43.921481 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:43:43.921491 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:43:43.921501 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Nov 12 20:43:43.921511 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Nov 12 20:43:43.921524 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Nov 12 20:43:43.921534 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Nov 12 20:43:43.921544 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Nov 12 20:43:43.921554 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Nov 12 20:43:43.921564 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Nov 12 20:43:43.921573 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Nov 12 20:43:43.921583 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Nov 12 20:43:43.921593 kernel: No NUMA configuration found Nov 12 20:43:43.921603 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Nov 12 20:43:43.921616 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Nov 12 20:43:43.921626 kernel: Zone ranges: Nov 12 20:43:43.921636 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 20:43:43.921646 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Nov 12 
20:43:43.921656 kernel: Normal empty Nov 12 20:43:43.921666 kernel: Movable zone start for each node Nov 12 20:43:43.921675 kernel: Early memory node ranges Nov 12 20:43:43.921685 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 12 20:43:43.921695 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Nov 12 20:43:43.921705 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Nov 12 20:43:43.921718 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Nov 12 20:43:43.921727 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Nov 12 20:43:43.921737 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Nov 12 20:43:43.921747 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Nov 12 20:43:43.921758 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 20:43:43.921767 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 12 20:43:43.921777 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Nov 12 20:43:43.921787 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 20:43:43.921797 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Nov 12 20:43:43.921811 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Nov 12 20:43:43.921821 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Nov 12 20:43:43.921831 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 12 20:43:43.921840 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 12 20:43:43.921851 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 12 20:43:43.921861 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 12 20:43:43.921870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 12 20:43:43.921880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 20:43:43.921891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 12 20:43:43.921904 kernel: ACPI: INT_SRC_OVR (bus 0 
bus_irq 11 global_irq 11 high level) Nov 12 20:43:43.921914 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 20:43:43.921924 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 12 20:43:43.921933 kernel: TSC deadline timer available Nov 12 20:43:43.921943 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Nov 12 20:43:43.921953 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 12 20:43:43.921963 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 12 20:43:43.921974 kernel: kvm-guest: setup PV sched yield Nov 12 20:43:43.921996 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Nov 12 20:43:43.922006 kernel: Booting paravirtualized kernel on KVM Nov 12 20:43:43.922020 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 20:43:43.922029 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 12 20:43:43.922039 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Nov 12 20:43:43.922048 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Nov 12 20:43:43.922057 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 12 20:43:43.922066 kernel: kvm-guest: PV spinlocks enabled Nov 12 20:43:43.922076 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 12 20:43:43.922087 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:43:43.922100 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Nov 12 20:43:43.922109 kernel: random: crng init done Nov 12 20:43:43.922118 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 20:43:43.922128 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 20:43:43.922138 kernel: Fallback order for Node 0: 0 Nov 12 20:43:43.922148 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Nov 12 20:43:43.922158 kernel: Policy zone: DMA32 Nov 12 20:43:43.922168 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 20:43:43.922178 kernel: Memory: 2395612K/2567000K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 171128K reserved, 0K cma-reserved) Nov 12 20:43:43.922191 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 12 20:43:43.922201 kernel: ftrace: allocating 37799 entries in 148 pages Nov 12 20:43:43.922211 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 20:43:43.922222 kernel: Dynamic Preempt: voluntary Nov 12 20:43:43.922241 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 20:43:43.922256 kernel: rcu: RCU event tracing is enabled. Nov 12 20:43:43.922266 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 12 20:43:43.922277 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 20:43:43.922288 kernel: Rude variant of Tasks RCU enabled. Nov 12 20:43:43.922299 kernel: Tracing variant of Tasks RCU enabled. Nov 12 20:43:43.922309 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 20:43:43.922319 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 12 20:43:43.922333 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 12 20:43:43.922344 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 12 20:43:43.922354 kernel: Console: colour dummy device 80x25 Nov 12 20:43:43.922365 kernel: printk: console [ttyS0] enabled Nov 12 20:43:43.922375 kernel: ACPI: Core revision 20230628 Nov 12 20:43:43.922389 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 12 20:43:43.922400 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 20:43:43.922410 kernel: x2apic enabled Nov 12 20:43:43.922421 kernel: APIC: Switched APIC routing to: physical x2apic Nov 12 20:43:43.922432 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 12 20:43:43.922443 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 12 20:43:43.922453 kernel: kvm-guest: setup PV IPIs Nov 12 20:43:43.922473 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 12 20:43:43.922484 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 12 20:43:43.922498 kernel: Calibrating delay loop (skipped) preset value.. 
5589.48 BogoMIPS (lpj=2794744) Nov 12 20:43:43.922508 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 12 20:43:43.922519 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 12 20:43:43.922530 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 12 20:43:43.922540 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 20:43:43.922551 kernel: Spectre V2 : Mitigation: Retpolines Nov 12 20:43:43.922562 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 20:43:43.922572 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 12 20:43:43.922586 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 12 20:43:43.922596 kernel: RETBleed: Mitigation: untrained return thunk Nov 12 20:43:43.922607 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 12 20:43:43.922618 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 12 20:43:43.922628 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 12 20:43:43.922640 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 12 20:43:43.922650 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 12 20:43:43.922661 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 20:43:43.922671 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 20:43:43.922685 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 20:43:43.922695 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 20:43:43.922706 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Nov 12 20:43:43.922717 kernel: Freeing SMP alternatives memory: 32K Nov 12 20:43:43.922727 kernel: pid_max: default: 32768 minimum: 301 Nov 12 20:43:43.922738 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 20:43:43.922748 kernel: landlock: Up and running. Nov 12 20:43:43.922759 kernel: SELinux: Initializing. Nov 12 20:43:43.922769 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 20:43:43.922783 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 20:43:43.922793 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 12 20:43:43.922804 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 20:43:43.922815 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 20:43:43.922825 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 12 20:43:43.922836 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 12 20:43:43.922846 kernel: ... version: 0 Nov 12 20:43:43.922857 kernel: ... bit width: 48 Nov 12 20:43:43.922870 kernel: ... generic registers: 6 Nov 12 20:43:43.922880 kernel: ... value mask: 0000ffffffffffff Nov 12 20:43:43.922891 kernel: ... max period: 00007fffffffffff Nov 12 20:43:43.922901 kernel: ... fixed-purpose events: 0 Nov 12 20:43:43.922912 kernel: ... event mask: 000000000000003f Nov 12 20:43:43.922922 kernel: signal: max sigframe size: 1776 Nov 12 20:43:43.922933 kernel: rcu: Hierarchical SRCU implementation. Nov 12 20:43:43.922943 kernel: rcu: Max phase no-delay instances is 400. Nov 12 20:43:43.922954 kernel: smp: Bringing up secondary CPUs ... Nov 12 20:43:43.922964 kernel: smpboot: x86: Booting SMP configuration: Nov 12 20:43:43.922977 kernel: .... 
node #0, CPUs: #1 #2 #3 Nov 12 20:43:43.922999 kernel: smp: Brought up 1 node, 4 CPUs Nov 12 20:43:43.923010 kernel: smpboot: Max logical packages: 1 Nov 12 20:43:43.923021 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS) Nov 12 20:43:43.923031 kernel: devtmpfs: initialized Nov 12 20:43:43.923042 kernel: x86/mm: Memory block size: 128MB Nov 12 20:43:43.923052 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Nov 12 20:43:43.923063 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Nov 12 20:43:43.923074 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Nov 12 20:43:43.923088 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Nov 12 20:43:43.923099 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Nov 12 20:43:43.923109 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 20:43:43.923120 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 12 20:43:43.923131 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 20:43:43.923141 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 20:43:43.923152 kernel: audit: initializing netlink subsys (disabled) Nov 12 20:43:43.923162 kernel: audit: type=2000 audit(1731444223.160:1): state=initialized audit_enabled=0 res=1 Nov 12 20:43:43.923173 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 20:43:43.923186 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 20:43:43.923197 kernel: cpuidle: using governor menu Nov 12 20:43:43.923207 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 20:43:43.923218 kernel: dca service started, version 1.12.1 Nov 12 20:43:43.923228 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 12 
20:43:43.923239 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 12 20:43:43.923250 kernel: PCI: Using configuration type 1 for base access Nov 12 20:43:43.923260 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 12 20:43:43.923274 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 20:43:43.923284 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 20:43:43.923295 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 20:43:43.923305 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 20:43:43.923316 kernel: ACPI: Added _OSI(Module Device) Nov 12 20:43:43.923327 kernel: ACPI: Added _OSI(Processor Device) Nov 12 20:43:43.923337 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 20:43:43.923348 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 20:43:43.923358 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 20:43:43.923372 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 12 20:43:43.923383 kernel: ACPI: Interpreter enabled Nov 12 20:43:43.923393 kernel: ACPI: PM: (supports S0 S3 S5) Nov 12 20:43:43.923404 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 20:43:43.923415 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 20:43:43.923425 kernel: PCI: Using E820 reservations for host bridge windows Nov 12 20:43:43.923436 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 12 20:43:43.923447 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 12 20:43:43.923722 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 20:43:43.923914 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 12 20:43:43.924080 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 12 20:43:43.924095 kernel: PCI host 
bridge to bus 0000:00 Nov 12 20:43:43.924246 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 12 20:43:43.924407 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 12 20:43:43.924550 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 12 20:43:43.924686 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Nov 12 20:43:43.924817 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 12 20:43:43.924951 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Nov 12 20:43:43.925125 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 12 20:43:43.925294 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 12 20:43:43.925455 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 12 20:43:43.925610 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Nov 12 20:43:43.925764 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Nov 12 20:43:43.925928 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Nov 12 20:43:43.926122 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Nov 12 20:43:43.926282 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 12 20:43:43.926454 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Nov 12 20:43:43.926630 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Nov 12 20:43:43.926790 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Nov 12 20:43:43.926955 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Nov 12 20:43:43.927148 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Nov 12 20:43:43.927307 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Nov 12 20:43:43.927540 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Nov 12 20:43:43.927700 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 
64bit pref] Nov 12 20:43:43.927862 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 12 20:43:43.928039 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Nov 12 20:43:43.928194 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Nov 12 20:43:43.928348 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Nov 12 20:43:43.928516 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Nov 12 20:43:43.928681 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 12 20:43:43.928832 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 12 20:43:43.929009 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 12 20:43:43.929174 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Nov 12 20:43:43.929328 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Nov 12 20:43:43.929502 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 12 20:43:43.929655 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Nov 12 20:43:43.929670 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 12 20:43:43.929681 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 12 20:43:43.929691 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 12 20:43:43.929701 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 12 20:43:43.929715 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 12 20:43:43.929726 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 12 20:43:43.929736 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 12 20:43:43.929747 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 12 20:43:43.929758 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 12 20:43:43.929768 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 12 20:43:43.929778 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 12 20:43:43.929789 
kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 12 20:43:43.929799 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 12 20:43:43.929813 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 12 20:43:43.929823 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 12 20:43:43.929834 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 12 20:43:43.929845 kernel: iommu: Default domain type: Translated Nov 12 20:43:43.929855 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 12 20:43:43.929866 kernel: efivars: Registered efivars operations Nov 12 20:43:43.929876 kernel: PCI: Using ACPI for IRQ routing Nov 12 20:43:43.929887 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 12 20:43:43.929897 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Nov 12 20:43:43.929911 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Nov 12 20:43:43.929922 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Nov 12 20:43:43.929933 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Nov 12 20:43:43.930161 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 12 20:43:43.930314 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 12 20:43:43.930475 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 12 20:43:43.930491 kernel: vgaarb: loaded Nov 12 20:43:43.930502 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 12 20:43:43.930517 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 12 20:43:43.930527 kernel: clocksource: Switched to clocksource kvm-clock Nov 12 20:43:43.930537 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 20:43:43.930548 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 20:43:43.930558 kernel: pnp: PnP ACPI init Nov 12 20:43:43.930733 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 12 20:43:43.930751 kernel: pnp: PnP ACPI: 
found 6 devices Nov 12 20:43:43.930762 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 12 20:43:43.930772 kernel: NET: Registered PF_INET protocol family Nov 12 20:43:43.930786 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 20:43:43.930797 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 12 20:43:43.930807 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 20:43:43.930818 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 12 20:43:43.930828 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 12 20:43:43.930839 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 12 20:43:43.930849 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 20:43:43.930860 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 20:43:43.930873 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 20:43:43.930883 kernel: NET: Registered PF_XDP protocol family Nov 12 20:43:43.931048 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Nov 12 20:43:43.931191 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Nov 12 20:43:43.931309 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 12 20:43:43.931496 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 12 20:43:43.931641 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 12 20:43:43.931782 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Nov 12 20:43:43.931930 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 12 20:43:43.932084 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Nov 12 20:43:43.932099 kernel: PCI: CLS 0 bytes, default 64 Nov 12 
20:43:43.932109 kernel: Initialise system trusted keyrings Nov 12 20:43:43.932119 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 12 20:43:43.932128 kernel: Key type asymmetric registered Nov 12 20:43:43.932138 kernel: Asymmetric key parser 'x509' registered Nov 12 20:43:43.932148 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 12 20:43:43.932157 kernel: io scheduler mq-deadline registered Nov 12 20:43:43.932172 kernel: io scheduler kyber registered Nov 12 20:43:43.932181 kernel: io scheduler bfq registered Nov 12 20:43:43.932191 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 12 20:43:43.932201 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 12 20:43:43.932211 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 12 20:43:43.932220 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 12 20:43:43.932230 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 20:43:43.932240 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 20:43:43.932249 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 12 20:43:43.932262 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 12 20:43:43.932271 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 12 20:43:43.932281 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 12 20:43:43.932437 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 12 20:43:43.932579 kernel: rtc_cmos 00:04: registered as rtc0 Nov 12 20:43:43.932783 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T20:43:43 UTC (1731444223) Nov 12 20:43:43.932952 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 12 20:43:43.932968 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 12 20:43:43.932997 kernel: efifb: probing for efifb Nov 12 20:43:43.933009 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Nov 12 
20:43:43.933019 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Nov 12 20:43:43.933029 kernel: efifb: scrolling: redraw Nov 12 20:43:43.933040 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Nov 12 20:43:43.933050 kernel: Console: switching to colour frame buffer device 100x37 Nov 12 20:43:43.933083 kernel: fb0: EFI VGA frame buffer device Nov 12 20:43:43.933096 kernel: pstore: Using crash dump compression: deflate Nov 12 20:43:43.933107 kernel: pstore: Registered efi_pstore as persistent store backend Nov 12 20:43:43.933120 kernel: NET: Registered PF_INET6 protocol family Nov 12 20:43:43.933133 kernel: Segment Routing with IPv6 Nov 12 20:43:43.933144 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 20:43:43.933155 kernel: NET: Registered PF_PACKET protocol family Nov 12 20:43:43.933166 kernel: Key type dns_resolver registered Nov 12 20:43:43.933176 kernel: IPI shorthand broadcast: enabled Nov 12 20:43:43.933187 kernel: sched_clock: Marking stable (979004064, 118119355)->(1252580355, -155456936) Nov 12 20:43:43.933198 kernel: registered taskstats version 1 Nov 12 20:43:43.933209 kernel: Loading compiled-in X.509 certificates Nov 12 20:43:43.933224 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a' Nov 12 20:43:43.933238 kernel: Key type .fscrypt registered Nov 12 20:43:43.933248 kernel: Key type fscrypt-provisioning registered Nov 12 20:43:43.933258 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 12 20:43:43.933268 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:43:43.933278 kernel: ima: No architecture policies found
Nov 12 20:43:43.933287 kernel: clk: Disabling unused clocks
Nov 12 20:43:43.933297 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:43:43.933307 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:43:43.933320 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:43:43.933330 kernel: Run /init as init process
Nov 12 20:43:43.933339 kernel: with arguments:
Nov 12 20:43:43.933349 kernel: /init
Nov 12 20:43:43.933359 kernel: with environment:
Nov 12 20:43:43.933368 kernel: HOME=/
Nov 12 20:43:43.933378 kernel: TERM=linux
Nov 12 20:43:43.933388 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:43:43.933400 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:43:43.933415 systemd[1]: Detected virtualization kvm.
Nov 12 20:43:43.933426 systemd[1]: Detected architecture x86-64.
Nov 12 20:43:43.933437 systemd[1]: Running in initrd.
Nov 12 20:43:43.933449 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:43:43.933471 systemd[1]: Hostname set to .
Nov 12 20:43:43.933482 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:43:43.933492 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:43:43.933503 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:43:43.933513 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:43:43.933525 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:43:43.933535 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:43:43.933546 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:43:43.933560 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:43:43.933572 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:43:43.933583 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:43:43.933594 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:43:43.933604 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:43:43.933614 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:43:43.933627 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:43:43.933638 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:43:43.933648 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:43:43.933658 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:43:43.933669 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:43:43.933680 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:43:43.933690 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:43:43.933700 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:43:43.933711 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:43:43.933725 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:43:43.933735 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:43:43.933746 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:43:43.933757 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:43:43.933767 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:43:43.933778 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:43:43.933788 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:43:43.933799 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:43:43.933809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:43:43.933822 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:43:43.933833 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:43:43.933843 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:43:43.933854 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:43:43.933891 systemd-journald[192]: Collecting audit messages is disabled.
Nov 12 20:43:43.933916 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:43:43.933927 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:43:43.933938 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:43:43.933951 systemd-journald[192]: Journal started
Nov 12 20:43:43.933972 systemd-journald[192]: Runtime Journal (/run/log/journal/b5d7986177f545529b4513a5ef86dd00) is 6.0M, max 48.3M, 42.2M free.
Nov 12 20:43:43.937040 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:43:43.938017 systemd-modules-load[194]: Inserted module 'overlay'
Nov 12 20:43:43.941949 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:43:43.945686 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:43:43.946623 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:43:43.961944 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:43:43.968185 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:43:43.974010 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:43:43.975950 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 12 20:43:43.976394 kernel: Bridge firewalling registered
Nov 12 20:43:43.977100 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:43:43.977698 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:43:43.980646 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:43:43.994496 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:43:43.996489 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:43:43.999676 dracut-cmdline[221]: dracut-dracut-053
Nov 12 20:43:44.003021 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:43:44.035638 systemd-resolved[237]: Positive Trust Anchors:
Nov 12 20:43:44.035673 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:43:44.035712 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:43:44.038543 systemd-resolved[237]: Defaulting to hostname 'linux'.
Nov 12 20:43:44.039658 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:43:44.049870 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:43:44.129014 kernel: SCSI subsystem initialized
Nov 12 20:43:44.139016 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:43:44.149018 kernel: iscsi: registered transport (tcp)
Nov 12 20:43:44.171009 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:43:44.171042 kernel: QLogic iSCSI HBA Driver
Nov 12 20:43:44.226866 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:43:44.238139 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:43:44.263032 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:43:44.263087 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:43:44.263098 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:43:44.306029 kernel: raid6: avx2x4 gen() 29462 MB/s
Nov 12 20:43:44.327010 kernel: raid6: avx2x2 gen() 29950 MB/s
Nov 12 20:43:44.344103 kernel: raid6: avx2x1 gen() 25986 MB/s
Nov 12 20:43:44.344141 kernel: raid6: using algorithm avx2x2 gen() 29950 MB/s
Nov 12 20:43:44.362268 kernel: raid6: .... xor() 17572 MB/s, rmw enabled
Nov 12 20:43:44.362365 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:43:44.388037 kernel: xor: automatically using best checksumming function avx
Nov 12 20:43:44.565042 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:43:44.581467 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:43:44.603211 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:43:44.615238 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Nov 12 20:43:44.620488 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:43:44.630130 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:43:44.643122 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Nov 12 20:43:44.674115 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:43:44.685123 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:43:44.749450 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:43:44.765161 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:43:44.775707 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:43:44.793881 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:43:44.797115 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:43:44.799824 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:43:44.804007 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 12 20:43:44.836488 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 20:43:44.836682 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:43:44.836699 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:43:44.836714 kernel: GPT:9289727 != 19775487
Nov 12 20:43:44.836728 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:43:44.836742 kernel: GPT:9289727 != 19775487
Nov 12 20:43:44.836755 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:43:44.836776 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:43:44.811151 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:43:44.823373 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:43:44.844265 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:43:44.844284 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:43:44.823549 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:43:44.836280 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:43:44.837697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:43:44.837947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:43:44.840533 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:43:44.857175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:43:44.869472 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (462)
Nov 12 20:43:44.865141 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:43:44.879215 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (478)
Nov 12 20:43:44.881010 kernel: libata version 3.00 loaded.
Nov 12 20:43:44.884882 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 20:43:44.953054 kernel: ahci 0000:00:1f.2: version 3.0
Nov 12 20:43:44.969194 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 12 20:43:44.969222 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 12 20:43:44.969417 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 12 20:43:44.969609 kernel: scsi host0: ahci
Nov 12 20:43:44.969973 kernel: scsi host1: ahci
Nov 12 20:43:44.970782 kernel: scsi host2: ahci
Nov 12 20:43:44.970965 kernel: scsi host3: ahci
Nov 12 20:43:44.971165 kernel: scsi host4: ahci
Nov 12 20:43:44.971403 kernel: scsi host5: ahci
Nov 12 20:43:44.971727 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Nov 12 20:43:44.971745 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Nov 12 20:43:44.971756 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Nov 12 20:43:44.971769 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Nov 12 20:43:44.971779 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Nov 12 20:43:44.971789 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Nov 12 20:43:44.968829 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 20:43:44.980185 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 20:43:44.980684 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 20:43:44.987359 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:43:44.999206 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:43:45.000584 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:43:45.000702 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:43:45.004136 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:43:45.008178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:43:45.031044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:43:45.047289 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:43:45.145756 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:43:45.297017 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 12 20:43:45.297074 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 12 20:43:45.298019 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 12 20:43:45.299043 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 12 20:43:45.300036 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 12 20:43:45.301013 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 12 20:43:45.302502 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 12 20:43:45.302529 kernel: ata3.00: applying bridge limits
Nov 12 20:43:45.303152 kernel: ata3.00: configured for UDMA/100
Nov 12 20:43:45.304010 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 12 20:43:45.309018 disk-uuid[568]: Primary Header is updated.
Nov 12 20:43:45.309018 disk-uuid[568]: Secondary Entries is updated.
Nov 12 20:43:45.309018 disk-uuid[568]: Secondary Header is updated.
Nov 12 20:43:45.313156 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:43:45.317019 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:43:45.364035 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 12 20:43:45.380155 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 20:43:45.380172 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 12 20:43:46.334964 disk-uuid[584]: The operation has completed successfully.
Nov 12 20:43:46.336649 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:43:46.365919 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:43:46.366081 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:43:46.396202 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:43:46.408803 sh[600]: Success
Nov 12 20:43:46.434027 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 12 20:43:46.469089 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:43:46.497037 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:43:46.502585 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:43:46.511188 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:43:46.511228 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:43:46.511240 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:43:46.512212 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:43:46.513546 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:43:46.517646 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:43:46.518542 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:43:46.532265 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:43:46.533715 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:43:46.548654 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:43:46.548702 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:43:46.548715 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:43:46.552049 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:43:46.563284 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:43:46.565237 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:43:46.647615 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:43:46.654274 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:43:46.656878 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:43:46.660416 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:43:46.681088 systemd-networkd[779]: lo: Link UP
Nov 12 20:43:46.681099 systemd-networkd[779]: lo: Gained carrier
Nov 12 20:43:46.683022 systemd-networkd[779]: Enumeration completed
Nov 12 20:43:46.683176 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:43:46.683587 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:43:46.683592 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:43:46.684724 systemd-networkd[779]: eth0: Link UP
Nov 12 20:43:46.684729 systemd-networkd[779]: eth0: Gained carrier
Nov 12 20:43:46.684736 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:43:46.684812 systemd[1]: Reached target network.target - Network.
Nov 12 20:43:46.699064 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 20:43:46.726694 ignition[781]: Ignition 2.19.0
Nov 12 20:43:46.726706 ignition[781]: Stage: fetch-offline
Nov 12 20:43:46.726749 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:43:46.726760 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:43:46.726880 ignition[781]: parsed url from cmdline: ""
Nov 12 20:43:46.726885 ignition[781]: no config URL provided
Nov 12 20:43:46.726892 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:43:46.726902 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:43:46.726935 ignition[781]: op(1): [started] loading QEMU firmware config module
Nov 12 20:43:46.726941 ignition[781]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 20:43:46.733375 ignition[781]: op(1): [finished] loading QEMU firmware config module
Nov 12 20:43:46.776480 ignition[781]: parsing config with SHA512: 0113ab1bd648b386c38da045c53244fc890bced9bcd5201d485b8d005dcac0b0ae606b17718fe26e0aa98cd37d0df9c7c08fccd03d93ec1fefc75660ad0ea94d
Nov 12 20:43:46.780935 unknown[781]: fetched base config from "system"
Nov 12 20:43:46.780956 unknown[781]: fetched user config from "qemu"
Nov 12 20:43:46.781927 ignition[781]: fetch-offline: fetch-offline passed
Nov 12 20:43:46.782103 ignition[781]: Ignition finished successfully
Nov 12 20:43:46.784039 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:43:46.786185 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 20:43:46.801259 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:43:46.818266 ignition[792]: Ignition 2.19.0
Nov 12 20:43:46.818282 ignition[792]: Stage: kargs
Nov 12 20:43:46.818516 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:43:46.818531 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:43:46.819621 ignition[792]: kargs: kargs passed
Nov 12 20:43:46.819681 ignition[792]: Ignition finished successfully
Nov 12 20:43:46.823443 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:43:46.834199 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:43:46.850089 ignition[800]: Ignition 2.19.0
Nov 12 20:43:46.850101 ignition[800]: Stage: disks
Nov 12 20:43:46.850274 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:43:46.850285 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:43:46.851303 ignition[800]: disks: disks passed
Nov 12 20:43:46.854130 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:43:46.851371 ignition[800]: Ignition finished successfully
Nov 12 20:43:46.855730 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:43:46.857610 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:43:46.858279 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:43:46.858689 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:43:46.858886 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:43:46.869195 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:43:46.881351 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 20:43:46.888915 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:43:46.904126 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:43:46.955332 systemd-resolved[237]: Detected conflict on linux IN A 10.0.0.51
Nov 12 20:43:46.955349 systemd-resolved[237]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Nov 12 20:43:46.989012 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:43:46.989029 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:43:46.990187 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:43:47.002148 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:43:47.004660 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:43:47.007442 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:43:47.007497 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:43:47.017937 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (818)
Nov 12 20:43:47.017962 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:43:47.017976 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:43:47.018009 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:43:47.018023 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:43:47.009551 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:43:47.019509 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:43:47.022316 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:43:47.035130 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:43:47.067254 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:43:47.072098 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:43:47.076568 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:43:47.081897 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:43:47.174436 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:43:47.188120 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:43:47.191603 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:43:47.196005 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:43:47.218798 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:43:47.220767 ignition[932]: INFO : Ignition 2.19.0
Nov 12 20:43:47.220767 ignition[932]: INFO : Stage: mount
Nov 12 20:43:47.220767 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:43:47.220767 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:43:47.220767 ignition[932]: INFO : mount: mount passed
Nov 12 20:43:47.220767 ignition[932]: INFO : Ignition finished successfully
Nov 12 20:43:47.223399 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:43:47.233097 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:43:47.510568 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:43:47.520381 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:43:47.527020 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (945)
Nov 12 20:43:47.529655 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:43:47.529679 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:43:47.529690 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:43:47.533031 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:43:47.534498 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:43:47.558685 ignition[962]: INFO : Ignition 2.19.0
Nov 12 20:43:47.558685 ignition[962]: INFO : Stage: files
Nov 12 20:43:47.560494 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:43:47.560494 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:43:47.563066 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:43:47.564515 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:43:47.564515 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:43:47.568149 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:43:47.569635 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:43:47.569635 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:43:47.568733 unknown[962]: wrote ssh authorized keys file for user: core
Nov 12 20:43:47.573584 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:43:47.573584 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:43:47.612329 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 20:43:47.741460 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:43:47.743804 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 12 20:43:47.743804 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 12 20:43:48.102964 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 12 20:43:48.365142 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 12 20:43:48.365142 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:43:48.369042 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:43:48.369042 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:43:48.369042 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:43:48.369042 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:43:48.369042 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:43:48.369042 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:43:48.369042 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:43:48.369042 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:43:48.369042 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:43:48.369042 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:43:48.369042 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:43:48.369042 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:43:48.369042 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Nov 12 20:43:48.522195 systemd-networkd[779]: eth0: Gained IPv6LL
Nov 12 20:43:48.875420 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 12 20:43:49.799407 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:43:49.799407 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 12 20:43:49.803768 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:43:49.803768 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:43:49.803768 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 12 20:43:49.803768 ignition[962]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 12 20:43:49.803768 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 20:43:49.803768 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 20:43:49.803768 ignition[962]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 12 20:43:49.803768 ignition[962]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 12 20:43:49.834927 ignition[962]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 20:43:49.843683 ignition[962]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 20:43:49.845516 ignition[962]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 12 20:43:49.845516 ignition[962]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:43:49.848805 ignition[962]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:43:49.850265 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:43:49.852395 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:43:49.854567 ignition[962]: INFO : files: files passed
Nov 12 20:43:49.854567 ignition[962]: INFO : files: Ignition finished successfully
Nov 12
20:43:49.859068 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:43:49.874157 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:43:49.877733 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 20:43:49.881204 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:43:49.882356 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:43:49.888461 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 20:43:49.892258 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:43:49.892258 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:43:49.895745 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:43:49.897698 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:43:49.899755 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:43:49.908125 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 20:43:49.933327 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 20:43:49.933464 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 20:43:49.935885 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:43:49.937963 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 20:43:49.938262 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:43:49.948246 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Nov 12 20:43:49.964303 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:43:49.981259 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:43:49.991938 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:43:49.993431 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:43:49.995692 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:43:49.997777 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:43:49.997914 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:43:50.000082 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:43:50.001963 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:43:50.004452 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:43:50.006888 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:43:50.009335 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:43:50.011894 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:43:50.014421 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:43:50.017148 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:43:50.019568 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:43:50.022183 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:43:50.024830 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:43:50.025001 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:43:50.027581 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:43:50.029664 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:43:50.032161 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:43:50.032363 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:43:50.034864 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:43:50.035015 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:43:50.037656 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 20:43:50.037775 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:43:50.040266 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 20:43:50.042354 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 20:43:50.046092 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:43:50.047662 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 20:43:50.049683 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 20:43:50.052034 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 20:43:50.052198 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:43:50.054155 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 20:43:50.054246 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:43:50.056501 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 20:43:50.056656 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:43:50.059193 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 20:43:50.059349 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 20:43:50.073354 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 20:43:50.075289 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 20:43:50.075476 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:43:50.079324 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 20:43:50.080569 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 20:43:50.080916 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:43:50.083345 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 20:43:50.083496 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:43:50.090839 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 20:43:50.090976 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 20:43:50.095183 ignition[1016]: INFO : Ignition 2.19.0
Nov 12 20:43:50.095183 ignition[1016]: INFO : Stage: umount
Nov 12 20:43:50.095183 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:43:50.095183 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:43:50.095183 ignition[1016]: INFO : umount: umount passed
Nov 12 20:43:50.095183 ignition[1016]: INFO : Ignition finished successfully
Nov 12 20:43:50.097261 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 20:43:50.097405 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 20:43:50.099596 systemd[1]: Stopped target network.target - Network.
Nov 12 20:43:50.101524 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 20:43:50.101594 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 20:43:50.103387 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 20:43:50.103452 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 20:43:50.105198 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 20:43:50.105251 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 20:43:50.107319 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 20:43:50.107376 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 20:43:50.109699 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 20:43:50.112184 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 20:43:50.113040 systemd-networkd[779]: eth0: DHCPv6 lease lost
Nov 12 20:43:50.115879 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 20:43:50.116598 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 20:43:50.116778 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 20:43:50.118622 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 20:43:50.118760 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 20:43:50.123167 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 20:43:50.123242 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:43:50.131243 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 20:43:50.133584 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 20:43:50.133676 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:43:50.136532 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:43:50.136599 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:43:50.138847 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 20:43:50.138906 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:43:50.141426 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 20:43:50.141539 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:43:50.143612 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:43:50.159611 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 20:43:50.159779 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 20:43:50.164946 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 20:43:50.165159 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:43:50.168043 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 20:43:50.168106 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:43:50.170212 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 20:43:50.170266 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:43:50.172370 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 20:43:50.172437 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:43:50.175123 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 20:43:50.175187 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:43:50.176956 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:43:50.177038 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:43:50.189197 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 20:43:50.190541 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 20:43:50.190617 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:43:50.193408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:43:50.193484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:43:50.201445 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 20:43:50.201612 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 20:43:50.772136 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 20:43:50.772318 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 20:43:50.773976 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 20:43:50.775773 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 20:43:50.775847 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 20:43:50.786155 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 20:43:50.795213 systemd[1]: Switching root.
Nov 12 20:43:50.830741 systemd-journald[192]: Journal stopped
Nov 12 20:43:52.910772 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Nov 12 20:43:52.910853 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 20:43:52.910885 kernel: SELinux: policy capability open_perms=1
Nov 12 20:43:52.910901 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 20:43:52.910919 kernel: SELinux: policy capability always_check_network=0
Nov 12 20:43:52.910940 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 20:43:52.910952 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 20:43:52.910963 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 20:43:52.910976 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 20:43:52.911014 kernel: audit: type=1403 audit(1731444231.697:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 20:43:52.911042 systemd[1]: Successfully loaded SELinux policy in 47.440ms.
Nov 12 20:43:52.911066 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.963ms.
Nov 12 20:43:52.911083 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:43:52.911099 systemd[1]: Detected virtualization kvm.
Nov 12 20:43:52.911115 systemd[1]: Detected architecture x86-64.
Nov 12 20:43:52.911131 systemd[1]: Detected first boot.
Nov 12 20:43:52.911143 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:43:52.911155 zram_generator::config[1061]: No configuration found.
Nov 12 20:43:52.911171 systemd[1]: Populated /etc with preset unit settings.
Nov 12 20:43:52.911191 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 20:43:52.911207 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 20:43:52.911224 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 20:43:52.911250 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 20:43:52.911267 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 20:43:52.911286 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 20:43:52.911305 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 20:43:52.911317 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 20:43:52.911329 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 20:43:52.911349 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 20:43:52.911365 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 20:43:52.911377 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:43:52.911389 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:43:52.911401 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 20:43:52.911413 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 20:43:52.911425 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 20:43:52.911437 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:43:52.911452 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 20:43:52.911464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:43:52.911475 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 20:43:52.911487 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 20:43:52.911500 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:43:52.911512 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 20:43:52.911523 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:43:52.911535 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:43:52.911550 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:43:52.911561 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:43:52.911575 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 20:43:52.911592 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 20:43:52.911606 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:43:52.911617 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:43:52.911629 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:43:52.911641 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 20:43:52.911656 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 20:43:52.911674 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 20:43:52.911686 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 20:43:52.911698 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:43:52.911710 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 20:43:52.911722 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 20:43:52.911733 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 20:43:52.911745 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 20:43:52.911757 systemd[1]: Reached target machines.target - Containers.
Nov 12 20:43:52.911769 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 20:43:52.911783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:43:52.911796 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:43:52.911807 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 20:43:52.911819 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:43:52.911831 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:43:52.911845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:43:52.911857 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 20:43:52.911869 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:43:52.911883 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 20:43:52.911896 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 20:43:52.911910 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 20:43:52.911926 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 20:43:52.911939 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 20:43:52.911950 kernel: loop: module loaded
Nov 12 20:43:52.911962 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:43:52.911974 kernel: fuse: init (API version 7.39)
Nov 12 20:43:52.912002 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:43:52.912018 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 20:43:52.912030 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 20:43:52.912042 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:43:52.912054 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 20:43:52.912066 systemd[1]: Stopped verity-setup.service.
Nov 12 20:43:52.912078 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:43:52.912111 systemd-journald[1124]: Collecting audit messages is disabled.
Nov 12 20:43:52.912137 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 20:43:52.912152 systemd-journald[1124]: Journal started Nov 12 20:43:52.912179 systemd-journald[1124]: Runtime Journal (/run/log/journal/b5d7986177f545529b4513a5ef86dd00) is 6.0M, max 48.3M, 42.2M free. Nov 12 20:43:52.284096 systemd[1]: Queued start job for default target multi-user.target. Nov 12 20:43:52.304607 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 20:43:52.305105 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 20:43:52.914026 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:43:52.916616 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 20:43:52.918622 kernel: ACPI: bus type drm_connector registered Nov 12 20:43:52.919223 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 20:43:52.920798 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 20:43:52.922426 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 20:43:52.929137 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 20:43:52.930736 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:43:52.932631 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 20:43:52.932911 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 20:43:52.934820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:43:52.935061 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:43:52.936777 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:43:52.937056 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:43:52.938708 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:43:52.938928 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 12 20:43:52.940742 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 20:43:52.940950 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 20:43:52.942772 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:43:52.942995 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:43:52.944835 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:43:52.946914 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 20:43:52.948869 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 20:43:52.968739 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 20:43:52.989154 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 20:43:52.993198 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 20:43:52.994601 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 20:43:52.994643 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:43:52.997212 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 20:43:52.999722 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 20:43:53.003118 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 20:43:53.004767 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:43:53.024578 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 20:43:53.027168 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Nov 12 20:43:53.028442 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:43:53.030003 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 20:43:53.031517 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:43:53.037091 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:43:53.039870 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 20:43:53.043086 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:43:53.044687 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 20:43:53.046092 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 20:43:53.047751 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 20:43:53.052367 systemd-journald[1124]: Time spent on flushing to /var/log/journal/b5d7986177f545529b4513a5ef86dd00 is 21.299ms for 1002 entries. Nov 12 20:43:53.052367 systemd-journald[1124]: System Journal (/var/log/journal/b5d7986177f545529b4513a5ef86dd00) is 8.0M, max 195.6M, 187.6M free. Nov 12 20:43:53.334643 systemd-journald[1124]: Received client request to flush runtime journal. Nov 12 20:43:53.334703 kernel: loop0: detected capacity change from 0 to 140768 Nov 12 20:43:53.334736 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 20:43:53.334757 kernel: loop1: detected capacity change from 0 to 142488 Nov 12 20:43:53.100776 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 20:43:53.106367 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Nov 12 20:43:53.111079 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 20:43:53.130065 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 12 20:43:53.187665 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:43:53.195893 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 20:43:53.208123 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:43:53.258424 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 20:43:53.261786 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 20:43:53.313637 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Nov 12 20:43:53.313656 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Nov 12 20:43:53.316711 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 20:43:53.323954 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:43:53.337036 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 20:43:53.341021 kernel: loop2: detected capacity change from 0 to 211296 Nov 12 20:43:53.360308 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 20:43:53.361336 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 20:43:53.379017 kernel: loop3: detected capacity change from 0 to 140768 Nov 12 20:43:53.415020 kernel: loop4: detected capacity change from 0 to 142488 Nov 12 20:43:53.430012 kernel: loop5: detected capacity change from 0 to 211296 Nov 12 20:43:53.442937 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 20:43:53.443696 (sd-merge)[1200]: Merged extensions into '/usr'. 
Nov 12 20:43:53.449602 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 20:43:53.449622 systemd[1]: Reloading...
Nov 12 20:43:53.524015 zram_generator::config[1229]: No configuration found.
Nov 12 20:43:53.702529 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:43:53.759263 systemd[1]: Reloading finished in 309 ms.
Nov 12 20:43:53.802446 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 20:43:53.828376 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 20:43:53.838320 systemd[1]: Starting ensure-sysext.service...
Nov 12 20:43:53.840736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:43:53.848970 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)...
Nov 12 20:43:53.849040 systemd[1]: Reloading...
Nov 12 20:43:53.930813 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 20:43:53.931223 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 20:43:53.932202 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 20:43:53.932523 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Nov 12 20:43:53.932604 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Nov 12 20:43:53.977023 zram_generator::config[1288]: No configuration found.
Nov 12 20:43:53.982533 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:43:53.982550 systemd-tmpfiles[1263]: Skipping /boot
Nov 12 20:43:53.999154 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:43:53.999188 systemd-tmpfiles[1263]: Skipping /boot
Nov 12 20:43:54.098466 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:43:54.169848 systemd[1]: Reloading finished in 320 ms.
Nov 12 20:43:54.187819 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 20:43:54.213785 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:43:54.236162 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:43:54.492206 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 20:43:54.523350 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 20:43:54.527392 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:43:54.530162 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 20:43:54.536183 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:43:54.536895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:43:54.538550 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:43:54.593272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:43:54.595892 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:43:54.597140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:43:54.597392 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:43:54.598982 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:43:54.599760 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:43:54.605556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:43:54.605816 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:43:54.607726 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:43:54.607969 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:43:54.612090 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:43:54.612417 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:43:54.626333 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:43:54.653656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:43:54.653908 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:43:54.657010 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 20:43:54.658262 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:43:54.659228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:43:54.659420 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:43:54.667841 systemd[1]: Finished ensure-sysext.service.
Nov 12 20:43:54.671406 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:43:54.671598 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:43:54.681021 augenrules[1361]: No rules
Nov 12 20:43:54.682316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:43:54.685585 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:43:54.690132 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:43:54.692639 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:43:54.695433 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:43:54.698152 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 12 20:43:54.700411 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:43:54.701204 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:43:54.703370 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 20:43:54.705280 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 20:43:54.707498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:43:54.707773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:43:54.709459 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 20:43:54.710965 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:43:54.711165 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:43:54.712691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:43:54.712865 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:43:54.714543 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:43:54.714723 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:43:54.724537 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:43:54.724642 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:43:54.796369 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 20:43:54.803639 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 20:43:54.927698 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 12 20:43:54.929249 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 20:43:54.939522 systemd-resolved[1342]: Positive Trust Anchors:
Nov 12 20:43:54.939535 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:43:54.939567 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:43:54.943052 systemd-resolved[1342]: Defaulting to hostname 'linux'.
Nov 12 20:43:54.944631 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:43:54.956945 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:43:55.015544 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 20:43:55.034396 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:43:55.037137 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 20:43:55.052571 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 20:43:55.058170 systemd-udevd[1386]: Using default interface naming scheme 'v255'.
Nov 12 20:43:55.076565 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:43:55.088276 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:43:55.104937 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 12 20:43:55.124502 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1395)
Nov 12 20:43:55.133018 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1395)
Nov 12 20:43:55.142012 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1397)
Nov 12 20:43:55.190297 systemd-networkd[1396]: lo: Link UP
Nov 12 20:43:55.190309 systemd-networkd[1396]: lo: Gained carrier
Nov 12 20:43:55.192439 systemd-networkd[1396]: Enumeration completed
Nov 12 20:43:55.192549 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:43:55.193031 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:43:55.193041 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:43:55.203917 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 12 20:43:55.202063 systemd-networkd[1396]: eth0: Link UP
Nov 12 20:43:55.202070 systemd-networkd[1396]: eth0: Gained carrier
Nov 12 20:43:55.202095 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:43:55.202782 systemd[1]: Reached target network.target - Network.
Nov 12 20:43:55.209608 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:43:55.211498 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 20:43:55.243133 kernel: ACPI: button: Power Button [PWRF]
Nov 12 20:43:55.245146 systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 20:43:55.246112 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection.
Nov 12 20:43:55.246873 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:43:55.992840 systemd-timesyncd[1369]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 12 20:43:55.992900 systemd-timesyncd[1369]: Initial clock synchronization to Tue 2024-11-12 20:43:55.992717 UTC.
Nov 12 20:43:55.994782 systemd-resolved[1342]: Clock change detected. Flushing caches.
Nov 12 20:43:56.038016 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 20:43:56.042275 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 12 20:43:56.042364 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Nov 12 20:43:56.057746 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 12 20:43:56.057978 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 12 20:43:56.058285 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 12 20:43:56.063195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:43:56.068386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:43:56.068645 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:43:56.111126 kernel: mousedev: PS/2 mouse device common for all mice
Nov 12 20:43:56.120677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:43:56.122829 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 20:43:56.219373 kernel: kvm_amd: TSC scaling supported
Nov 12 20:43:56.219455 kernel: kvm_amd: Nested Virtualization enabled
Nov 12 20:43:56.219502 kernel: kvm_amd: Nested Paging enabled
Nov 12 20:43:56.220709 kernel: kvm_amd: LBR virtualization supported
Nov 12 20:43:56.220732 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 12 20:43:56.221573 kernel: kvm_amd: Virtual GIF supported
Nov 12 20:43:56.245138 kernel: EDAC MC: Ver: 3.0.0
Nov 12 20:43:56.266070 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:43:56.276995 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 20:43:56.301515 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 20:43:56.313459 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:43:56.352765 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 20:43:56.354438 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:43:56.355615 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:43:56.356832 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 20:43:56.358138 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 20:43:56.359659 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 20:43:56.360913 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 20:43:56.362214 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 20:43:56.363514 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 20:43:56.363544 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:43:56.364492 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:43:56.366177 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 20:43:56.369204 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 20:43:56.382372 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 20:43:56.384819 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 20:43:56.386606 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 20:43:56.387957 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:43:56.389089 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:43:56.390242 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:43:56.390271 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:43:56.391411 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 20:43:56.394151 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 20:43:56.398116 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:43:56.398568 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 20:43:56.401336 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 20:43:56.403271 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 20:43:56.406213 jq[1443]: false
Nov 12 20:43:56.406842 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 20:43:56.410223 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 20:43:56.414298 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 20:43:56.421297 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 20:43:56.429310 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 20:43:56.431049 extend-filesystems[1444]: Found loop3
Nov 12 20:43:56.432374 extend-filesystems[1444]: Found loop4
Nov 12 20:43:56.432374 extend-filesystems[1444]: Found loop5
Nov 12 20:43:56.432374 extend-filesystems[1444]: Found sr0
Nov 12 20:43:56.432374 extend-filesystems[1444]: Found vda
Nov 12 20:43:56.432374 extend-filesystems[1444]: Found vda1
Nov 12 20:43:56.432374 extend-filesystems[1444]: Found vda2
Nov 12 20:43:56.432374 extend-filesystems[1444]: Found vda3
Nov 12 20:43:56.432374 extend-filesystems[1444]: Found usr
Nov 12 20:43:56.432374 extend-filesystems[1444]: Found vda4
Nov 12 20:43:56.432374 extend-filesystems[1444]: Found vda6
Nov 12 20:43:56.432374 extend-filesystems[1444]: Found vda7
Nov 12 20:43:56.432374 extend-filesystems[1444]: Found vda9
Nov 12 20:43:56.432374 extend-filesystems[1444]: Checking size of /dev/vda9
Nov 12 20:43:56.431292 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 20:43:56.442992 dbus-daemon[1442]: [system] SELinux support is enabled
Nov 12 20:43:56.463817 extend-filesystems[1444]: Resized partition /dev/vda9
Nov 12 20:43:56.435514 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 20:43:56.468926 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024)
Nov 12 20:43:56.476982 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1400)
Nov 12 20:43:56.477011 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 12 20:43:56.436832 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 20:43:56.477136 update_engine[1457]: I20241112 20:43:56.465695 1457 main.cc:92] Flatcar Update Engine starting
Nov 12 20:43:56.477136 update_engine[1457]: I20241112 20:43:56.470264 1457 update_check_scheduler.cc:74] Next update check in 7m41s
Nov 12 20:43:56.442346 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 20:43:56.486088 jq[1459]: true
Nov 12 20:43:56.446662 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 20:43:56.452633 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 20:43:56.460727 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 20:43:56.460973 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 20:43:56.461394 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 20:43:56.461635 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 20:43:56.484735 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 20:43:56.484977 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 20:43:56.501634 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 20:43:56.531179 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 12 20:43:56.531290 jq[1469]: true
Nov 12 20:43:56.532437 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 12 20:43:56.532437 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 12 20:43:56.532437 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 12 20:43:56.538489 extend-filesystems[1444]: Resized filesystem in /dev/vda9
Nov 12 20:43:56.537178 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 20:43:56.537485 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 20:43:56.547131 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 12 20:43:56.547169 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 12 20:43:56.548782 systemd-logind[1455]: New seat seat0.
Nov 12 20:43:56.549167 tar[1467]: linux-amd64/helm
Nov 12 20:43:56.549314 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 20:43:56.552722 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 20:43:56.557378 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 20:43:56.557582 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 20:43:56.562562 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 20:43:56.562749 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 20:43:56.571619 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 20:43:56.582124 bash[1497]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:43:56.585721 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 20:43:56.594195 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 12 20:43:56.667866 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 20:43:56.750485 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 12 20:43:56.814383 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 12 20:43:56.824495 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 12 20:43:56.834135 systemd[1]: issuegen.service: Deactivated successfully.
Nov 12 20:43:56.834477 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 12 20:43:56.914663 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 12 20:43:56.948677 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 12 20:43:56.981865 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 12 20:43:56.988992 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 12 20:43:56.990596 systemd[1]: Reached target getty.target - Login Prompts.
Nov 12 20:43:57.145031 containerd[1470]: time="2024-11-12T20:43:57.144839303Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 20:43:57.193253 containerd[1470]: time="2024-11-12T20:43:57.193159469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:57.196668 containerd[1470]: time="2024-11-12T20:43:57.196600257Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:43:57.196668 containerd[1470]: time="2024-11-12T20:43:57.196651613Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 20:43:57.196668 containerd[1470]: time="2024-11-12T20:43:57.196671240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 20:43:57.196926 containerd[1470]: time="2024-11-12T20:43:57.196906021Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 20:43:57.196955 containerd[1470]: time="2024-11-12T20:43:57.196929886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:57.197028 containerd[1470]: time="2024-11-12T20:43:57.197005608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:43:57.197048 containerd[1470]: time="2024-11-12T20:43:57.197027309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:57.197321 containerd[1470]: time="2024-11-12T20:43:57.197286575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:43:57.197350 containerd[1470]: time="2024-11-12T20:43:57.197317774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:57.197350 containerd[1470]: time="2024-11-12T20:43:57.197337761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:43:57.197400 containerd[1470]: time="2024-11-12T20:43:57.197351046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:57.197478 containerd[1470]: time="2024-11-12T20:43:57.197460522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:57.197793 containerd[1470]: time="2024-11-12T20:43:57.197764793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:43:57.197970 containerd[1470]: time="2024-11-12T20:43:57.197939841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:43:57.197996 containerd[1470]: time="2024-11-12T20:43:57.197973775Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 20:43:57.198163 containerd[1470]: time="2024-11-12T20:43:57.198138384Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 20:43:57.198221 containerd[1470]: time="2024-11-12T20:43:57.198206552Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 20:43:57.205513 containerd[1470]: time="2024-11-12T20:43:57.205296658Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 20:43:57.205513 containerd[1470]: time="2024-11-12T20:43:57.205403268Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 20:43:57.205513 containerd[1470]: time="2024-11-12T20:43:57.205435549Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 20:43:57.205513 containerd[1470]: time="2024-11-12T20:43:57.205479932Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 20:43:57.205513 containerd[1470]: time="2024-11-12T20:43:57.205511281Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 20:43:57.205770 containerd[1470]: time="2024-11-12T20:43:57.205728258Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 20:43:57.206197 containerd[1470]: time="2024-11-12T20:43:57.206089346Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 20:43:57.206350 containerd[1470]: time="2024-11-12T20:43:57.206314769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 20:43:57.206350 containerd[1470]: time="2024-11-12T20:43:57.206343062Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 20:43:57.206421 containerd[1470]: time="2024-11-12T20:43:57.206359463Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 20:43:57.206421 containerd[1470]: time="2024-11-12T20:43:57.206377507Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 20:43:57.206421 containerd[1470]: time="2024-11-12T20:43:57.206400370Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 20:43:57.206421 containerd[1470]: time="2024-11-12T20:43:57.206420858Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 20:43:57.206524 containerd[1470]: time="2024-11-12T20:43:57.206439914Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 20:43:57.206524 containerd[1470]: time="2024-11-12T20:43:57.206463428Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 20:43:57.206524 containerd[1470]: time="2024-11-12T20:43:57.206479999Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 20:43:57.206524 containerd[1470]: time="2024-11-12T20:43:57.206495017Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 20:43:57.206524 containerd[1470]: time="2024-11-12T20:43:57.206508833Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 20:43:57.206659 containerd[1470]: time="2024-11-12T20:43:57.206539811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206659 containerd[1470]: time="2024-11-12T20:43:57.206558046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206659 containerd[1470]: time="2024-11-12T20:43:57.206574426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206659 containerd[1470]: time="2024-11-12T20:43:57.206591629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206659 containerd[1470]: time="2024-11-12T20:43:57.206607729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206659 containerd[1470]: time="2024-11-12T20:43:57.206623428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206659 containerd[1470]: time="2024-11-12T20:43:57.206646812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206876 containerd[1470]: time="2024-11-12T20:43:57.206663995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206876 containerd[1470]: time="2024-11-12T20:43:57.206683381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206876 containerd[1470]: time="2024-11-12T20:43:57.206702687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206876 containerd[1470]: time="2024-11-12T20:43:57.206717896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206876 containerd[1470]: time="2024-11-12T20:43:57.206749475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206876 containerd[1470]: time="2024-11-12T20:43:57.206770114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206876 containerd[1470]: time="2024-11-12T20:43:57.206799228Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 20:43:57.206876 containerd[1470]: time="2024-11-12T20:43:57.206835436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206876 containerd[1470]: time="2024-11-12T20:43:57.206847569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.206876 containerd[1470]: time="2024-11-12T20:43:57.206858840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 20:43:57.207150 containerd[1470]: time="2024-11-12T20:43:57.206914575Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 12 20:43:57.207150 containerd[1470]: time="2024-11-12T20:43:57.206932248Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 12 20:43:57.207150 containerd[1470]: time="2024-11-12T20:43:57.206942968Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 12 20:43:57.207150 containerd[1470]: time="2024-11-12T20:43:57.206955872Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 12 20:43:57.207150 containerd[1470]: time="2024-11-12T20:43:57.206965330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 12 20:43:57.207150 containerd[1470]: time="2024-11-12T20:43:57.206980579Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..."
type=io.containerd.nri.v1 Nov 12 20:43:57.207150 containerd[1470]: time="2024-11-12T20:43:57.206997170Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:43:57.207150 containerd[1470]: time="2024-11-12T20:43:57.207012178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 20:43:57.207535 containerd[1470]: time="2024-11-12T20:43:57.207425694Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:43:57.207535 containerd[1470]: time="2024-11-12T20:43:57.207533025Z" level=info msg="Connect containerd service" Nov 12 20:43:57.207892 containerd[1470]: time="2024-11-12T20:43:57.207593970Z" level=info msg="using legacy CRI server" Nov 12 20:43:57.207892 containerd[1470]: time="2024-11-12T20:43:57.207601855Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:43:57.207892 containerd[1470]: time="2024-11-12T20:43:57.207774980Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:43:57.209356 containerd[1470]: time="2024-11-12T20:43:57.209298489Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:43:57.210648 containerd[1470]: time="2024-11-12T20:43:57.210241048Z" level=info msg="Start subscribing containerd event" Nov 12 
20:43:57.210705 containerd[1470]: time="2024-11-12T20:43:57.210679611Z" level=info msg="Start recovering state" Nov 12 20:43:57.210823 containerd[1470]: time="2024-11-12T20:43:57.210793064Z" level=info msg="Start event monitor" Nov 12 20:43:57.210867 containerd[1470]: time="2024-11-12T20:43:57.210823892Z" level=info msg="Start snapshots syncer" Nov 12 20:43:57.210917 containerd[1470]: time="2024-11-12T20:43:57.210864468Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:43:57.210917 containerd[1470]: time="2024-11-12T20:43:57.210879526Z" level=info msg="Start streaming server" Nov 12 20:43:57.211091 containerd[1470]: time="2024-11-12T20:43:57.210517046Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:43:57.211235 containerd[1470]: time="2024-11-12T20:43:57.211182635Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:43:57.211323 containerd[1470]: time="2024-11-12T20:43:57.211302500Z" level=info msg="containerd successfully booted in 0.071905s" Nov 12 20:43:57.211746 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:43:57.373803 tar[1467]: linux-amd64/LICENSE Nov 12 20:43:57.373971 tar[1467]: linux-amd64/README.md Nov 12 20:43:57.393197 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 20:43:57.715512 systemd-networkd[1396]: eth0: Gained IPv6LL Nov 12 20:43:57.719478 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 20:43:57.734765 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 20:43:57.748440 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 20:43:57.767638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:43:57.770398 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 20:43:57.793909 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Nov 12 20:43:57.794234 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 20:43:57.796518 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 20:43:57.798971 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 20:43:58.422819 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:43:58.437420 systemd[1]: Started sshd@0-10.0.0.51:22-10.0.0.1:34384.service - OpenSSH per-connection server daemon (10.0.0.1:34384). Nov 12 20:43:58.515546 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 34384 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:43:58.518391 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:43:58.527315 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:43:58.592546 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:43:58.601364 systemd-logind[1455]: New session 1 of user core. Nov 12 20:43:58.616854 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:43:58.676736 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:43:58.682543 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:43:58.817376 systemd[1554]: Queued start job for default target default.target. Nov 12 20:43:58.830892 systemd[1554]: Created slice app.slice - User Application Slice. Nov 12 20:43:58.830931 systemd[1554]: Reached target paths.target - Paths. Nov 12 20:43:58.830950 systemd[1554]: Reached target timers.target - Timers. Nov 12 20:43:58.833078 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:43:58.850805 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Nov 12 20:43:58.850985 systemd[1554]: Reached target sockets.target - Sockets. Nov 12 20:43:58.851009 systemd[1554]: Reached target basic.target - Basic System. Nov 12 20:43:58.851064 systemd[1554]: Reached target default.target - Main User Target. Nov 12 20:43:58.851131 systemd[1554]: Startup finished in 158ms. Nov 12 20:43:58.852873 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:43:58.870474 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:43:59.027382 systemd[1]: Started sshd@1-10.0.0.51:22-10.0.0.1:34386.service - OpenSSH per-connection server daemon (10.0.0.1:34386). Nov 12 20:43:59.067079 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 34386 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:43:59.156441 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:43:59.160917 systemd-logind[1455]: New session 2 of user core. Nov 12 20:43:59.180480 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:43:59.248965 sshd[1565]: pam_unix(sshd:session): session closed for user core Nov 12 20:43:59.260952 systemd[1]: sshd@1-10.0.0.51:22-10.0.0.1:34386.service: Deactivated successfully. Nov 12 20:43:59.262927 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 20:43:59.264561 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. Nov 12 20:43:59.331664 systemd[1]: Started sshd@2-10.0.0.51:22-10.0.0.1:34388.service - OpenSSH per-connection server daemon (10.0.0.1:34388). Nov 12 20:43:59.334746 systemd-logind[1455]: Removed session 2. Nov 12 20:43:59.365426 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 34388 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:43:59.367334 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:43:59.372257 systemd-logind[1455]: New session 3 of user core. 
Nov 12 20:43:59.441496 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:43:59.475054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:43:59.521733 (kubelet)[1580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:43:59.522690 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:43:59.525283 systemd[1]: Startup finished in 1.116s (kernel) + 7.975s (initrd) + 7.127s (userspace) = 16.219s. Nov 12 20:43:59.541643 sshd[1572]: pam_unix(sshd:session): session closed for user core Nov 12 20:43:59.545713 systemd[1]: sshd@2-10.0.0.51:22-10.0.0.1:34388.service: Deactivated successfully. Nov 12 20:43:59.548736 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 20:43:59.550849 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. Nov 12 20:43:59.551824 systemd-logind[1455]: Removed session 3. Nov 12 20:44:00.521350 kubelet[1580]: E1112 20:44:00.521239 1580 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:44:00.526201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:44:00.526422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:44:00.526733 systemd[1]: kubelet.service: Consumed 2.557s CPU time. Nov 12 20:44:09.556024 systemd[1]: Started sshd@3-10.0.0.51:22-10.0.0.1:54208.service - OpenSSH per-connection server daemon (10.0.0.1:54208). 
Nov 12 20:44:09.588681 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 54208 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:44:09.590584 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:44:09.594535 systemd-logind[1455]: New session 4 of user core. Nov 12 20:44:09.604234 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 20:44:09.658500 sshd[1596]: pam_unix(sshd:session): session closed for user core Nov 12 20:44:09.668902 systemd[1]: sshd@3-10.0.0.51:22-10.0.0.1:54208.service: Deactivated successfully. Nov 12 20:44:09.670590 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:44:09.671960 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:44:09.673178 systemd[1]: Started sshd@4-10.0.0.51:22-10.0.0.1:54214.service - OpenSSH per-connection server daemon (10.0.0.1:54214). Nov 12 20:44:09.674133 systemd-logind[1455]: Removed session 4. Nov 12 20:44:09.704988 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 54214 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:44:09.706546 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:44:09.710257 systemd-logind[1455]: New session 5 of user core. Nov 12 20:44:09.721219 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:44:09.770869 sshd[1603]: pam_unix(sshd:session): session closed for user core Nov 12 20:44:09.779625 systemd[1]: sshd@4-10.0.0.51:22-10.0.0.1:54214.service: Deactivated successfully. Nov 12 20:44:09.781238 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:44:09.782765 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:44:09.784008 systemd[1]: Started sshd@5-10.0.0.51:22-10.0.0.1:54228.service - OpenSSH per-connection server daemon (10.0.0.1:54228). Nov 12 20:44:09.784962 systemd-logind[1455]: Removed session 5. 
Nov 12 20:44:09.838766 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 54228 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:44:09.840530 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:44:09.844804 systemd-logind[1455]: New session 6 of user core. Nov 12 20:44:09.854268 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:44:09.912429 sshd[1610]: pam_unix(sshd:session): session closed for user core Nov 12 20:44:09.920706 systemd[1]: sshd@5-10.0.0.51:22-10.0.0.1:54228.service: Deactivated successfully. Nov 12 20:44:09.922718 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:44:09.924632 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:44:09.925834 systemd[1]: Started sshd@6-10.0.0.51:22-10.0.0.1:54232.service - OpenSSH per-connection server daemon (10.0.0.1:54232). Nov 12 20:44:09.926611 systemd-logind[1455]: Removed session 6. Nov 12 20:44:09.960515 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 54232 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:44:09.962086 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:44:09.966880 systemd-logind[1455]: New session 7 of user core. Nov 12 20:44:09.980261 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:44:10.038006 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:44:10.038450 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:44:10.057544 sudo[1620]: pam_unix(sudo:session): session closed for user root Nov 12 20:44:10.059832 sshd[1617]: pam_unix(sshd:session): session closed for user core Nov 12 20:44:10.073993 systemd[1]: sshd@6-10.0.0.51:22-10.0.0.1:54232.service: Deactivated successfully. Nov 12 20:44:10.075640 systemd[1]: session-7.scope: Deactivated successfully. 
Nov 12 20:44:10.077120 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:44:10.087379 systemd[1]: Started sshd@7-10.0.0.51:22-10.0.0.1:54244.service - OpenSSH per-connection server daemon (10.0.0.1:54244). Nov 12 20:44:10.088360 systemd-logind[1455]: Removed session 7. Nov 12 20:44:10.116089 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 54244 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:44:10.118001 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:44:10.122033 systemd-logind[1455]: New session 8 of user core. Nov 12 20:44:10.136306 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 20:44:10.191315 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:44:10.191719 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:44:10.195897 sudo[1629]: pam_unix(sudo:session): session closed for user root Nov 12 20:44:10.202246 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:44:10.202573 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:44:10.221411 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:44:10.223494 auditctl[1632]: No rules Nov 12 20:44:10.223957 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:44:10.224250 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:44:10.226956 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:44:10.261161 augenrules[1650]: No rules Nov 12 20:44:10.263085 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Nov 12 20:44:10.264443 sudo[1628]: pam_unix(sudo:session): session closed for user root Nov 12 20:44:10.266377 sshd[1625]: pam_unix(sshd:session): session closed for user core Nov 12 20:44:10.274652 systemd[1]: sshd@7-10.0.0.51:22-10.0.0.1:54244.service: Deactivated successfully. Nov 12 20:44:10.277314 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:44:10.279379 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:44:10.286384 systemd[1]: Started sshd@8-10.0.0.51:22-10.0.0.1:54254.service - OpenSSH per-connection server daemon (10.0.0.1:54254). Nov 12 20:44:10.287417 systemd-logind[1455]: Removed session 8. Nov 12 20:44:10.314800 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 54254 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:44:10.316458 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:44:10.320642 systemd-logind[1455]: New session 9 of user core. Nov 12 20:44:10.330275 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:44:10.383458 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:44:10.383834 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:44:10.776660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:44:10.783300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:44:10.922426 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:44:10.922557 (dockerd)[1682]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:44:11.125130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:44:11.133363 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:44:11.212808 kubelet[1688]: E1112 20:44:11.212745 1688 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:44:11.220483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:44:11.220774 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:44:11.611786 dockerd[1682]: time="2024-11-12T20:44:11.611546709Z" level=info msg="Starting up" Nov 12 20:44:12.260549 dockerd[1682]: time="2024-11-12T20:44:12.260490818Z" level=info msg="Loading containers: start." Nov 12 20:44:12.579158 kernel: Initializing XFRM netlink socket Nov 12 20:44:12.665714 systemd-networkd[1396]: docker0: Link UP Nov 12 20:44:12.801184 dockerd[1682]: time="2024-11-12T20:44:12.801128519Z" level=info msg="Loading containers: done." Nov 12 20:44:12.815835 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2171976564-merged.mount: Deactivated successfully. 
Nov 12 20:44:12.829632 dockerd[1682]: time="2024-11-12T20:44:12.829569006Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:44:12.829774 dockerd[1682]: time="2024-11-12T20:44:12.829690073Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:44:12.829843 dockerd[1682]: time="2024-11-12T20:44:12.829820137Z" level=info msg="Daemon has completed initialization" Nov 12 20:44:12.880077 dockerd[1682]: time="2024-11-12T20:44:12.879973304Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:44:12.880271 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:44:13.616604 containerd[1470]: time="2024-11-12T20:44:13.616548307Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 20:44:15.310653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623591331.mount: Deactivated successfully. 
Nov 12 20:44:18.515147 containerd[1470]: time="2024-11-12T20:44:18.515053220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:44:18.515798 containerd[1470]: time="2024-11-12T20:44:18.515737003Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799" Nov 12 20:44:18.517546 containerd[1470]: time="2024-11-12T20:44:18.517505973Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:44:18.521769 containerd[1470]: time="2024-11-12T20:44:18.521657014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:44:18.522975 containerd[1470]: time="2024-11-12T20:44:18.522912611Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 4.906316293s" Nov 12 20:44:18.523074 containerd[1470]: time="2024-11-12T20:44:18.522987281Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 20:44:18.557723 containerd[1470]: time="2024-11-12T20:44:18.557678459Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 20:44:21.471267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Nov 12 20:44:21.479402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:44:21.906747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:44:21.914308 (kubelet)[1926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:44:22.999066 kubelet[1926]: E1112 20:44:22.998965 1926 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:44:23.003903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:44:23.004169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:44:23.325076 containerd[1470]: time="2024-11-12T20:44:23.324884742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:44:23.326809 containerd[1470]: time="2024-11-12T20:44:23.326760462Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299" Nov 12 20:44:23.328670 containerd[1470]: time="2024-11-12T20:44:23.328624892Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:44:23.333874 containerd[1470]: time="2024-11-12T20:44:23.333812708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:44:23.335722 containerd[1470]: 
time="2024-11-12T20:44:23.335674242Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 4.777756764s" Nov 12 20:44:23.335790 containerd[1470]: time="2024-11-12T20:44:23.335734455Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 20:44:23.364975 containerd[1470]: time="2024-11-12T20:44:23.364919890Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 20:44:25.034031 containerd[1470]: time="2024-11-12T20:44:25.033939561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:44:25.034895 containerd[1470]: time="2024-11-12T20:44:25.034802521Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660" Nov 12 20:44:25.036046 containerd[1470]: time="2024-11-12T20:44:25.036003365Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:44:25.038867 containerd[1470]: time="2024-11-12T20:44:25.038823037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:44:25.040195 containerd[1470]: time="2024-11-12T20:44:25.040139027Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id 
\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.675160497s"
Nov 12 20:44:25.040195 containerd[1470]: time="2024-11-12T20:44:25.040187728Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\""
Nov 12 20:44:25.064378 containerd[1470]: time="2024-11-12T20:44:25.064326852Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\""
Nov 12 20:44:26.862821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205824000.mount: Deactivated successfully.
Nov 12 20:44:28.162360 containerd[1470]: time="2024-11-12T20:44:28.162265308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:28.197022 containerd[1470]: time="2024-11-12T20:44:28.196926449Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816"
Nov 12 20:44:28.221938 containerd[1470]: time="2024-11-12T20:44:28.221881404Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:28.243757 containerd[1470]: time="2024-11-12T20:44:28.243676910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:28.244388 containerd[1470]: time="2024-11-12T20:44:28.244327822Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 3.179945014s"
Nov 12 20:44:28.244454 containerd[1470]: time="2024-11-12T20:44:28.244393425Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\""
Nov 12 20:44:28.266685 containerd[1470]: time="2024-11-12T20:44:28.266614158Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Nov 12 20:44:29.747938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870042283.mount: Deactivated successfully.
Nov 12 20:44:33.254386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 12 20:44:33.263252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:44:33.436289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:44:33.441154 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:44:34.030169 kubelet[2010]: E1112 20:44:34.030070 2010 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:44:34.035300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:44:34.035550 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:44:35.375209 containerd[1470]: time="2024-11-12T20:44:35.375094031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:35.413965 containerd[1470]: time="2024-11-12T20:44:35.413919646Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Nov 12 20:44:35.463917 containerd[1470]: time="2024-11-12T20:44:35.463856397Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:35.515606 containerd[1470]: time="2024-11-12T20:44:35.515523226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:35.517073 containerd[1470]: time="2024-11-12T20:44:35.517026879Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 7.250367666s"
Nov 12 20:44:35.517153 containerd[1470]: time="2024-11-12T20:44:35.517071233Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Nov 12 20:44:35.537736 containerd[1470]: time="2024-11-12T20:44:35.537695695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Nov 12 20:44:36.844075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3624544328.mount: Deactivated successfully.
Nov 12 20:44:37.071427 containerd[1470]: time="2024-11-12T20:44:37.071350914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:37.103396 containerd[1470]: time="2024-11-12T20:44:37.103231441Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Nov 12 20:44:37.119961 containerd[1470]: time="2024-11-12T20:44:37.119909997Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:37.123719 containerd[1470]: time="2024-11-12T20:44:37.123674443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:37.124375 containerd[1470]: time="2024-11-12T20:44:37.124326216Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.586412674s"
Nov 12 20:44:37.124375 containerd[1470]: time="2024-11-12T20:44:37.124374948Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Nov 12 20:44:37.150184 containerd[1470]: time="2024-11-12T20:44:37.150126391Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Nov 12 20:44:38.297749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2218297599.mount: Deactivated successfully.
Nov 12 20:44:41.335744 update_engine[1457]: I20241112 20:44:41.335575 1457 update_attempter.cc:509] Updating boot flags...
Nov 12 20:44:41.455840 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2098)
Nov 12 20:44:41.580581 containerd[1470]: time="2024-11-12T20:44:41.580500328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:41.582045 containerd[1470]: time="2024-11-12T20:44:41.581960943Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Nov 12 20:44:41.583773 containerd[1470]: time="2024-11-12T20:44:41.583709395Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:41.587567 containerd[1470]: time="2024-11-12T20:44:41.587423219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:44:41.588818 containerd[1470]: time="2024-11-12T20:44:41.588766210Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.438589814s"
Nov 12 20:44:41.588818 containerd[1470]: time="2024-11-12T20:44:41.588812578Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Nov 12 20:44:43.961477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:44:43.975467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:44:44.000300 systemd[1]: Reloading requested from client PID 2183 ('systemctl') (unit session-9.scope)...
Nov 12 20:44:44.000320 systemd[1]: Reloading...
Nov 12 20:44:44.122139 zram_generator::config[2225]: No configuration found.
Nov 12 20:44:44.671399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:44:44.780001 systemd[1]: Reloading finished in 779 ms.
Nov 12 20:44:44.831760 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:44:44.835681 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 20:44:44.835995 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:44:44.844653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:44:44.996619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:44:45.001954 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 20:44:45.074202 kubelet[2272]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:44:45.074202 kubelet[2272]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 20:44:45.074202 kubelet[2272]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:44:45.077917 kubelet[2272]: I1112 20:44:45.077814 2272 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 20:44:45.507836 kubelet[2272]: I1112 20:44:45.507769 2272 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Nov 12 20:44:45.507836 kubelet[2272]: I1112 20:44:45.507814 2272 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 20:44:45.508091 kubelet[2272]: I1112 20:44:45.508062 2272 server.go:919] "Client rotation is on, will bootstrap in background"
Nov 12 20:44:45.526362 kubelet[2272]: E1112 20:44:45.526310 2272 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:45.527034 kubelet[2272]: I1112 20:44:45.527002 2272 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 20:44:45.543192 kubelet[2272]: I1112 20:44:45.543130 2272 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 20:44:45.543476 kubelet[2272]: I1112 20:44:45.543444 2272 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 20:44:45.543700 kubelet[2272]: I1112 20:44:45.543649 2272 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 12 20:44:45.543700 kubelet[2272]: I1112 20:44:45.543687 2272 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 20:44:45.543700 kubelet[2272]: I1112 20:44:45.543697 2272 container_manager_linux.go:301] "Creating device plugin manager"
Nov 12 20:44:45.543957 kubelet[2272]: I1112 20:44:45.543842 2272 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:44:45.543980 kubelet[2272]: I1112 20:44:45.543974 2272 kubelet.go:396] "Attempting to sync node with API server"
Nov 12 20:44:45.543999 kubelet[2272]: I1112 20:44:45.543991 2272 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 20:44:45.544048 kubelet[2272]: I1112 20:44:45.544033 2272 kubelet.go:312] "Adding apiserver pod source"
Nov 12 20:44:45.544073 kubelet[2272]: I1112 20:44:45.544056 2272 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 20:44:45.544923 kubelet[2272]: W1112 20:44:45.544839 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:45.544995 kubelet[2272]: E1112 20:44:45.544934 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:45.544995 kubelet[2272]: W1112 20:44:45.544856 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:45.544995 kubelet[2272]: E1112 20:44:45.544971 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:45.545976 kubelet[2272]: I1112 20:44:45.545889 2272 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 12 20:44:45.549797 kubelet[2272]: I1112 20:44:45.549704 2272 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 20:44:45.551657 kubelet[2272]: W1112 20:44:45.551620 2272 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 12 20:44:45.552734 kubelet[2272]: I1112 20:44:45.552626 2272 server.go:1256] "Started kubelet"
Nov 12 20:44:45.553034 kubelet[2272]: I1112 20:44:45.552978 2272 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 20:44:45.553900 kubelet[2272]: I1112 20:44:45.553325 2272 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 20:44:45.553900 kubelet[2272]: I1112 20:44:45.553550 2272 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 20:44:45.554030 kubelet[2272]: I1112 20:44:45.554011 2272 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 20:44:45.554565 kubelet[2272]: I1112 20:44:45.554520 2272 server.go:461] "Adding debug handlers to kubelet server"
Nov 12 20:44:45.557434 kubelet[2272]: E1112 20:44:45.556866 2272 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 20:44:45.557434 kubelet[2272]: I1112 20:44:45.556921 2272 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 12 20:44:45.557434 kubelet[2272]: I1112 20:44:45.557022 2272 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Nov 12 20:44:45.557609 kubelet[2272]: I1112 20:44:45.557562 2272 reconciler_new.go:29] "Reconciler: start to sync state"
Nov 12 20:44:45.558926 kubelet[2272]: W1112 20:44:45.558015 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:45.558926 kubelet[2272]: E1112 20:44:45.558066 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:45.560111 kubelet[2272]: E1112 20:44:45.560061 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="200ms"
Nov 12 20:44:45.561014 kubelet[2272]: E1112 20:44:45.560676 2272 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1807536774bdf8f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:44:45.552589045 +0000 UTC m=+0.545229510,LastTimestamp:2024-11-12 20:44:45.552589045 +0000 UTC m=+0.545229510,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 12 20:44:45.561454 kubelet[2272]: E1112 20:44:45.561422 2272 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 20:44:45.562197 kubelet[2272]: I1112 20:44:45.562166 2272 factory.go:221] Registration of the containerd container factory successfully
Nov 12 20:44:45.562197 kubelet[2272]: I1112 20:44:45.562183 2272 factory.go:221] Registration of the systemd container factory successfully
Nov 12 20:44:45.562281 kubelet[2272]: I1112 20:44:45.562264 2272 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 20:44:45.580686 kubelet[2272]: I1112 20:44:45.580639 2272 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 20:44:45.582360 kubelet[2272]: I1112 20:44:45.582319 2272 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 20:44:45.582435 kubelet[2272]: I1112 20:44:45.582376 2272 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 20:44:45.582435 kubelet[2272]: I1112 20:44:45.582419 2272 kubelet.go:2329] "Starting kubelet main sync loop"
Nov 12 20:44:45.582509 kubelet[2272]: E1112 20:44:45.582482 2272 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 20:44:45.585675 kubelet[2272]: W1112 20:44:45.585618 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:45.585675 kubelet[2272]: E1112 20:44:45.585666 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:45.597929 kubelet[2272]: I1112 20:44:45.597858 2272 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 20:44:45.597929 kubelet[2272]: I1112 20:44:45.597897 2272 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 20:44:45.597929 kubelet[2272]: I1112 20:44:45.597930 2272 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:44:45.658602 kubelet[2272]: I1112 20:44:45.658559 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 20:44:45.658996 kubelet[2272]: E1112 20:44:45.658963 2272 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost"
Nov 12 20:44:45.683376 kubelet[2272]: E1112 20:44:45.683310 2272 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 20:44:45.761640 kubelet[2272]: E1112 20:44:45.761458 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="400ms"
Nov 12 20:44:45.861245 kubelet[2272]: I1112 20:44:45.861203 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 20:44:45.861665 kubelet[2272]: E1112 20:44:45.861633 2272 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost"
Nov 12 20:44:45.883843 kubelet[2272]: E1112 20:44:45.883756 2272 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 20:44:46.162325 kubelet[2272]: E1112 20:44:46.162259 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="800ms"
Nov 12 20:44:46.263963 kubelet[2272]: I1112 20:44:46.263915 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 20:44:46.264405 kubelet[2272]: E1112 20:44:46.264373 2272 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost"
Nov 12 20:44:46.284582 kubelet[2272]: E1112 20:44:46.284498 2272 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 20:44:46.465588 kubelet[2272]: W1112 20:44:46.465344 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:46.465588 kubelet[2272]: E1112 20:44:46.465423 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:46.558879 kubelet[2272]: I1112 20:44:46.558806 2272 policy_none.go:49] "None policy: Start"
Nov 12 20:44:46.560407 kubelet[2272]: I1112 20:44:46.560384 2272 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 20:44:46.560451 kubelet[2272]: I1112 20:44:46.560420 2272 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 20:44:46.573464 kubelet[2272]: W1112 20:44:46.573351 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:46.573464 kubelet[2272]: E1112 20:44:46.573446 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:46.701558 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 12 20:44:46.716636 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 12 20:44:46.720344 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 12 20:44:46.734479 kubelet[2272]: I1112 20:44:46.734434 2272 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 20:44:46.734906 kubelet[2272]: I1112 20:44:46.734835 2272 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 20:44:46.736026 kubelet[2272]: E1112 20:44:46.735993 2272 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 12 20:44:46.963891 kubelet[2272]: E1112 20:44:46.963835 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="1.6s"
Nov 12 20:44:47.005053 kubelet[2272]: W1112 20:44:47.004794 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:47.005053 kubelet[2272]: E1112 20:44:47.004885 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:47.021531 kubelet[2272]: W1112 20:44:47.021378 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:47.021531 kubelet[2272]: E1112 20:44:47.021484 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused
Nov 12 20:44:47.066594 kubelet[2272]: I1112 20:44:47.066544 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Nov 12 20:44:47.068179 kubelet[2272]: E1112 20:44:47.068138 2272 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost"
Nov 12 20:44:47.085340 kubelet[2272]: I1112 20:44:47.085268 2272 topology_manager.go:215] "Topology Admit Handler" podUID="3f7e2c442105416c2789417c906dfd41" podNamespace="kube-system" podName="kube-apiserver-localhost"
Nov 12 20:44:47.087134 kubelet[2272]: I1112 20:44:47.087066 2272 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Nov 12 20:44:47.087985 kubelet[2272]: I1112 20:44:47.087960 2272 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Nov 12 20:44:47.095346 systemd[1]: Created slice kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice - libcontainer container kubepods-burstable-pod33932df710fd78419c0859d7fa44b8e7.slice.
Nov 12 20:44:47.125588 systemd[1]: Created slice kubepods-burstable-pod3f7e2c442105416c2789417c906dfd41.slice - libcontainer container kubepods-burstable-pod3f7e2c442105416c2789417c906dfd41.slice.
Nov 12 20:44:47.141335 systemd[1]: Created slice kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice - libcontainer container kubepods-burstable-podc7145bec6839b5d7dcb0c5beff5515b4.slice.
Nov 12 20:44:47.168822 kubelet[2272]: I1112 20:44:47.168755 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f7e2c442105416c2789417c906dfd41-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f7e2c442105416c2789417c906dfd41\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:44:47.168822 kubelet[2272]: I1112 20:44:47.168825 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:44:47.169383 kubelet[2272]: I1112 20:44:47.168860 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f7e2c442105416c2789417c906dfd41-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f7e2c442105416c2789417c906dfd41\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:44:47.169383 kubelet[2272]: I1112 20:44:47.169003 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f7e2c442105416c2789417c906dfd41-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f7e2c442105416c2789417c906dfd41\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:44:47.169383 kubelet[2272]: I1112 20:44:47.169087 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:44:47.169383 kubelet[2272]: I1112 20:44:47.169183 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:44:47.169383 kubelet[2272]: I1112 20:44:47.169213 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:44:47.169603 kubelet[2272]: I1112 20:44:47.169242 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:44:47.169603 kubelet[2272]: I1112 20:44:47.169267 2272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost"
Nov 12 20:44:47.422445 kubelet[2272]: E1112 20:44:47.422379 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:44:47.423349 containerd[1470]: time="2024-11-12T20:44:47.423283475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}"
Nov 12 20:44:47.439743 kubelet[2272]: E1112 20:44:47.439684 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:44:47.440432 containerd[1470]: time="2024-11-12T20:44:47.440381870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f7e2c442105416c2789417c906dfd41,Namespace:kube-system,Attempt:0,}"
Nov 12 20:44:47.443740 kubelet[2272]: E1112 20:44:47.443695 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:44:47.444318 containerd[1470]: time="2024-11-12T20:44:47.444267163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}"
Nov 12 20:44:47.673173 kubelet[2272]: E1112 20:44:47.672970 2272 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial
tcp 10.0.0.51:6443: connect: connection refused Nov 12 20:44:47.993435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3002666508.mount: Deactivated successfully. Nov 12 20:44:48.059987 containerd[1470]: time="2024-11-12T20:44:48.059894662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:44:48.063243 containerd[1470]: time="2024-11-12T20:44:48.063152424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:44:48.064575 containerd[1470]: time="2024-11-12T20:44:48.064498179Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:44:48.065761 containerd[1470]: time="2024-11-12T20:44:48.065704802Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:44:48.066753 containerd[1470]: time="2024-11-12T20:44:48.066677822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:44:48.068046 containerd[1470]: time="2024-11-12T20:44:48.067990244Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:44:48.068761 containerd[1470]: time="2024-11-12T20:44:48.068720886Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:44:48.072212 containerd[1470]: time="2024-11-12T20:44:48.072160672Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:44:48.074979 containerd[1470]: time="2024-11-12T20:44:48.074862041Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 630.500951ms" Nov 12 20:44:48.075679 containerd[1470]: time="2024-11-12T20:44:48.075642728Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 635.157192ms" Nov 12 20:44:48.077535 containerd[1470]: time="2024-11-12T20:44:48.077224730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 653.826388ms" Nov 12 20:44:48.325075 containerd[1470]: time="2024-11-12T20:44:48.324472087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:44:48.325075 containerd[1470]: time="2024-11-12T20:44:48.324551228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:44:48.325075 containerd[1470]: time="2024-11-12T20:44:48.324581134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:44:48.325075 containerd[1470]: time="2024-11-12T20:44:48.324733552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:44:48.362437 containerd[1470]: time="2024-11-12T20:44:48.362186478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:44:48.362437 containerd[1470]: time="2024-11-12T20:44:48.362372230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:44:48.362437 containerd[1470]: time="2024-11-12T20:44:48.362399842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:44:48.362706 containerd[1470]: time="2024-11-12T20:44:48.362568391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:44:48.365176 containerd[1470]: time="2024-11-12T20:44:48.364825580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:44:48.365176 containerd[1470]: time="2024-11-12T20:44:48.364990812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:44:48.365176 containerd[1470]: time="2024-11-12T20:44:48.365039424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:44:48.365280 containerd[1470]: time="2024-11-12T20:44:48.365185921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:44:48.414991 systemd[1]: Started cri-containerd-c573e992d1cd059a7f439160a5af1ea9360109ecdf523ee831f7a898e0c8192b.scope - libcontainer container c573e992d1cd059a7f439160a5af1ea9360109ecdf523ee831f7a898e0c8192b. Nov 12 20:44:48.421859 systemd[1]: Started cri-containerd-2dd065da42c74e936fc0ba100c4c2ec845d867d5cda052884a93cb986b32062d.scope - libcontainer container 2dd065da42c74e936fc0ba100c4c2ec845d867d5cda052884a93cb986b32062d. Nov 12 20:44:48.428974 systemd[1]: Started cri-containerd-fcb339c395cac2fc04437ebba49940b7fb32a336ebb5152351f109283ffeb111.scope - libcontainer container fcb339c395cac2fc04437ebba49940b7fb32a336ebb5152351f109283ffeb111. Nov 12 20:44:48.516598 containerd[1470]: time="2024-11-12T20:44:48.516514304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c573e992d1cd059a7f439160a5af1ea9360109ecdf523ee831f7a898e0c8192b\"" Nov 12 20:44:48.518394 kubelet[2272]: E1112 20:44:48.518254 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:48.521669 containerd[1470]: time="2024-11-12T20:44:48.521632074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f7e2c442105416c2789417c906dfd41,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dd065da42c74e936fc0ba100c4c2ec845d867d5cda052884a93cb986b32062d\"" Nov 12 20:44:48.522502 containerd[1470]: time="2024-11-12T20:44:48.522459980Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcb339c395cac2fc04437ebba49940b7fb32a336ebb5152351f109283ffeb111\"" Nov 12 20:44:48.524329 kubelet[2272]: E1112 20:44:48.524291 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:48.524488 kubelet[2272]: E1112 20:44:48.524464 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:48.525742 containerd[1470]: time="2024-11-12T20:44:48.525688796Z" level=info msg="CreateContainer within sandbox \"c573e992d1cd059a7f439160a5af1ea9360109ecdf523ee831f7a898e0c8192b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:44:48.527382 containerd[1470]: time="2024-11-12T20:44:48.527338346Z" level=info msg="CreateContainer within sandbox \"2dd065da42c74e936fc0ba100c4c2ec845d867d5cda052884a93cb986b32062d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:44:48.527712 containerd[1470]: time="2024-11-12T20:44:48.527685773Z" level=info msg="CreateContainer within sandbox \"fcb339c395cac2fc04437ebba49940b7fb32a336ebb5152351f109283ffeb111\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:44:48.550341 containerd[1470]: time="2024-11-12T20:44:48.550269257Z" level=info msg="CreateContainer within sandbox \"c573e992d1cd059a7f439160a5af1ea9360109ecdf523ee831f7a898e0c8192b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cb62f56a4a6ab8060eb4314e61e97957c98695da802374b2f2fe8bfcac71941e\"" Nov 12 20:44:48.551089 containerd[1470]: time="2024-11-12T20:44:48.551045184Z" level=info msg="StartContainer for 
\"cb62f56a4a6ab8060eb4314e61e97957c98695da802374b2f2fe8bfcac71941e\"" Nov 12 20:44:48.559143 containerd[1470]: time="2024-11-12T20:44:48.559072476Z" level=info msg="CreateContainer within sandbox \"fcb339c395cac2fc04437ebba49940b7fb32a336ebb5152351f109283ffeb111\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d293c35787066487b8e56ddd380513a7e7065e8686a05f8aa4ed27fecc122057\"" Nov 12 20:44:48.559674 containerd[1470]: time="2024-11-12T20:44:48.559634589Z" level=info msg="StartContainer for \"d293c35787066487b8e56ddd380513a7e7065e8686a05f8aa4ed27fecc122057\"" Nov 12 20:44:48.560059 containerd[1470]: time="2024-11-12T20:44:48.559930950Z" level=info msg="CreateContainer within sandbox \"2dd065da42c74e936fc0ba100c4c2ec845d867d5cda052884a93cb986b32062d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"64b7093a490b4499f8fa22981f0a329757b3cfd96d5235001fc52ba99f9f007c\"" Nov 12 20:44:48.560390 containerd[1470]: time="2024-11-12T20:44:48.560371974Z" level=info msg="StartContainer for \"64b7093a490b4499f8fa22981f0a329757b3cfd96d5235001fc52ba99f9f007c\"" Nov 12 20:44:48.565369 kubelet[2272]: E1112 20:44:48.565274 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="3.2s" Nov 12 20:44:48.585697 systemd[1]: Started cri-containerd-cb62f56a4a6ab8060eb4314e61e97957c98695da802374b2f2fe8bfcac71941e.scope - libcontainer container cb62f56a4a6ab8060eb4314e61e97957c98695da802374b2f2fe8bfcac71941e. Nov 12 20:44:48.602270 systemd[1]: Started cri-containerd-d293c35787066487b8e56ddd380513a7e7065e8686a05f8aa4ed27fecc122057.scope - libcontainer container d293c35787066487b8e56ddd380513a7e7065e8686a05f8aa4ed27fecc122057. 
Nov 12 20:44:48.606334 systemd[1]: Started cri-containerd-64b7093a490b4499f8fa22981f0a329757b3cfd96d5235001fc52ba99f9f007c.scope - libcontainer container 64b7093a490b4499f8fa22981f0a329757b3cfd96d5235001fc52ba99f9f007c. Nov 12 20:44:48.646334 containerd[1470]: time="2024-11-12T20:44:48.646118790Z" level=info msg="StartContainer for \"cb62f56a4a6ab8060eb4314e61e97957c98695da802374b2f2fe8bfcac71941e\" returns successfully" Nov 12 20:44:48.667565 containerd[1470]: time="2024-11-12T20:44:48.666344975Z" level=info msg="StartContainer for \"d293c35787066487b8e56ddd380513a7e7065e8686a05f8aa4ed27fecc122057\" returns successfully" Nov 12 20:44:48.671075 containerd[1470]: time="2024-11-12T20:44:48.671029445Z" level=info msg="StartContainer for \"64b7093a490b4499f8fa22981f0a329757b3cfd96d5235001fc52ba99f9f007c\" returns successfully" Nov 12 20:44:48.671706 kubelet[2272]: I1112 20:44:48.671686 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:44:48.672184 kubelet[2272]: E1112 20:44:48.672161 2272 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Nov 12 20:44:48.767088 kubelet[2272]: W1112 20:44:48.766936 2272 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Nov 12 20:44:48.767469 kubelet[2272]: E1112 20:44:48.767445 2272 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Nov 12 20:44:49.604252 kubelet[2272]: E1112 20:44:49.603926 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:49.607472 kubelet[2272]: E1112 20:44:49.607443 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:49.615041 kubelet[2272]: E1112 20:44:49.614624 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:50.546865 kubelet[2272]: I1112 20:44:50.546462 2272 apiserver.go:52] "Watching apiserver" Nov 12 20:44:50.558050 kubelet[2272]: I1112 20:44:50.557954 2272 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:44:50.616316 kubelet[2272]: E1112 20:44:50.616275 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:50.616783 kubelet[2272]: E1112 20:44:50.616522 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:50.616783 kubelet[2272]: E1112 20:44:50.616602 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:50.816690 kubelet[2272]: E1112 20:44:50.816521 2272 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:44:51.206821 kubelet[2272]: E1112 20:44:51.206772 2272 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes 
"localhost" not found Nov 12 20:44:51.616926 kubelet[2272]: E1112 20:44:51.616877 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:51.617395 kubelet[2272]: E1112 20:44:51.616943 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:44:51.690282 kubelet[2272]: E1112 20:44:51.690233 2272 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:44:51.874136 kubelet[2272]: I1112 20:44:51.874002 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:44:51.944492 kubelet[2272]: E1112 20:44:51.944323 2272 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 20:44:51.951982 kubelet[2272]: I1112 20:44:51.951946 2272 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 20:44:54.309307 systemd[1]: Reloading requested from client PID 2550 ('systemctl') (unit session-9.scope)... Nov 12 20:44:54.309321 systemd[1]: Reloading... Nov 12 20:44:54.386145 zram_generator::config[2589]: No configuration found. Nov 12 20:44:54.509387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:44:54.606862 systemd[1]: Reloading finished in 297 ms. Nov 12 20:44:54.653926 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:44:54.677770 systemd[1]: kubelet.service: Deactivated successfully. 
Nov 12 20:44:54.678127 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:44:54.678203 systemd[1]: kubelet.service: Consumed 1.203s CPU time, 115.1M memory peak, 0B memory swap peak. Nov 12 20:44:54.688425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:44:54.841411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:44:54.847295 (kubelet)[2634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:44:54.921331 kubelet[2634]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:44:54.921331 kubelet[2634]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:44:54.921331 kubelet[2634]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:44:54.921704 kubelet[2634]: I1112 20:44:54.921306 2634 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:44:54.927398 kubelet[2634]: I1112 20:44:54.927361 2634 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:44:54.927398 kubelet[2634]: I1112 20:44:54.927385 2634 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:44:54.927583 kubelet[2634]: I1112 20:44:54.927562 2634 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:44:54.928923 kubelet[2634]: I1112 20:44:54.928901 2634 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:44:54.930730 kubelet[2634]: I1112 20:44:54.930698 2634 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:44:54.938560 kubelet[2634]: I1112 20:44:54.938524 2634 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:44:54.938765 kubelet[2634]: I1112 20:44:54.938747 2634 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:44:54.938928 kubelet[2634]: I1112 20:44:54.938896 2634 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:44:54.938928 kubelet[2634]: I1112 20:44:54.938928 2634 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:44:54.939092 kubelet[2634]: I1112 20:44:54.938945 2634 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:44:54.939092 kubelet[2634]: I1112 
20:44:54.938984 2634 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:44:54.939092 kubelet[2634]: I1112 20:44:54.939083 2634 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:44:54.939233 kubelet[2634]: I1112 20:44:54.939110 2634 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:44:54.939233 kubelet[2634]: I1112 20:44:54.939181 2634 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:44:54.939233 kubelet[2634]: I1112 20:44:54.939204 2634 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:44:54.939773 kubelet[2634]: I1112 20:44:54.939754 2634 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:44:54.940573 kubelet[2634]: I1112 20:44:54.939928 2634 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:44:54.940573 kubelet[2634]: I1112 20:44:54.940319 2634 server.go:1256] "Started kubelet" Nov 12 20:44:54.940573 kubelet[2634]: I1112 20:44:54.940505 2634 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:44:54.941196 kubelet[2634]: I1112 20:44:54.940708 2634 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:44:54.941509 kubelet[2634]: I1112 20:44:54.941489 2634 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:44:54.945130 kubelet[2634]: I1112 20:44:54.942491 2634 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:44:54.945130 kubelet[2634]: I1112 20:44:54.943971 2634 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:44:54.950817 kubelet[2634]: I1112 20:44:54.950766 2634 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:44:54.953166 kubelet[2634]: I1112 20:44:54.953135 2634 desired_state_of_world_populator.go:151] "Desired 
state populator starts to run" Nov 12 20:44:54.953344 kubelet[2634]: I1112 20:44:54.953325 2634 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:44:54.954268 kubelet[2634]: I1112 20:44:54.954222 2634 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:44:54.954507 kubelet[2634]: I1112 20:44:54.954484 2634 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:44:54.956063 kubelet[2634]: E1112 20:44:54.956040 2634 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:44:54.956221 kubelet[2634]: I1112 20:44:54.956192 2634 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:44:54.959281 kubelet[2634]: I1112 20:44:54.959262 2634 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:44:54.960416 kubelet[2634]: I1112 20:44:54.960392 2634 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:44:54.960416 kubelet[2634]: I1112 20:44:54.960415 2634 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:44:54.960508 kubelet[2634]: I1112 20:44:54.960432 2634 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:44:54.960508 kubelet[2634]: E1112 20:44:54.960482 2634 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:44:54.992405 kubelet[2634]: I1112 20:44:54.992371 2634 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:44:54.992405 kubelet[2634]: I1112 20:44:54.992399 2634 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:44:54.992568 kubelet[2634]: I1112 20:44:54.992420 2634 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:44:54.992595 kubelet[2634]: I1112 20:44:54.992576 2634 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:44:54.992615 kubelet[2634]: I1112 20:44:54.992594 2634 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:44:54.992615 kubelet[2634]: I1112 20:44:54.992600 2634 policy_none.go:49] "None policy: Start" Nov 12 20:44:54.993419 kubelet[2634]: I1112 20:44:54.993185 2634 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:44:54.993419 kubelet[2634]: I1112 20:44:54.993209 2634 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:44:54.993419 kubelet[2634]: I1112 20:44:54.993340 2634 state_mem.go:75] "Updated machine memory state" Nov 12 20:44:54.997337 kubelet[2634]: I1112 20:44:54.997304 2634 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:44:54.997621 kubelet[2634]: I1112 20:44:54.997603 2634 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:44:55.055946 kubelet[2634]: I1112 20:44:55.055900 2634 kubelet_node_status.go:73] "Attempting to register 
node" node="localhost"
Nov 12 20:44:55.061522 kubelet[2634]: I1112 20:44:55.061497 2634 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Nov 12 20:44:55.061604 kubelet[2634]: I1112 20:44:55.061595 2634 topology_manager.go:215] "Topology Admit Handler" podUID="3f7e2c442105416c2789417c906dfd41" podNamespace="kube-system" podName="kube-apiserver-localhost"
Nov 12 20:44:55.061666 kubelet[2634]: I1112 20:44:55.061640 2634 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Nov 12 20:44:55.127078 kubelet[2634]: I1112 20:44:55.127034 2634 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Nov 12 20:44:55.127231 kubelet[2634]: I1112 20:44:55.127154 2634 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Nov 12 20:44:55.154324 kubelet[2634]: I1112 20:44:55.154290 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:44:55.154324 kubelet[2634]: I1112 20:44:55.154325 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:44:55.154455 kubelet[2634]: I1112 20:44:55.154345 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:44:55.154455 kubelet[2634]: I1112 20:44:55.154389 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost"
Nov 12 20:44:55.154455 kubelet[2634]: I1112 20:44:55.154454 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f7e2c442105416c2789417c906dfd41-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f7e2c442105416c2789417c906dfd41\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:44:55.154556 kubelet[2634]: I1112 20:44:55.154491 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:44:55.154556 kubelet[2634]: I1112 20:44:55.154516 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 20:44:55.154556 kubelet[2634]: I1112 20:44:55.154539 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f7e2c442105416c2789417c906dfd41-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f7e2c442105416c2789417c906dfd41\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:44:55.154633 kubelet[2634]: I1112 20:44:55.154583 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f7e2c442105416c2789417c906dfd41-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f7e2c442105416c2789417c906dfd41\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 20:44:55.170232 sudo[2671]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Nov 12 20:44:55.170584 sudo[2671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Nov 12 20:44:55.412911 kubelet[2634]: E1112 20:44:55.412847 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:44:55.427941 kubelet[2634]: E1112 20:44:55.427892 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:44:55.428528 kubelet[2634]: E1112 20:44:55.428492 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:44:55.648734 sudo[2671]: pam_unix(sudo:session): session closed for user root
Nov 12 20:44:55.940578 kubelet[2634]: I1112 20:44:55.940512 2634 apiserver.go:52] "Watching apiserver"
Nov 12 20:44:55.953503 kubelet[2634]: I1112 20:44:55.953444 2634 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Nov 12 20:44:55.974600 kubelet[2634]: E1112 20:44:55.974556 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:44:55.980564 kubelet[2634]: E1112 20:44:55.980274 2634 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 12 20:44:55.980748 kubelet[2634]: E1112 20:44:55.980677 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:44:55.982267 kubelet[2634]: E1112 20:44:55.981541 2634 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 12 20:44:55.982267 kubelet[2634]: E1112 20:44:55.982163 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:44:55.994757 kubelet[2634]: I1112 20:44:55.994711 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.994656784 podStartE2EDuration="994.656784ms" podCreationTimestamp="2024-11-12 20:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:44:55.993865 +0000 UTC m=+1.126146730" watchObservedRunningTime="2024-11-12 20:44:55.994656784 +0000 UTC m=+1.126938514"
Nov 12 20:44:56.001558 kubelet[2634]: I1112 20:44:56.001510 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.001450297 podStartE2EDuration="1.001450297s" podCreationTimestamp="2024-11-12 20:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:44:56.001371247 +0000 UTC m=+1.133652978" watchObservedRunningTime="2024-11-12 20:44:56.001450297 +0000 UTC m=+1.133732027"
Nov 12 20:44:56.017464 kubelet[2634]: I1112 20:44:56.017363 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.017314899 podStartE2EDuration="1.017314899s" podCreationTimestamp="2024-11-12 20:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:44:56.008802549 +0000 UTC m=+1.141084279" watchObservedRunningTime="2024-11-12 20:44:56.017314899 +0000 UTC m=+1.149596629"
Nov 12 20:44:56.976908 kubelet[2634]: E1112 20:44:56.976859 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:44:56.977688 kubelet[2634]: E1112 20:44:56.977663 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:44:57.064638 sudo[1661]: pam_unix(sudo:session): session closed for user root
Nov 12 20:44:57.066793 sshd[1658]: pam_unix(sshd:session): session closed for user core
Nov 12 20:44:57.071049 systemd[1]: sshd@8-10.0.0.51:22-10.0.0.1:54254.service: Deactivated successfully.
Nov 12 20:44:57.073250 systemd[1]: session-9.scope: Deactivated successfully.
Nov 12 20:44:57.073486 systemd[1]: session-9.scope: Consumed 4.923s CPU time, 193.1M memory peak, 0B memory swap peak.
Nov 12 20:44:57.073982 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit.
Nov 12 20:44:57.074953 systemd-logind[1455]: Removed session 9.
Nov 12 20:45:02.296562 kubelet[2634]: E1112 20:45:02.296518 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:02.579810 kubelet[2634]: E1112 20:45:02.579679 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:02.985783 kubelet[2634]: E1112 20:45:02.985727 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:02.986044 kubelet[2634]: E1112 20:45:02.986012 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:05.079898 kubelet[2634]: E1112 20:45:05.079870 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:05.990375 kubelet[2634]: E1112 20:45:05.990338 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:06.733853 kubelet[2634]: I1112 20:45:06.733813 2634 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 12 20:45:06.736056 containerd[1470]: time="2024-11-12T20:45:06.736013728Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 12 20:45:06.736395 kubelet[2634]: I1112 20:45:06.736272 2634 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 12 20:45:07.221402 kubelet[2634]: I1112 20:45:07.220938 2634 topology_manager.go:215] "Topology Admit Handler" podUID="c30b68b0-0509-41b1-a67f-26fb57fae64b" podNamespace="kube-system" podName="kube-proxy-hlmtf"
Nov 12 20:45:07.227272 kubelet[2634]: I1112 20:45:07.227207 2634 topology_manager.go:215] "Topology Admit Handler" podUID="e580adf9-b9b8-4e11-b510-31158322de7d" podNamespace="kube-system" podName="cilium-tpjnm"
Nov 12 20:45:07.229463 kubelet[2634]: W1112 20:45:07.229420 2634 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Nov 12 20:45:07.229463 kubelet[2634]: E1112 20:45:07.229462 2634 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Nov 12 20:45:07.229651 kubelet[2634]: W1112 20:45:07.229505 2634 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Nov 12 20:45:07.229651 kubelet[2634]: E1112 20:45:07.229532 2634 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Nov 12 20:45:07.229727 kubelet[2634]: W1112 20:45:07.229703 2634 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Nov 12 20:45:07.229727 kubelet[2634]: E1112 20:45:07.229721 2634 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Nov 12 20:45:07.233317 systemd[1]: Created slice kubepods-besteffort-podc30b68b0_0509_41b1_a67f_26fb57fae64b.slice - libcontainer container kubepods-besteffort-podc30b68b0_0509_41b1_a67f_26fb57fae64b.slice.
Nov 12 20:45:07.246334 systemd[1]: Created slice kubepods-burstable-pode580adf9_b9b8_4e11_b510_31158322de7d.slice - libcontainer container kubepods-burstable-pode580adf9_b9b8_4e11_b510_31158322de7d.slice.
Nov 12 20:45:07.321248 kubelet[2634]: I1112 20:45:07.321182 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-bpf-maps\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.321248 kubelet[2634]: I1112 20:45:07.321259 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-etc-cni-netd\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.321472 kubelet[2634]: I1112 20:45:07.321295 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e580adf9-b9b8-4e11-b510-31158322de7d-hubble-tls\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.321472 kubelet[2634]: I1112 20:45:07.321322 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-host-proc-sys-kernel\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.321472 kubelet[2634]: I1112 20:45:07.321348 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-cilium-run\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.321472 kubelet[2634]: I1112 20:45:07.321374 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-cilium-cgroup\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.321472 kubelet[2634]: I1112 20:45:07.321400 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-host-proc-sys-net\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.321472 kubelet[2634]: I1112 20:45:07.321431 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c30b68b0-0509-41b1-a67f-26fb57fae64b-xtables-lock\") pod \"kube-proxy-hlmtf\" (UID: \"c30b68b0-0509-41b1-a67f-26fb57fae64b\") " pod="kube-system/kube-proxy-hlmtf"
Nov 12 20:45:07.321616 kubelet[2634]: I1112 20:45:07.321455 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-lib-modules\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.321616 kubelet[2634]: I1112 20:45:07.321479 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-xtables-lock\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.321616 kubelet[2634]: I1112 20:45:07.321507 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e580adf9-b9b8-4e11-b510-31158322de7d-cilium-config-path\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.321616 kubelet[2634]: I1112 20:45:07.321532 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-hostproc\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.321616 kubelet[2634]: I1112 20:45:07.321570 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-cni-path\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.321616 kubelet[2634]: I1112 20:45:07.321597 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p76kt\" (UniqueName: \"kubernetes.io/projected/c30b68b0-0509-41b1-a67f-26fb57fae64b-kube-api-access-p76kt\") pod \"kube-proxy-hlmtf\" (UID: \"c30b68b0-0509-41b1-a67f-26fb57fae64b\") " pod="kube-system/kube-proxy-hlmtf"
Nov 12 20:45:07.321742 kubelet[2634]: I1112 20:45:07.321633 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e580adf9-b9b8-4e11-b510-31158322de7d-clustermesh-secrets\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.321742 kubelet[2634]: I1112 20:45:07.321667 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c30b68b0-0509-41b1-a67f-26fb57fae64b-kube-proxy\") pod \"kube-proxy-hlmtf\" (UID: \"c30b68b0-0509-41b1-a67f-26fb57fae64b\") " pod="kube-system/kube-proxy-hlmtf"
Nov 12 20:45:07.321742 kubelet[2634]: I1112 20:45:07.321691 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c30b68b0-0509-41b1-a67f-26fb57fae64b-lib-modules\") pod \"kube-proxy-hlmtf\" (UID: \"c30b68b0-0509-41b1-a67f-26fb57fae64b\") " pod="kube-system/kube-proxy-hlmtf"
Nov 12 20:45:07.321742 kubelet[2634]: I1112 20:45:07.321719 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2psxj\" (UniqueName: \"kubernetes.io/projected/e580adf9-b9b8-4e11-b510-31158322de7d-kube-api-access-2psxj\") pod \"cilium-tpjnm\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " pod="kube-system/cilium-tpjnm"
Nov 12 20:45:07.428736 kubelet[2634]: E1112 20:45:07.428688 2634 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Nov 12 20:45:07.428736 kubelet[2634]: E1112 20:45:07.428691 2634 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Nov 12 20:45:07.428736 kubelet[2634]: E1112 20:45:07.428732 2634 projected.go:200] Error preparing data for projected volume kube-api-access-p76kt for pod kube-system/kube-proxy-hlmtf: configmap "kube-root-ca.crt" not found
Nov 12 20:45:07.428736 kubelet[2634]: E1112 20:45:07.428746 2634 projected.go:200] Error preparing data for projected volume kube-api-access-2psxj for pod kube-system/cilium-tpjnm: configmap "kube-root-ca.crt" not found
Nov 12 20:45:07.429028 kubelet[2634]: E1112 20:45:07.428813 2634 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c30b68b0-0509-41b1-a67f-26fb57fae64b-kube-api-access-p76kt podName:c30b68b0-0509-41b1-a67f-26fb57fae64b nodeName:}" failed. No retries permitted until 2024-11-12 20:45:07.928786277 +0000 UTC m=+13.061068007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p76kt" (UniqueName: "kubernetes.io/projected/c30b68b0-0509-41b1-a67f-26fb57fae64b-kube-api-access-p76kt") pod "kube-proxy-hlmtf" (UID: "c30b68b0-0509-41b1-a67f-26fb57fae64b") : configmap "kube-root-ca.crt" not found
Nov 12 20:45:07.429028 kubelet[2634]: E1112 20:45:07.428831 2634 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e580adf9-b9b8-4e11-b510-31158322de7d-kube-api-access-2psxj podName:e580adf9-b9b8-4e11-b510-31158322de7d nodeName:}" failed. No retries permitted until 2024-11-12 20:45:07.928823608 +0000 UTC m=+13.061105338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2psxj" (UniqueName: "kubernetes.io/projected/e580adf9-b9b8-4e11-b510-31158322de7d-kube-api-access-2psxj") pod "cilium-tpjnm" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d") : configmap "kube-root-ca.crt" not found
Nov 12 20:45:07.786639 kubelet[2634]: I1112 20:45:07.786582 2634 topology_manager.go:215] "Topology Admit Handler" podUID="c8e5e584-3015-4571-84ad-53b55274225d" podNamespace="kube-system" podName="cilium-operator-5cc964979-gk87p"
Nov 12 20:45:07.797947 systemd[1]: Created slice kubepods-besteffort-podc8e5e584_3015_4571_84ad_53b55274225d.slice - libcontainer container kubepods-besteffort-podc8e5e584_3015_4571_84ad_53b55274225d.slice.
Nov 12 20:45:07.826122 kubelet[2634]: I1112 20:45:07.826049 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-852tm\" (UniqueName: \"kubernetes.io/projected/c8e5e584-3015-4571-84ad-53b55274225d-kube-api-access-852tm\") pod \"cilium-operator-5cc964979-gk87p\" (UID: \"c8e5e584-3015-4571-84ad-53b55274225d\") " pod="kube-system/cilium-operator-5cc964979-gk87p"
Nov 12 20:45:07.826296 kubelet[2634]: I1112 20:45:07.826138 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8e5e584-3015-4571-84ad-53b55274225d-cilium-config-path\") pod \"cilium-operator-5cc964979-gk87p\" (UID: \"c8e5e584-3015-4571-84ad-53b55274225d\") " pod="kube-system/cilium-operator-5cc964979-gk87p"
Nov 12 20:45:08.143352 kubelet[2634]: E1112 20:45:08.143306 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:08.144054 containerd[1470]: time="2024-11-12T20:45:08.144013622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hlmtf,Uid:c30b68b0-0509-41b1-a67f-26fb57fae64b,Namespace:kube-system,Attempt:0,}"
Nov 12 20:45:08.193873 containerd[1470]: time="2024-11-12T20:45:08.193706132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:45:08.193873 containerd[1470]: time="2024-11-12T20:45:08.193793867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:45:08.193873 containerd[1470]: time="2024-11-12T20:45:08.193811029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:08.194090 containerd[1470]: time="2024-11-12T20:45:08.193952606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:08.220300 systemd[1]: Started cri-containerd-651a40297d64d963b9f6babe425ed92ac06a56d95d764fffdd2ed97ac00876ea.scope - libcontainer container 651a40297d64d963b9f6babe425ed92ac06a56d95d764fffdd2ed97ac00876ea.
Nov 12 20:45:08.245158 containerd[1470]: time="2024-11-12T20:45:08.245083723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hlmtf,Uid:c30b68b0-0509-41b1-a67f-26fb57fae64b,Namespace:kube-system,Attempt:0,} returns sandbox id \"651a40297d64d963b9f6babe425ed92ac06a56d95d764fffdd2ed97ac00876ea\""
Nov 12 20:45:08.246561 kubelet[2634]: E1112 20:45:08.246403 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:08.249963 containerd[1470]: time="2024-11-12T20:45:08.249893749Z" level=info msg="CreateContainer within sandbox \"651a40297d64d963b9f6babe425ed92ac06a56d95d764fffdd2ed97ac00876ea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 12 20:45:08.271645 containerd[1470]: time="2024-11-12T20:45:08.271571290Z" level=info msg="CreateContainer within sandbox \"651a40297d64d963b9f6babe425ed92ac06a56d95d764fffdd2ed97ac00876ea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"35bfcf9973520677cd0d34698f991f83ff126e65f697cd199ccf24b07b70d82b\""
Nov 12 20:45:08.272471 containerd[1470]: time="2024-11-12T20:45:08.272421039Z" level=info msg="StartContainer for \"35bfcf9973520677cd0d34698f991f83ff126e65f697cd199ccf24b07b70d82b\""
Nov 12 20:45:08.312478 systemd[1]: Started cri-containerd-35bfcf9973520677cd0d34698f991f83ff126e65f697cd199ccf24b07b70d82b.scope - libcontainer container 35bfcf9973520677cd0d34698f991f83ff126e65f697cd199ccf24b07b70d82b.
Nov 12 20:45:08.346796 containerd[1470]: time="2024-11-12T20:45:08.346729437Z" level=info msg="StartContainer for \"35bfcf9973520677cd0d34698f991f83ff126e65f697cd199ccf24b07b70d82b\" returns successfully"
Nov 12 20:45:08.401980 kubelet[2634]: E1112 20:45:08.401743 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:08.402891 containerd[1470]: time="2024-11-12T20:45:08.402791066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gk87p,Uid:c8e5e584-3015-4571-84ad-53b55274225d,Namespace:kube-system,Attempt:0,}"
Nov 12 20:45:08.422706 kubelet[2634]: E1112 20:45:08.422642 2634 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Nov 12 20:45:08.422706 kubelet[2634]: E1112 20:45:08.422682 2634 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-tpjnm: failed to sync secret cache: timed out waiting for the condition
Nov 12 20:45:08.422907 kubelet[2634]: E1112 20:45:08.422777 2634 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e580adf9-b9b8-4e11-b510-31158322de7d-hubble-tls podName:e580adf9-b9b8-4e11-b510-31158322de7d nodeName:}" failed. No retries permitted until 2024-11-12 20:45:08.922743232 +0000 UTC m=+14.055024962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/e580adf9-b9b8-4e11-b510-31158322de7d-hubble-tls") pod "cilium-tpjnm" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d") : failed to sync secret cache: timed out waiting for the condition
Nov 12 20:45:08.440123 containerd[1470]: time="2024-11-12T20:45:08.438013373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:45:08.440123 containerd[1470]: time="2024-11-12T20:45:08.438164897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:45:08.440123 containerd[1470]: time="2024-11-12T20:45:08.438188441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:08.440628 containerd[1470]: time="2024-11-12T20:45:08.440571414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:08.470475 systemd[1]: Started cri-containerd-93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9.scope - libcontainer container 93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9.
Nov 12 20:45:08.516318 containerd[1470]: time="2024-11-12T20:45:08.516239599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gk87p,Uid:c8e5e584-3015-4571-84ad-53b55274225d,Namespace:kube-system,Attempt:0,} returns sandbox id \"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\""
Nov 12 20:45:08.517325 kubelet[2634]: E1112 20:45:08.517294 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:08.521416 containerd[1470]: time="2024-11-12T20:45:08.521361483Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Nov 12 20:45:08.997957 kubelet[2634]: E1112 20:45:08.997914 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:09.049672 kubelet[2634]: E1112 20:45:09.049617 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:09.050507 containerd[1470]: time="2024-11-12T20:45:09.050412473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tpjnm,Uid:e580adf9-b9b8-4e11-b510-31158322de7d,Namespace:kube-system,Attempt:0,}"
Nov 12 20:45:09.081021 containerd[1470]: time="2024-11-12T20:45:09.080830175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:45:09.081021 containerd[1470]: time="2024-11-12T20:45:09.080930063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:45:09.081021 containerd[1470]: time="2024-11-12T20:45:09.080947455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:09.081241 containerd[1470]: time="2024-11-12T20:45:09.081131171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:09.103256 systemd[1]: Started cri-containerd-c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6.scope - libcontainer container c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6.
Nov 12 20:45:09.125880 containerd[1470]: time="2024-11-12T20:45:09.125816023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tpjnm,Uid:e580adf9-b9b8-4e11-b510-31158322de7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\"" Nov 12 20:45:09.126732 kubelet[2634]: E1112 20:45:09.126697 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:11.953755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073596017.mount: Deactivated successfully. Nov 12 20:45:12.415760 containerd[1470]: time="2024-11-12T20:45:12.415706424Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:12.416505 containerd[1470]: time="2024-11-12T20:45:12.416449941Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907217" Nov 12 20:45:12.417618 containerd[1470]: time="2024-11-12T20:45:12.417581790Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:45:12.418843 containerd[1470]: time="2024-11-12T20:45:12.418810269Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.897393653s" Nov 12 20:45:12.418843 
containerd[1470]: time="2024-11-12T20:45:12.418842850Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 12 20:45:12.422089 containerd[1470]: time="2024-11-12T20:45:12.422051101Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 12 20:45:12.423555 containerd[1470]: time="2024-11-12T20:45:12.423513700Z" level=info msg="CreateContainer within sandbox \"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 12 20:45:12.439794 containerd[1470]: time="2024-11-12T20:45:12.439745314Z" level=info msg="CreateContainer within sandbox \"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\"" Nov 12 20:45:12.440671 containerd[1470]: time="2024-11-12T20:45:12.440327909Z" level=info msg="StartContainer for \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\"" Nov 12 20:45:12.479344 systemd[1]: Started cri-containerd-5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca.scope - libcontainer container 5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca. 
Nov 12 20:45:12.521509 containerd[1470]: time="2024-11-12T20:45:12.521437796Z" level=info msg="StartContainer for \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\" returns successfully"
Nov 12 20:45:13.018999 kubelet[2634]: E1112 20:45:13.016958 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:13.281978 kubelet[2634]: I1112 20:45:13.281810 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hlmtf" podStartSLOduration=6.281750056 podStartE2EDuration="6.281750056s" podCreationTimestamp="2024-11-12 20:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:45:09.014687491 +0000 UTC m=+14.146969221" watchObservedRunningTime="2024-11-12 20:45:13.281750056 +0000 UTC m=+18.414031786"
Nov 12 20:45:14.009182 kubelet[2634]: E1112 20:45:14.009132 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:22.784511 systemd[1]: Started sshd@9-10.0.0.51:22-10.0.0.1:52758.service - OpenSSH per-connection server daemon (10.0.0.1:52758).
Nov 12 20:45:22.820947 sshd[3059]: Accepted publickey for core from 10.0.0.1 port 52758 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:45:22.822999 sshd[3059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:45:22.827873 systemd-logind[1455]: New session 10 of user core.
Nov 12 20:45:22.841246 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 12 20:45:22.990986 sshd[3059]: pam_unix(sshd:session): session closed for user core
Nov 12 20:45:22.995546 systemd[1]: sshd@9-10.0.0.51:22-10.0.0.1:52758.service: Deactivated successfully.
Nov 12 20:45:22.997628 systemd[1]: session-10.scope: Deactivated successfully.
Nov 12 20:45:22.998365 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit.
Nov 12 20:45:22.999458 systemd-logind[1455]: Removed session 10.
Nov 12 20:45:26.763065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4023388565.mount: Deactivated successfully.
Nov 12 20:45:28.009429 systemd[1]: Started sshd@10-10.0.0.51:22-10.0.0.1:55848.service - OpenSSH per-connection server daemon (10.0.0.1:55848).
Nov 12 20:45:28.122521 sshd[3078]: Accepted publickey for core from 10.0.0.1 port 55848 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:45:28.124506 sshd[3078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:45:28.129470 systemd-logind[1455]: New session 11 of user core.
Nov 12 20:45:28.139490 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 12 20:45:28.616234 sshd[3078]: pam_unix(sshd:session): session closed for user core
Nov 12 20:45:28.619837 systemd[1]: sshd@10-10.0.0.51:22-10.0.0.1:55848.service: Deactivated successfully.
Nov 12 20:45:28.621929 systemd[1]: session-11.scope: Deactivated successfully.
Nov 12 20:45:28.622636 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit.
Nov 12 20:45:28.623775 systemd-logind[1455]: Removed session 11.
Nov 12 20:45:33.633709 systemd[1]: Started sshd@11-10.0.0.51:22-10.0.0.1:55860.service - OpenSSH per-connection server daemon (10.0.0.1:55860).
Nov 12 20:45:34.376543 sshd[3114]: Accepted publickey for core from 10.0.0.1 port 55860 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:45:34.378217 sshd[3114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:45:34.382484 systemd-logind[1455]: New session 12 of user core.
Nov 12 20:45:34.392253 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 12 20:45:34.641064 sshd[3114]: pam_unix(sshd:session): session closed for user core
Nov 12 20:45:34.646151 systemd[1]: sshd@11-10.0.0.51:22-10.0.0.1:55860.service: Deactivated successfully.
Nov 12 20:45:34.648239 systemd[1]: session-12.scope: Deactivated successfully.
Nov 12 20:45:34.649015 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit.
Nov 12 20:45:34.650346 systemd-logind[1455]: Removed session 12.
Nov 12 20:45:35.734475 containerd[1470]: time="2024-11-12T20:45:35.734315297Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:45:35.789881 containerd[1470]: time="2024-11-12T20:45:35.789732752Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735303"
Nov 12 20:45:35.847497 containerd[1470]: time="2024-11-12T20:45:35.847317328Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:45:35.849353 containerd[1470]: time="2024-11-12T20:45:35.849301404Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 23.427210207s"
Nov 12 20:45:35.849414 containerd[1470]: time="2024-11-12T20:45:35.849353083Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Nov 12 20:45:35.851450 containerd[1470]: time="2024-11-12T20:45:35.851418537Z" level=info msg="CreateContainer within sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 12 20:45:36.771490 containerd[1470]: time="2024-11-12T20:45:36.771410941Z" level=info msg="CreateContainer within sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668\""
Nov 12 20:45:36.772202 containerd[1470]: time="2024-11-12T20:45:36.772156097Z" level=info msg="StartContainer for \"d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668\""
Nov 12 20:45:36.801330 systemd[1]: Started cri-containerd-d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668.scope - libcontainer container d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668.
Nov 12 20:45:36.868285 systemd[1]: cri-containerd-d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668.scope: Deactivated successfully.
Nov 12 20:45:37.074436 containerd[1470]: time="2024-11-12T20:45:37.074265198Z" level=info msg="StartContainer for \"d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668\" returns successfully"
Nov 12 20:45:37.257127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668-rootfs.mount: Deactivated successfully.
Nov 12 20:45:38.079120 kubelet[2634]: E1112 20:45:38.079063 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:38.291795 kubelet[2634]: I1112 20:45:38.291717 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-gk87p" podStartSLOduration=27.387859548 podStartE2EDuration="31.291659186s" podCreationTimestamp="2024-11-12 20:45:07 +0000 UTC" firstStartedPulling="2024-11-12 20:45:08.517991454 +0000 UTC m=+13.650273184" lastFinishedPulling="2024-11-12 20:45:12.421791092 +0000 UTC m=+17.554072822" observedRunningTime="2024-11-12 20:45:13.284206143 +0000 UTC m=+18.416487893" watchObservedRunningTime="2024-11-12 20:45:38.291659186 +0000 UTC m=+43.423940916"
Nov 12 20:45:38.864270 containerd[1470]: time="2024-11-12T20:45:38.861906552Z" level=info msg="shim disconnected" id=d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668 namespace=k8s.io
Nov 12 20:45:38.864270 containerd[1470]: time="2024-11-12T20:45:38.864257606Z" level=warning msg="cleaning up after shim disconnected" id=d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668 namespace=k8s.io
Nov 12 20:45:38.864270 containerd[1470]: time="2024-11-12T20:45:38.864268847Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:45:39.312557 kubelet[2634]: E1112 20:45:39.312326 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:39.314892 containerd[1470]: time="2024-11-12T20:45:39.314855932Z" level=info msg="CreateContainer within sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 12 20:45:39.654754 systemd[1]: Started sshd@12-10.0.0.51:22-10.0.0.1:56546.service - OpenSSH per-connection server daemon (10.0.0.1:56546).
Nov 12 20:45:39.676270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3513114686.mount: Deactivated successfully.
Nov 12 20:45:39.681974 containerd[1470]: time="2024-11-12T20:45:39.681906458Z" level=info msg="CreateContainer within sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540\""
Nov 12 20:45:39.682716 containerd[1470]: time="2024-11-12T20:45:39.682676299Z" level=info msg="StartContainer for \"33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540\""
Nov 12 20:45:39.699379 sshd[3198]: Accepted publickey for core from 10.0.0.1 port 56546 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:45:39.701632 sshd[3198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:45:39.712473 systemd-logind[1455]: New session 13 of user core.
Nov 12 20:45:39.725546 systemd[1]: Started cri-containerd-33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540.scope - libcontainer container 33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540.
Nov 12 20:45:39.727172 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 12 20:45:39.889122 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:45:39.889707 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:45:39.889773 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:45:39.899696 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:45:39.900005 systemd[1]: cri-containerd-33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540.scope: Deactivated successfully.
Nov 12 20:45:39.917576 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:45:40.147720 sshd[3198]: pam_unix(sshd:session): session closed for user core
Nov 12 20:45:40.152269 systemd[1]: sshd@12-10.0.0.51:22-10.0.0.1:56546.service: Deactivated successfully.
Nov 12 20:45:40.154466 containerd[1470]: time="2024-11-12T20:45:40.153964534Z" level=info msg="StartContainer for \"33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540\" returns successfully"
Nov 12 20:45:40.154955 systemd[1]: session-13.scope: Deactivated successfully.
Nov 12 20:45:40.156018 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit.
Nov 12 20:45:40.157355 systemd-logind[1455]: Removed session 13.
Nov 12 20:45:40.174236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540-rootfs.mount: Deactivated successfully.
Nov 12 20:45:40.317086 kubelet[2634]: E1112 20:45:40.317044 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:40.627765 containerd[1470]: time="2024-11-12T20:45:40.627675223Z" level=info msg="shim disconnected" id=33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540 namespace=k8s.io
Nov 12 20:45:40.627765 containerd[1470]: time="2024-11-12T20:45:40.627758623Z" level=warning msg="cleaning up after shim disconnected" id=33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540 namespace=k8s.io
Nov 12 20:45:40.627765 containerd[1470]: time="2024-11-12T20:45:40.627772049Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:45:41.320023 kubelet[2634]: E1112 20:45:41.319991 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:41.322422 containerd[1470]: time="2024-11-12T20:45:41.322379648Z" level=info msg="CreateContainer within sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 12 20:45:42.069227 containerd[1470]: time="2024-11-12T20:45:42.069152853Z" level=info msg="CreateContainer within sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e\""
Nov 12 20:45:42.070033 containerd[1470]: time="2024-11-12T20:45:42.069985802Z" level=info msg="StartContainer for \"1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e\""
Nov 12 20:45:42.103286 systemd[1]: Started cri-containerd-1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e.scope - libcontainer container 1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e.
Nov 12 20:45:42.138636 systemd[1]: cri-containerd-1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e.scope: Deactivated successfully.
Nov 12 20:45:42.237281 containerd[1470]: time="2024-11-12T20:45:42.237212244Z" level=info msg="StartContainer for \"1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e\" returns successfully"
Nov 12 20:45:42.258762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e-rootfs.mount: Deactivated successfully.
Nov 12 20:45:42.326395 kubelet[2634]: E1112 20:45:42.326255 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:42.493952 containerd[1470]: time="2024-11-12T20:45:42.493862571Z" level=info msg="shim disconnected" id=1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e namespace=k8s.io
Nov 12 20:45:42.493952 containerd[1470]: time="2024-11-12T20:45:42.493944519Z" level=warning msg="cleaning up after shim disconnected" id=1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e namespace=k8s.io
Nov 12 20:45:42.493952 containerd[1470]: time="2024-11-12T20:45:42.493955610Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:45:43.329718 kubelet[2634]: E1112 20:45:43.329673 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:43.332164 containerd[1470]: time="2024-11-12T20:45:43.331430984Z" level=info msg="CreateContainer within sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 12 20:45:43.663364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3202500239.mount: Deactivated successfully.
Nov 12 20:45:43.677774 containerd[1470]: time="2024-11-12T20:45:43.677723690Z" level=info msg="CreateContainer within sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165\""
Nov 12 20:45:43.678616 containerd[1470]: time="2024-11-12T20:45:43.678586605Z" level=info msg="StartContainer for \"6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165\""
Nov 12 20:45:43.708271 systemd[1]: Started cri-containerd-6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165.scope - libcontainer container 6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165.
Nov 12 20:45:43.735546 systemd[1]: cri-containerd-6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165.scope: Deactivated successfully.
Nov 12 20:45:43.739019 containerd[1470]: time="2024-11-12T20:45:43.738958827Z" level=info msg="StartContainer for \"6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165\" returns successfully"
Nov 12 20:45:43.757335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165-rootfs.mount: Deactivated successfully.
Nov 12 20:45:43.766375 containerd[1470]: time="2024-11-12T20:45:43.766295457Z" level=info msg="shim disconnected" id=6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165 namespace=k8s.io
Nov 12 20:45:43.766375 containerd[1470]: time="2024-11-12T20:45:43.766368138Z" level=warning msg="cleaning up after shim disconnected" id=6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165 namespace=k8s.io
Nov 12 20:45:43.766375 containerd[1470]: time="2024-11-12T20:45:43.766379058Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:45:43.781499 containerd[1470]: time="2024-11-12T20:45:43.781401726Z" level=warning msg="cleanup warnings time=\"2024-11-12T20:45:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 12 20:45:44.334855 kubelet[2634]: E1112 20:45:44.334807 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:44.337329 containerd[1470]: time="2024-11-12T20:45:44.337268357Z" level=info msg="CreateContainer within sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 12 20:45:44.789818 containerd[1470]: time="2024-11-12T20:45:44.789609625Z" level=info msg="CreateContainer within sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\""
Nov 12 20:45:44.790463 containerd[1470]: time="2024-11-12T20:45:44.790422554Z" level=info msg="StartContainer for \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\""
Nov 12 20:45:44.827379 systemd[1]: Started cri-containerd-a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676.scope - libcontainer container a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676.
Nov 12 20:45:44.866204 containerd[1470]: time="2024-11-12T20:45:44.866136702Z" level=info msg="StartContainer for \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\" returns successfully"
Nov 12 20:45:45.000818 kubelet[2634]: I1112 20:45:45.000765 2634 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Nov 12 20:45:45.159665 systemd[1]: Started sshd@13-10.0.0.51:22-10.0.0.1:56564.service - OpenSSH per-connection server daemon (10.0.0.1:56564).
Nov 12 20:45:45.203890 sshd[3488]: Accepted publickey for core from 10.0.0.1 port 56564 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:45:45.209170 kubelet[2634]: I1112 20:45:45.208753 2634 topology_manager.go:215] "Topology Admit Handler" podUID="d336e4a3-2dd9-4b40-8b40-986b211a2e64" podNamespace="kube-system" podName="coredns-76f75df574-6hmz5"
Nov 12 20:45:45.208806 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:45:45.211212 kubelet[2634]: I1112 20:45:45.209267 2634 topology_manager.go:215] "Topology Admit Handler" podUID="5a4ea717-32ad-4b60-8942-1f6116f1b991" podNamespace="kube-system" podName="coredns-76f75df574-ms8tr"
Nov 12 20:45:45.215792 systemd-logind[1455]: New session 14 of user core.
Nov 12 20:45:45.223334 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 12 20:45:45.233039 systemd[1]: Created slice kubepods-burstable-podd336e4a3_2dd9_4b40_8b40_986b211a2e64.slice - libcontainer container kubepods-burstable-podd336e4a3_2dd9_4b40_8b40_986b211a2e64.slice.
Nov 12 20:45:45.239414 systemd[1]: Created slice kubepods-burstable-pod5a4ea717_32ad_4b60_8942_1f6116f1b991.slice - libcontainer container kubepods-burstable-pod5a4ea717_32ad_4b60_8942_1f6116f1b991.slice.
Nov 12 20:45:45.268889 kubelet[2634]: I1112 20:45:45.268828 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4ea717-32ad-4b60-8942-1f6116f1b991-config-volume\") pod \"coredns-76f75df574-ms8tr\" (UID: \"5a4ea717-32ad-4b60-8942-1f6116f1b991\") " pod="kube-system/coredns-76f75df574-ms8tr"
Nov 12 20:45:45.268889 kubelet[2634]: I1112 20:45:45.268882 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpnjs\" (UniqueName: \"kubernetes.io/projected/5a4ea717-32ad-4b60-8942-1f6116f1b991-kube-api-access-xpnjs\") pod \"coredns-76f75df574-ms8tr\" (UID: \"5a4ea717-32ad-4b60-8942-1f6116f1b991\") " pod="kube-system/coredns-76f75df574-ms8tr"
Nov 12 20:45:45.269111 kubelet[2634]: I1112 20:45:45.268986 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d336e4a3-2dd9-4b40-8b40-986b211a2e64-config-volume\") pod \"coredns-76f75df574-6hmz5\" (UID: \"d336e4a3-2dd9-4b40-8b40-986b211a2e64\") " pod="kube-system/coredns-76f75df574-6hmz5"
Nov 12 20:45:45.269111 kubelet[2634]: I1112 20:45:45.269041 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bmxb\" (UniqueName: \"kubernetes.io/projected/d336e4a3-2dd9-4b40-8b40-986b211a2e64-kube-api-access-2bmxb\") pod \"coredns-76f75df574-6hmz5\" (UID: \"d336e4a3-2dd9-4b40-8b40-986b211a2e64\") " pod="kube-system/coredns-76f75df574-6hmz5"
Nov 12 20:45:45.342818 kubelet[2634]: E1112 20:45:45.342770 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:45.407357 sshd[3488]: pam_unix(sshd:session): session closed for user core
Nov 12 20:45:45.415610 systemd[1]: sshd@13-10.0.0.51:22-10.0.0.1:56564.service: Deactivated successfully.
Nov 12 20:45:45.418783 systemd[1]: session-14.scope: Deactivated successfully.
Nov 12 20:45:45.419513 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit.
Nov 12 20:45:45.421698 systemd-logind[1455]: Removed session 14.
Nov 12 20:45:45.537623 kubelet[2634]: E1112 20:45:45.537562 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:45.538719 containerd[1470]: time="2024-11-12T20:45:45.538677158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6hmz5,Uid:d336e4a3-2dd9-4b40-8b40-986b211a2e64,Namespace:kube-system,Attempt:0,}"
Nov 12 20:45:45.542760 kubelet[2634]: E1112 20:45:45.542724 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:45.543411 containerd[1470]: time="2024-11-12T20:45:45.543366524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ms8tr,Uid:5a4ea717-32ad-4b60-8942-1f6116f1b991,Namespace:kube-system,Attempt:0,}"
Nov 12 20:45:47.051414 kubelet[2634]: E1112 20:45:47.051381 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:47.400515 systemd-networkd[1396]: cilium_host: Link UP
Nov 12 20:45:47.401954 systemd-networkd[1396]: cilium_net: Link UP
Nov 12 20:45:47.402689 systemd-networkd[1396]: cilium_net: Gained carrier
Nov 12 20:45:47.402939 systemd-networkd[1396]: cilium_host: Gained carrier
Nov 12 20:45:47.515251 systemd-networkd[1396]: cilium_vxlan: Link UP
Nov 12 20:45:47.515263 systemd-networkd[1396]: cilium_vxlan: Gained carrier
Nov 12 20:45:47.737143 kernel: NET: Registered PF_ALG protocol family
Nov 12 20:45:48.115333 systemd-networkd[1396]: cilium_net: Gained IPv6LL
Nov 12 20:45:48.180446 systemd-networkd[1396]: cilium_host: Gained IPv6LL
Nov 12 20:45:48.510915 systemd-networkd[1396]: lxc_health: Link UP
Nov 12 20:45:48.525834 systemd-networkd[1396]: lxc_health: Gained carrier
Nov 12 20:45:48.680180 systemd-networkd[1396]: lxce6feb21a8a6d: Link UP
Nov 12 20:45:48.690544 systemd-networkd[1396]: lxc85f8ae60becc: Link UP
Nov 12 20:45:48.695481 kernel: eth0: renamed from tmp00e37
Nov 12 20:45:48.702998 systemd-networkd[1396]: lxce6feb21a8a6d: Gained carrier
Nov 12 20:45:48.704207 kernel: eth0: renamed from tmpa7db8
Nov 12 20:45:48.708952 systemd-networkd[1396]: lxc85f8ae60becc: Gained carrier
Nov 12 20:45:48.950065 systemd-networkd[1396]: cilium_vxlan: Gained IPv6LL
Nov 12 20:45:49.052188 kubelet[2634]: E1112 20:45:49.052152 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:49.187292 kubelet[2634]: I1112 20:45:49.186963 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-tpjnm" podStartSLOduration=15.463831906 podStartE2EDuration="42.186118508s" podCreationTimestamp="2024-11-12 20:45:07 +0000 UTC" firstStartedPulling="2024-11-12 20:45:09.127348415 +0000 UTC m=+14.259630145" lastFinishedPulling="2024-11-12 20:45:35.849635017 +0000 UTC m=+40.981916747" observedRunningTime="2024-11-12 20:45:45.508986548 +0000 UTC m=+50.641268278" watchObservedRunningTime="2024-11-12 20:45:49.186118508 +0000 UTC m=+54.318400239"
Nov 12 20:45:49.351526 kubelet[2634]: E1112 20:45:49.351493 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:49.779372 systemd-networkd[1396]: lxc85f8ae60becc: Gained IPv6LL
Nov 12 20:45:50.035411 systemd-networkd[1396]: lxce6feb21a8a6d: Gained IPv6LL
Nov 12 20:45:50.360387 kubelet[2634]: E1112 20:45:50.357583 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:45:50.357935 systemd-networkd[1396]: lxc_health: Gained IPv6LL
Nov 12 20:45:50.418633 systemd[1]: Started sshd@14-10.0.0.51:22-10.0.0.1:59534.service - OpenSSH per-connection server daemon (10.0.0.1:59534).
Nov 12 20:45:50.458794 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 59534 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:45:50.460436 sshd[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:45:50.466108 systemd-logind[1455]: New session 15 of user core.
Nov 12 20:45:50.475316 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 12 20:45:50.609071 sshd[3948]: pam_unix(sshd:session): session closed for user core
Nov 12 20:45:50.620460 systemd[1]: sshd@14-10.0.0.51:22-10.0.0.1:59534.service: Deactivated successfully.
Nov 12 20:45:50.622623 systemd[1]: session-15.scope: Deactivated successfully.
Nov 12 20:45:50.624597 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit.
Nov 12 20:45:50.630428 systemd[1]: Started sshd@15-10.0.0.51:22-10.0.0.1:59538.service - OpenSSH per-connection server daemon (10.0.0.1:59538).
Nov 12 20:45:50.632268 systemd-logind[1455]: Removed session 15.
Nov 12 20:45:50.660619 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 59538 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:45:50.662246 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:45:50.666433 systemd-logind[1455]: New session 16 of user core.
Nov 12 20:45:50.674291 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 12 20:45:50.964641 sshd[3963]: pam_unix(sshd:session): session closed for user core
Nov 12 20:45:50.976928 systemd[1]: sshd@15-10.0.0.51:22-10.0.0.1:59538.service: Deactivated successfully.
Nov 12 20:45:50.979808 systemd[1]: session-16.scope: Deactivated successfully.
Nov 12 20:45:50.982743 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit.
Nov 12 20:45:50.988628 systemd[1]: Started sshd@16-10.0.0.51:22-10.0.0.1:59548.service - OpenSSH per-connection server daemon (10.0.0.1:59548).
Nov 12 20:45:50.990525 systemd-logind[1455]: Removed session 16.
Nov 12 20:45:51.023816 sshd[3975]: Accepted publickey for core from 10.0.0.1 port 59548 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg
Nov 12 20:45:51.025575 sshd[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:45:51.029949 systemd-logind[1455]: New session 17 of user core.
Nov 12 20:45:51.037277 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 20:45:51.174633 sshd[3975]: pam_unix(sshd:session): session closed for user core
Nov 12 20:45:51.179109 systemd[1]: sshd@16-10.0.0.51:22-10.0.0.1:59548.service: Deactivated successfully.
Nov 12 20:45:51.181544 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 20:45:51.182480 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit.
Nov 12 20:45:51.183549 systemd-logind[1455]: Removed session 17.
Nov 12 20:45:52.950567 containerd[1470]: time="2024-11-12T20:45:52.949737240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:45:52.950567 containerd[1470]: time="2024-11-12T20:45:52.950515887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:45:52.951056 containerd[1470]: time="2024-11-12T20:45:52.950590429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:52.951056 containerd[1470]: time="2024-11-12T20:45:52.950724235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:52.979433 systemd[1]: Started cri-containerd-a7db83733d94ec245348bd1cd5820faac1044b01ad311d13ce5215ea93218684.scope - libcontainer container a7db83733d94ec245348bd1cd5820faac1044b01ad311d13ce5215ea93218684.
Nov 12 20:45:52.987633 containerd[1470]: time="2024-11-12T20:45:52.987473578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:45:52.988377 containerd[1470]: time="2024-11-12T20:45:52.988308141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:45:52.988377 containerd[1470]: time="2024-11-12T20:45:52.988356724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:52.988677 containerd[1470]: time="2024-11-12T20:45:52.988479288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:45:52.998244 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 12 20:45:53.017434 systemd[1]: Started cri-containerd-00e37af7acf066c3bcbbeceac950537c327ce76648becc697c21a82e08454ff1.scope - libcontainer container 00e37af7acf066c3bcbbeceac950537c327ce76648becc697c21a82e08454ff1.
Nov 12 20:45:53.035356 containerd[1470]: time="2024-11-12T20:45:53.035128066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ms8tr,Uid:5a4ea717-32ad-4b60-8942-1f6116f1b991,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7db83733d94ec245348bd1cd5820faac1044b01ad311d13ce5215ea93218684\"" Nov 12 20:45:53.034966 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:45:53.036614 kubelet[2634]: E1112 20:45:53.036574 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:53.040926 containerd[1470]: time="2024-11-12T20:45:53.040881046Z" level=info msg="CreateContainer within sandbox \"a7db83733d94ec245348bd1cd5820faac1044b01ad311d13ce5215ea93218684\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:45:53.066464 containerd[1470]: time="2024-11-12T20:45:53.066371529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6hmz5,Uid:d336e4a3-2dd9-4b40-8b40-986b211a2e64,Namespace:kube-system,Attempt:0,} returns sandbox id \"00e37af7acf066c3bcbbeceac950537c327ce76648becc697c21a82e08454ff1\"" Nov 12 20:45:53.067518 kubelet[2634]: E1112 20:45:53.067373 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:53.070356 containerd[1470]: time="2024-11-12T20:45:53.070296471Z" level=info msg="CreateContainer within sandbox \"00e37af7acf066c3bcbbeceac950537c327ce76648becc697c21a82e08454ff1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:45:53.079513 containerd[1470]: time="2024-11-12T20:45:53.079044217Z" level=info msg="CreateContainer within sandbox \"a7db83733d94ec245348bd1cd5820faac1044b01ad311d13ce5215ea93218684\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fdbbaf1f1c61a106005ac3d25b79d31a7581b0b5e2ec5150398a9521e4ecb0a1\"" Nov 12 20:45:53.081145 containerd[1470]: time="2024-11-12T20:45:53.080304242Z" level=info msg="StartContainer for \"fdbbaf1f1c61a106005ac3d25b79d31a7581b0b5e2ec5150398a9521e4ecb0a1\"" Nov 12 20:45:53.113296 systemd[1]: Started cri-containerd-fdbbaf1f1c61a106005ac3d25b79d31a7581b0b5e2ec5150398a9521e4ecb0a1.scope - libcontainer container fdbbaf1f1c61a106005ac3d25b79d31a7581b0b5e2ec5150398a9521e4ecb0a1. Nov 12 20:45:53.411524 containerd[1470]: time="2024-11-12T20:45:53.411419441Z" level=info msg="StartContainer for \"fdbbaf1f1c61a106005ac3d25b79d31a7581b0b5e2ec5150398a9521e4ecb0a1\" returns successfully" Nov 12 20:45:53.414254 kubelet[2634]: E1112 20:45:53.414001 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:53.444762 containerd[1470]: time="2024-11-12T20:45:53.444693179Z" level=info msg="CreateContainer within sandbox \"00e37af7acf066c3bcbbeceac950537c327ce76648becc697c21a82e08454ff1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d507b5f28479771215454ba168eeddbebad2a7ac19e62f72abd669949aaff51d\"" Nov 12 20:45:53.445476 containerd[1470]: time="2024-11-12T20:45:53.445425066Z" level=info msg="StartContainer for \"d507b5f28479771215454ba168eeddbebad2a7ac19e62f72abd669949aaff51d\"" Nov 12 20:45:53.485488 systemd[1]: Started cri-containerd-d507b5f28479771215454ba168eeddbebad2a7ac19e62f72abd669949aaff51d.scope - libcontainer container d507b5f28479771215454ba168eeddbebad2a7ac19e62f72abd669949aaff51d. 
Nov 12 20:45:53.523951 containerd[1470]: time="2024-11-12T20:45:53.523658838Z" level=info msg="StartContainer for \"d507b5f28479771215454ba168eeddbebad2a7ac19e62f72abd669949aaff51d\" returns successfully" Nov 12 20:45:53.956560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3444562452.mount: Deactivated successfully. Nov 12 20:45:54.419716 kubelet[2634]: E1112 20:45:54.419458 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:54.419716 kubelet[2634]: E1112 20:45:54.419625 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:54.429887 kubelet[2634]: I1112 20:45:54.429839 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ms8tr" podStartSLOduration=47.429792711 podStartE2EDuration="47.429792711s" podCreationTimestamp="2024-11-12 20:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:45:53.4609664 +0000 UTC m=+58.593248250" watchObservedRunningTime="2024-11-12 20:45:54.429792711 +0000 UTC m=+59.562074442" Nov 12 20:45:54.443015 kubelet[2634]: I1112 20:45:54.442705 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-6hmz5" podStartSLOduration=47.442620205 podStartE2EDuration="47.442620205s" podCreationTimestamp="2024-11-12 20:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:45:54.429948839 +0000 UTC m=+59.562230570" watchObservedRunningTime="2024-11-12 20:45:54.442620205 +0000 UTC m=+59.574901935" Nov 12 20:45:55.425268 kubelet[2634]: E1112 20:45:55.425061 2634 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:55.426563 kubelet[2634]: E1112 20:45:55.425218 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:56.194802 systemd[1]: Started sshd@17-10.0.0.51:22-10.0.0.1:54262.service - OpenSSH per-connection server daemon (10.0.0.1:54262). Nov 12 20:45:56.236494 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 54262 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:45:56.238970 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:45:56.243019 systemd-logind[1455]: New session 18 of user core. Nov 12 20:45:56.254328 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 20:45:56.395910 sshd[4173]: pam_unix(sshd:session): session closed for user core Nov 12 20:45:56.399668 systemd[1]: sshd@17-10.0.0.51:22-10.0.0.1:54262.service: Deactivated successfully. Nov 12 20:45:56.401895 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 20:45:56.402569 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit. Nov 12 20:45:56.403761 systemd-logind[1455]: Removed session 18. 
Nov 12 20:45:56.426841 kubelet[2634]: E1112 20:45:56.426804 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:45:57.428920 kubelet[2634]: E1112 20:45:57.428867 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:01.411708 systemd[1]: Started sshd@18-10.0.0.51:22-10.0.0.1:54274.service - OpenSSH per-connection server daemon (10.0.0.1:54274). Nov 12 20:46:01.446879 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 54274 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:01.448882 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:01.453213 systemd-logind[1455]: New session 19 of user core. Nov 12 20:46:01.463303 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 20:46:01.575092 sshd[4187]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:01.578959 systemd[1]: sshd@18-10.0.0.51:22-10.0.0.1:54274.service: Deactivated successfully. Nov 12 20:46:01.580834 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 20:46:01.581439 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit. Nov 12 20:46:01.582290 systemd-logind[1455]: Removed session 19. Nov 12 20:46:06.590315 systemd[1]: Started sshd@19-10.0.0.51:22-10.0.0.1:56316.service - OpenSSH per-connection server daemon (10.0.0.1:56316). Nov 12 20:46:06.623709 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 56316 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:06.625343 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:06.628998 systemd-logind[1455]: New session 20 of user core. 
Nov 12 20:46:06.639266 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 20:46:06.776655 sshd[4201]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:06.783768 systemd[1]: sshd@19-10.0.0.51:22-10.0.0.1:56316.service: Deactivated successfully. Nov 12 20:46:06.785619 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 20:46:06.787166 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit. Nov 12 20:46:06.796768 systemd[1]: Started sshd@20-10.0.0.51:22-10.0.0.1:56318.service - OpenSSH per-connection server daemon (10.0.0.1:56318). Nov 12 20:46:06.797813 systemd-logind[1455]: Removed session 20. Nov 12 20:46:06.829562 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 56318 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:06.831473 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:06.836521 systemd-logind[1455]: New session 21 of user core. Nov 12 20:46:06.846281 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 20:46:07.122443 sshd[4216]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:07.135586 systemd[1]: sshd@20-10.0.0.51:22-10.0.0.1:56318.service: Deactivated successfully. Nov 12 20:46:07.137464 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 20:46:07.139692 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit. Nov 12 20:46:07.144987 systemd[1]: Started sshd@21-10.0.0.51:22-10.0.0.1:56332.service - OpenSSH per-connection server daemon (10.0.0.1:56332). Nov 12 20:46:07.145911 systemd-logind[1455]: Removed session 21. 
Nov 12 20:46:07.179455 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 56332 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:07.181127 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:07.185632 systemd-logind[1455]: New session 22 of user core. Nov 12 20:46:07.194248 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 20:46:08.739273 sshd[4228]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:08.747447 systemd[1]: sshd@21-10.0.0.51:22-10.0.0.1:56332.service: Deactivated successfully. Nov 12 20:46:08.750455 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 20:46:08.752801 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit. Nov 12 20:46:08.761736 systemd[1]: Started sshd@22-10.0.0.51:22-10.0.0.1:56346.service - OpenSSH per-connection server daemon (10.0.0.1:56346). Nov 12 20:46:08.763369 systemd-logind[1455]: Removed session 22. Nov 12 20:46:08.803335 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 56346 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:08.809431 sshd[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:08.818908 systemd-logind[1455]: New session 23 of user core. Nov 12 20:46:08.824286 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 12 20:46:09.064220 sshd[4255]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:09.075487 systemd[1]: sshd@22-10.0.0.51:22-10.0.0.1:56346.service: Deactivated successfully. Nov 12 20:46:09.077393 systemd[1]: session-23.scope: Deactivated successfully. Nov 12 20:46:09.078875 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit. Nov 12 20:46:09.084406 systemd[1]: Started sshd@23-10.0.0.51:22-10.0.0.1:56378.service - OpenSSH per-connection server daemon (10.0.0.1:56378). 
Nov 12 20:46:09.085434 systemd-logind[1455]: Removed session 23. Nov 12 20:46:09.114130 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 56378 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:09.115829 sshd[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:09.119846 systemd-logind[1455]: New session 24 of user core. Nov 12 20:46:09.129263 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 12 20:46:09.244263 sshd[4267]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:09.248133 systemd[1]: sshd@23-10.0.0.51:22-10.0.0.1:56378.service: Deactivated successfully. Nov 12 20:46:09.250195 systemd[1]: session-24.scope: Deactivated successfully. Nov 12 20:46:09.250849 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit. Nov 12 20:46:09.251732 systemd-logind[1455]: Removed session 24. Nov 12 20:46:09.962091 kubelet[2634]: E1112 20:46:09.962021 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:11.961735 kubelet[2634]: E1112 20:46:11.961634 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:11.962277 kubelet[2634]: E1112 20:46:11.961903 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:14.256962 systemd[1]: Started sshd@24-10.0.0.51:22-10.0.0.1:56418.service - OpenSSH per-connection server daemon (10.0.0.1:56418). 
Nov 12 20:46:14.293433 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 56418 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:14.295246 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:14.299650 systemd-logind[1455]: New session 25 of user core. Nov 12 20:46:14.314372 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 12 20:46:14.430818 sshd[4282]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:14.436414 systemd[1]: sshd@24-10.0.0.51:22-10.0.0.1:56418.service: Deactivated successfully. Nov 12 20:46:14.438737 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 20:46:14.439777 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit. Nov 12 20:46:14.440885 systemd-logind[1455]: Removed session 25. Nov 12 20:46:19.442928 systemd[1]: Started sshd@25-10.0.0.51:22-10.0.0.1:53364.service - OpenSSH per-connection server daemon (10.0.0.1:53364). Nov 12 20:46:19.476816 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 53364 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:19.478568 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:19.482734 systemd-logind[1455]: New session 26 of user core. Nov 12 20:46:19.488359 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 12 20:46:19.622509 sshd[4296]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:19.628285 systemd[1]: sshd@25-10.0.0.51:22-10.0.0.1:53364.service: Deactivated successfully. Nov 12 20:46:19.631570 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 20:46:19.632673 systemd-logind[1455]: Session 26 logged out. Waiting for processes to exit. Nov 12 20:46:19.633568 systemd-logind[1455]: Removed session 26. 
Nov 12 20:46:24.634139 systemd[1]: Started sshd@26-10.0.0.51:22-10.0.0.1:53372.service - OpenSSH per-connection server daemon (10.0.0.1:53372). Nov 12 20:46:24.669292 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 53372 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:24.671287 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:24.676323 systemd-logind[1455]: New session 27 of user core. Nov 12 20:46:24.690372 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 12 20:46:24.795882 sshd[4314]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:24.800161 systemd[1]: sshd@26-10.0.0.51:22-10.0.0.1:53372.service: Deactivated successfully. Nov 12 20:46:24.802725 systemd[1]: session-27.scope: Deactivated successfully. Nov 12 20:46:24.803416 systemd-logind[1455]: Session 27 logged out. Waiting for processes to exit. Nov 12 20:46:24.804490 systemd-logind[1455]: Removed session 27. Nov 12 20:46:29.808412 systemd[1]: Started sshd@27-10.0.0.51:22-10.0.0.1:54330.service - OpenSSH per-connection server daemon (10.0.0.1:54330). Nov 12 20:46:29.842811 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 54330 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:29.845087 sshd[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:29.850342 systemd-logind[1455]: New session 28 of user core. Nov 12 20:46:29.863445 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 12 20:46:29.983715 sshd[4328]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:29.988361 systemd[1]: sshd@27-10.0.0.51:22-10.0.0.1:54330.service: Deactivated successfully. Nov 12 20:46:29.991250 systemd[1]: session-28.scope: Deactivated successfully. Nov 12 20:46:29.991997 systemd-logind[1455]: Session 28 logged out. Waiting for processes to exit. 
Nov 12 20:46:29.993124 systemd-logind[1455]: Removed session 28. Nov 12 20:46:30.961986 kubelet[2634]: E1112 20:46:30.961928 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:32.961936 kubelet[2634]: E1112 20:46:32.961849 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:46:34.998891 systemd[1]: Started sshd@28-10.0.0.51:22-10.0.0.1:54354.service - OpenSSH per-connection server daemon (10.0.0.1:54354). Nov 12 20:46:35.031216 sshd[4342]: Accepted publickey for core from 10.0.0.1 port 54354 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:35.032800 sshd[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:35.037350 systemd-logind[1455]: New session 29 of user core. Nov 12 20:46:35.048227 systemd[1]: Started session-29.scope - Session 29 of User core. Nov 12 20:46:35.154867 sshd[4342]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:35.164360 systemd[1]: sshd@28-10.0.0.51:22-10.0.0.1:54354.service: Deactivated successfully. Nov 12 20:46:35.166491 systemd[1]: session-29.scope: Deactivated successfully. Nov 12 20:46:35.167903 systemd-logind[1455]: Session 29 logged out. Waiting for processes to exit. Nov 12 20:46:35.174392 systemd[1]: Started sshd@29-10.0.0.51:22-10.0.0.1:54358.service - OpenSSH per-connection server daemon (10.0.0.1:54358). Nov 12 20:46:35.175389 systemd-logind[1455]: Removed session 29. 
Nov 12 20:46:35.203931 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 54358 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:35.205594 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:35.210087 systemd-logind[1455]: New session 30 of user core. Nov 12 20:46:35.221234 systemd[1]: Started session-30.scope - Session 30 of User core. Nov 12 20:46:36.659168 containerd[1470]: time="2024-11-12T20:46:36.659117598Z" level=info msg="StopContainer for \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\" with timeout 30 (s)" Nov 12 20:46:36.661183 containerd[1470]: time="2024-11-12T20:46:36.661082551Z" level=info msg="Stop container \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\" with signal terminated" Nov 12 20:46:36.695927 systemd[1]: cri-containerd-5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca.scope: Deactivated successfully. Nov 12 20:46:36.714278 containerd[1470]: time="2024-11-12T20:46:36.714220313Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:46:36.716193 containerd[1470]: time="2024-11-12T20:46:36.716159486Z" level=info msg="StopContainer for \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\" with timeout 2 (s)" Nov 12 20:46:36.716393 containerd[1470]: time="2024-11-12T20:46:36.716373000Z" level=info msg="Stop container \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\" with signal terminated" Nov 12 20:46:36.719943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca-rootfs.mount: Deactivated successfully. 
Nov 12 20:46:36.723692 systemd-networkd[1396]: lxc_health: Link DOWN Nov 12 20:46:36.723700 systemd-networkd[1396]: lxc_health: Lost carrier Nov 12 20:46:36.732591 containerd[1470]: time="2024-11-12T20:46:36.732503408Z" level=info msg="shim disconnected" id=5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca namespace=k8s.io Nov 12 20:46:36.732591 containerd[1470]: time="2024-11-12T20:46:36.732585593Z" level=warning msg="cleaning up after shim disconnected" id=5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca namespace=k8s.io Nov 12 20:46:36.732591 containerd[1470]: time="2024-11-12T20:46:36.732596363Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:36.751347 containerd[1470]: time="2024-11-12T20:46:36.751291256Z" level=info msg="StopContainer for \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\" returns successfully" Nov 12 20:46:36.751989 containerd[1470]: time="2024-11-12T20:46:36.751963937Z" level=info msg="StopPodSandbox for \"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\"" Nov 12 20:46:36.752029 containerd[1470]: time="2024-11-12T20:46:36.752013080Z" level=info msg="Container to stop \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:46:36.754718 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9-shm.mount: Deactivated successfully. Nov 12 20:46:36.756651 systemd[1]: cri-containerd-a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676.scope: Deactivated successfully. Nov 12 20:46:36.757365 systemd[1]: cri-containerd-a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676.scope: Consumed 7.830s CPU time. Nov 12 20:46:36.768752 systemd[1]: cri-containerd-93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9.scope: Deactivated successfully. 
Nov 12 20:46:36.779533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676-rootfs.mount: Deactivated successfully. Nov 12 20:46:36.789481 containerd[1470]: time="2024-11-12T20:46:36.789390333Z" level=info msg="shim disconnected" id=a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676 namespace=k8s.io Nov 12 20:46:36.789481 containerd[1470]: time="2024-11-12T20:46:36.789469012Z" level=warning msg="cleaning up after shim disconnected" id=a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676 namespace=k8s.io Nov 12 20:46:36.789481 containerd[1470]: time="2024-11-12T20:46:36.789481555Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:36.790544 containerd[1470]: time="2024-11-12T20:46:36.790480773Z" level=info msg="shim disconnected" id=93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9 namespace=k8s.io Nov 12 20:46:36.790594 containerd[1470]: time="2024-11-12T20:46:36.790547740Z" level=warning msg="cleaning up after shim disconnected" id=93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9 namespace=k8s.io Nov 12 20:46:36.790594 containerd[1470]: time="2024-11-12T20:46:36.790560023Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:36.792283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9-rootfs.mount: Deactivated successfully. 
Nov 12 20:46:36.810007 containerd[1470]: time="2024-11-12T20:46:36.809948537Z" level=info msg="StopContainer for \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\" returns successfully" Nov 12 20:46:36.810588 containerd[1470]: time="2024-11-12T20:46:36.810540445Z" level=info msg="StopPodSandbox for \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\"" Nov 12 20:46:36.810641 containerd[1470]: time="2024-11-12T20:46:36.810595820Z" level=info msg="Container to stop \"6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:46:36.810641 containerd[1470]: time="2024-11-12T20:46:36.810609485Z" level=info msg="Container to stop \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:46:36.810641 containerd[1470]: time="2024-11-12T20:46:36.810618652Z" level=info msg="Container to stop \"1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:46:36.810641 containerd[1470]: time="2024-11-12T20:46:36.810627970Z" level=info msg="Container to stop \"d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:46:36.810641 containerd[1470]: time="2024-11-12T20:46:36.810638631Z" level=info msg="Container to stop \"33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 12 20:46:36.816050 containerd[1470]: time="2024-11-12T20:46:36.816011172Z" level=info msg="TearDown network for sandbox \"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\" successfully" Nov 12 20:46:36.816050 containerd[1470]: time="2024-11-12T20:46:36.816045356Z" level=info msg="StopPodSandbox for 
\"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\" returns successfully" Nov 12 20:46:36.817667 systemd[1]: cri-containerd-c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6.scope: Deactivated successfully. Nov 12 20:46:36.843572 containerd[1470]: time="2024-11-12T20:46:36.843488860Z" level=info msg="shim disconnected" id=c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6 namespace=k8s.io Nov 12 20:46:36.843572 containerd[1470]: time="2024-11-12T20:46:36.843549594Z" level=warning msg="cleaning up after shim disconnected" id=c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6 namespace=k8s.io Nov 12 20:46:36.843572 containerd[1470]: time="2024-11-12T20:46:36.843558772Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:46:36.858438 containerd[1470]: time="2024-11-12T20:46:36.858370478Z" level=warning msg="cleanup warnings time=\"2024-11-12T20:46:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 12 20:46:36.860197 containerd[1470]: time="2024-11-12T20:46:36.860150130Z" level=info msg="TearDown network for sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" successfully" Nov 12 20:46:36.860197 containerd[1470]: time="2024-11-12T20:46:36.860186379Z" level=info msg="StopPodSandbox for \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" returns successfully" Nov 12 20:46:36.961398 kubelet[2634]: I1112 20:46:36.961224 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8e5e584-3015-4571-84ad-53b55274225d-cilium-config-path\") pod \"c8e5e584-3015-4571-84ad-53b55274225d\" (UID: \"c8e5e584-3015-4571-84ad-53b55274225d\") " Nov 12 20:46:36.961398 kubelet[2634]: I1112 20:46:36.961301 2634 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-cilium-run\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:36.961398 kubelet[2634]: I1112 20:46:36.961340 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-cilium-cgroup\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:36.961398 kubelet[2634]: I1112 20:46:36.961372 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-host-proc-sys-net\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:36.961398 kubelet[2634]: I1112 20:46:36.961395 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-lib-modules\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:36.961951 kubelet[2634]: I1112 20:46:36.961414 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e580adf9-b9b8-4e11-b510-31158322de7d-cilium-config-path\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:36.961951 kubelet[2634]: I1112 20:46:36.961436 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2psxj\" (UniqueName: \"kubernetes.io/projected/e580adf9-b9b8-4e11-b510-31158322de7d-kube-api-access-2psxj\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: 
\"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:36.961951 kubelet[2634]: I1112 20:46:36.961453 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-host-proc-sys-kernel\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:36.961951 kubelet[2634]: I1112 20:46:36.961473 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-cni-path\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:36.961951 kubelet[2634]: I1112 20:46:36.961489 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-hostproc\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:36.961951 kubelet[2634]: I1112 20:46:36.961509 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e580adf9-b9b8-4e11-b510-31158322de7d-clustermesh-secrets\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:36.962126 kubelet[2634]: I1112 20:46:36.961526 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-xtables-lock\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:36.962126 kubelet[2634]: I1112 20:46:36.961548 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-852tm\" (UniqueName: 
\"kubernetes.io/projected/c8e5e584-3015-4571-84ad-53b55274225d-kube-api-access-852tm\") pod \"c8e5e584-3015-4571-84ad-53b55274225d\" (UID: \"c8e5e584-3015-4571-84ad-53b55274225d\") " Nov 12 20:46:36.963070 kubelet[2634]: I1112 20:46:36.963009 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-hostproc" (OuterVolumeSpecName: "hostproc") pod "e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:36.963070 kubelet[2634]: I1112 20:46:36.963059 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:36.963070 kubelet[2634]: I1112 20:46:36.963080 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-cni-path" (OuterVolumeSpecName: "cni-path") pod "e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:36.963717 kubelet[2634]: I1112 20:46:36.963682 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:36.963717 kubelet[2634]: I1112 20:46:36.963711 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:36.963717 kubelet[2634]: I1112 20:46:36.963727 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:36.963933 kubelet[2634]: I1112 20:46:36.963743 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:36.965551 kubelet[2634]: I1112 20:46:36.965349 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8e5e584-3015-4571-84ad-53b55274225d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c8e5e584-3015-4571-84ad-53b55274225d" (UID: "c8e5e584-3015-4571-84ad-53b55274225d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 20:46:36.965748 kubelet[2634]: I1112 20:46:36.965569 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:36.966869 kubelet[2634]: I1112 20:46:36.966832 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e5e584-3015-4571-84ad-53b55274225d-kube-api-access-852tm" (OuterVolumeSpecName: "kube-api-access-852tm") pod "c8e5e584-3015-4571-84ad-53b55274225d" (UID: "c8e5e584-3015-4571-84ad-53b55274225d"). InnerVolumeSpecName "kube-api-access-852tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 20:46:36.967373 kubelet[2634]: I1112 20:46:36.967235 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e580adf9-b9b8-4e11-b510-31158322de7d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 20:46:36.968447 kubelet[2634]: I1112 20:46:36.968419 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e580adf9-b9b8-4e11-b510-31158322de7d-kube-api-access-2psxj" (OuterVolumeSpecName: "kube-api-access-2psxj") pod "e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "kube-api-access-2psxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 20:46:36.969514 kubelet[2634]: I1112 20:46:36.969483 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e580adf9-b9b8-4e11-b510-31158322de7d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 20:46:36.973669 systemd[1]: Removed slice kubepods-besteffort-podc8e5e584_3015_4571_84ad_53b55274225d.slice - libcontainer container kubepods-besteffort-podc8e5e584_3015_4571_84ad_53b55274225d.slice. Nov 12 20:46:37.062007 kubelet[2634]: I1112 20:46:37.061939 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-bpf-maps\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:37.062007 kubelet[2634]: I1112 20:46:37.062004 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e580adf9-b9b8-4e11-b510-31158322de7d-hubble-tls\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:37.062007 kubelet[2634]: I1112 20:46:37.062025 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-etc-cni-netd\") pod \"e580adf9-b9b8-4e11-b510-31158322de7d\" (UID: \"e580adf9-b9b8-4e11-b510-31158322de7d\") " Nov 12 20:46:37.062256 kubelet[2634]: I1112 20:46:37.062029 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod 
"e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:37.062256 kubelet[2634]: I1112 20:46:37.062074 2634 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-852tm\" (UniqueName: \"kubernetes.io/projected/c8e5e584-3015-4571-84ad-53b55274225d-kube-api-access-852tm\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.062256 kubelet[2634]: I1112 20:46:37.062089 2634 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e580adf9-b9b8-4e11-b510-31158322de7d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.062256 kubelet[2634]: I1112 20:46:37.062088 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:46:37.062256 kubelet[2634]: I1112 20:46:37.062114 2634 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.062256 kubelet[2634]: I1112 20:46:37.062130 2634 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8e5e584-3015-4571-84ad-53b55274225d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.062256 kubelet[2634]: I1112 20:46:37.062140 2634 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.062419 kubelet[2634]: I1112 20:46:37.062150 2634 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.062419 kubelet[2634]: I1112 20:46:37.062159 2634 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e580adf9-b9b8-4e11-b510-31158322de7d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.062419 kubelet[2634]: I1112 20:46:37.062168 2634 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.062419 kubelet[2634]: I1112 20:46:37.062178 2634 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.062419 kubelet[2634]: I1112 
20:46:37.062187 2634 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2psxj\" (UniqueName: \"kubernetes.io/projected/e580adf9-b9b8-4e11-b510-31158322de7d-kube-api-access-2psxj\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.062419 kubelet[2634]: I1112 20:46:37.062199 2634 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.062419 kubelet[2634]: I1112 20:46:37.062209 2634 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.062419 kubelet[2634]: I1112 20:46:37.062217 2634 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.065289 kubelet[2634]: I1112 20:46:37.065241 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e580adf9-b9b8-4e11-b510-31158322de7d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e580adf9-b9b8-4e11-b510-31158322de7d" (UID: "e580adf9-b9b8-4e11-b510-31158322de7d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 20:46:37.162658 kubelet[2634]: I1112 20:46:37.162595 2634 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.162658 kubelet[2634]: I1112 20:46:37.162650 2634 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e580adf9-b9b8-4e11-b510-31158322de7d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.162658 kubelet[2634]: I1112 20:46:37.162661 2634 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e580adf9-b9b8-4e11-b510-31158322de7d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 12 20:46:37.509813 kubelet[2634]: I1112 20:46:37.509776 2634 scope.go:117] "RemoveContainer" containerID="5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca" Nov 12 20:46:37.513943 containerd[1470]: time="2024-11-12T20:46:37.513906018Z" level=info msg="RemoveContainer for \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\"" Nov 12 20:46:37.516620 systemd[1]: Removed slice kubepods-burstable-pode580adf9_b9b8_4e11_b510_31158322de7d.slice - libcontainer container kubepods-burstable-pode580adf9_b9b8_4e11_b510_31158322de7d.slice. Nov 12 20:46:37.516702 systemd[1]: kubepods-burstable-pode580adf9_b9b8_4e11_b510_31158322de7d.slice: Consumed 7.943s CPU time. 
Nov 12 20:46:37.594062 containerd[1470]: time="2024-11-12T20:46:37.593995938Z" level=info msg="RemoveContainer for \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\" returns successfully" Nov 12 20:46:37.594434 kubelet[2634]: I1112 20:46:37.594396 2634 scope.go:117] "RemoveContainer" containerID="5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca" Nov 12 20:46:37.600126 containerd[1470]: time="2024-11-12T20:46:37.597888753Z" level=error msg="ContainerStatus for \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\": not found" Nov 12 20:46:37.600306 kubelet[2634]: E1112 20:46:37.600285 2634 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\": not found" containerID="5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca" Nov 12 20:46:37.600431 kubelet[2634]: I1112 20:46:37.600409 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca"} err="failed to get container status \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c756cbd52c4a5a27bf1da181dfb926f43cf8c62e10330977d3a635f159ee4ca\": not found" Nov 12 20:46:37.600636 kubelet[2634]: I1112 20:46:37.600435 2634 scope.go:117] "RemoveContainer" containerID="a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676" Nov 12 20:46:37.603153 containerd[1470]: time="2024-11-12T20:46:37.602293004Z" level=info msg="RemoveContainer for \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\"" Nov 12 
20:46:37.608372 containerd[1470]: time="2024-11-12T20:46:37.608321453Z" level=info msg="RemoveContainer for \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\" returns successfully" Nov 12 20:46:37.608599 kubelet[2634]: I1112 20:46:37.608572 2634 scope.go:117] "RemoveContainer" containerID="6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165" Nov 12 20:46:37.610203 containerd[1470]: time="2024-11-12T20:46:37.610178041Z" level=info msg="RemoveContainer for \"6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165\"" Nov 12 20:46:37.614859 containerd[1470]: time="2024-11-12T20:46:37.614835089Z" level=info msg="RemoveContainer for \"6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165\" returns successfully" Nov 12 20:46:37.615028 kubelet[2634]: I1112 20:46:37.615010 2634 scope.go:117] "RemoveContainer" containerID="1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e" Nov 12 20:46:37.615917 containerd[1470]: time="2024-11-12T20:46:37.615862561Z" level=info msg="RemoveContainer for \"1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e\"" Nov 12 20:46:37.619462 containerd[1470]: time="2024-11-12T20:46:37.619429298Z" level=info msg="RemoveContainer for \"1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e\" returns successfully" Nov 12 20:46:37.619604 kubelet[2634]: I1112 20:46:37.619582 2634 scope.go:117] "RemoveContainer" containerID="33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540" Nov 12 20:46:37.620364 containerd[1470]: time="2024-11-12T20:46:37.620338356Z" level=info msg="RemoveContainer for \"33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540\"" Nov 12 20:46:37.623865 containerd[1470]: time="2024-11-12T20:46:37.623836304Z" level=info msg="RemoveContainer for \"33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540\" returns successfully" Nov 12 20:46:37.624057 kubelet[2634]: I1112 20:46:37.623978 2634 scope.go:117] "RemoveContainer" 
containerID="d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668" Nov 12 20:46:37.624852 containerd[1470]: time="2024-11-12T20:46:37.624818860Z" level=info msg="RemoveContainer for \"d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668\"" Nov 12 20:46:37.627982 containerd[1470]: time="2024-11-12T20:46:37.627951540Z" level=info msg="RemoveContainer for \"d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668\" returns successfully" Nov 12 20:46:37.628144 kubelet[2634]: I1112 20:46:37.628119 2634 scope.go:117] "RemoveContainer" containerID="a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676" Nov 12 20:46:37.628320 containerd[1470]: time="2024-11-12T20:46:37.628284088Z" level=error msg="ContainerStatus for \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\": not found" Nov 12 20:46:37.628447 kubelet[2634]: E1112 20:46:37.628428 2634 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\": not found" containerID="a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676" Nov 12 20:46:37.628501 kubelet[2634]: I1112 20:46:37.628474 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676"} err="failed to get container status \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\": rpc error: code = NotFound desc = an error occurred when try to find container \"a05b9eaf81c693a3054dc1f048346e91b70912047e367f685f38aeeffccd1676\": not found" Nov 12 20:46:37.628501 kubelet[2634]: I1112 20:46:37.628492 2634 scope.go:117] "RemoveContainer" 
containerID="6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165" Nov 12 20:46:37.628680 containerd[1470]: time="2024-11-12T20:46:37.628644699Z" level=error msg="ContainerStatus for \"6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165\": not found" Nov 12 20:46:37.628757 kubelet[2634]: E1112 20:46:37.628740 2634 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165\": not found" containerID="6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165" Nov 12 20:46:37.628798 kubelet[2634]: I1112 20:46:37.628774 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165"} err="failed to get container status \"6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165\": rpc error: code = NotFound desc = an error occurred when try to find container \"6115e73759757cc40b19f8082ef4b134f876b0390d1c2e0a6a1776fd5c98c165\": not found" Nov 12 20:46:37.628798 kubelet[2634]: I1112 20:46:37.628785 2634 scope.go:117] "RemoveContainer" containerID="1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e" Nov 12 20:46:37.628974 containerd[1470]: time="2024-11-12T20:46:37.628944265Z" level=error msg="ContainerStatus for \"1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e\": not found" Nov 12 20:46:37.629092 kubelet[2634]: E1112 20:46:37.629074 2634 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e\": not found" containerID="1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e" Nov 12 20:46:37.629176 kubelet[2634]: I1112 20:46:37.629124 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e"} err="failed to get container status \"1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ce4a3bfafa2be8877a58f870de72f9c6f6a4fbdf49946c3010eddfa5bf9a14e\": not found" Nov 12 20:46:37.629176 kubelet[2634]: I1112 20:46:37.629137 2634 scope.go:117] "RemoveContainer" containerID="33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540" Nov 12 20:46:37.629307 containerd[1470]: time="2024-11-12T20:46:37.629279428Z" level=error msg="ContainerStatus for \"33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540\": not found" Nov 12 20:46:37.629424 kubelet[2634]: E1112 20:46:37.629406 2634 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540\": not found" containerID="33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540" Nov 12 20:46:37.629467 kubelet[2634]: I1112 20:46:37.629439 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540"} err="failed to get container status \"33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"33c5fa002236a9037d4ac6ac47eb3c6c39393d903bbb1baffb7cb1e9c3b69540\": not found" Nov 12 20:46:37.629467 kubelet[2634]: I1112 20:46:37.629451 2634 scope.go:117] "RemoveContainer" containerID="d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668" Nov 12 20:46:37.629646 containerd[1470]: time="2024-11-12T20:46:37.629614811Z" level=error msg="ContainerStatus for \"d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668\": not found" Nov 12 20:46:37.629762 kubelet[2634]: E1112 20:46:37.629742 2634 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668\": not found" containerID="d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668" Nov 12 20:46:37.629817 kubelet[2634]: I1112 20:46:37.629770 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668"} err="failed to get container status \"d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9844677272530b2168019b7056bba1ec231c9b15097b0ecf5d6e53104422668\": not found" Nov 12 20:46:37.692328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6-rootfs.mount: Deactivated successfully. Nov 12 20:46:37.692433 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6-shm.mount: Deactivated successfully. 
Nov 12 20:46:37.692508 systemd[1]: var-lib-kubelet-pods-e580adf9\x2db9b8\x2d4e11\x2db510\x2d31158322de7d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 12 20:46:37.692584 systemd[1]: var-lib-kubelet-pods-e580adf9\x2db9b8\x2d4e11\x2db510\x2d31158322de7d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 12 20:46:37.692662 systemd[1]: var-lib-kubelet-pods-e580adf9\x2db9b8\x2d4e11\x2db510\x2d31158322de7d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2psxj.mount: Deactivated successfully. Nov 12 20:46:37.692747 systemd[1]: var-lib-kubelet-pods-c8e5e584\x2d3015\x2d4571\x2d84ad\x2d53b55274225d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d852tm.mount: Deactivated successfully. Nov 12 20:46:38.624389 sshd[4356]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:38.636314 systemd[1]: sshd@29-10.0.0.51:22-10.0.0.1:54358.service: Deactivated successfully. Nov 12 20:46:38.638404 systemd[1]: session-30.scope: Deactivated successfully. Nov 12 20:46:38.640158 systemd-logind[1455]: Session 30 logged out. Waiting for processes to exit. Nov 12 20:46:38.648385 systemd[1]: Started sshd@30-10.0.0.51:22-10.0.0.1:46006.service - OpenSSH per-connection server daemon (10.0.0.1:46006). Nov 12 20:46:38.649424 systemd-logind[1455]: Removed session 30. Nov 12 20:46:38.680869 sshd[4520]: Accepted publickey for core from 10.0.0.1 port 46006 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:38.682270 sshd[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:38.687235 systemd-logind[1455]: New session 31 of user core. Nov 12 20:46:38.696294 systemd[1]: Started session-31.scope - Session 31 of User core. 
Nov 12 20:46:38.963755 kubelet[2634]: I1112 20:46:38.963627 2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c8e5e584-3015-4571-84ad-53b55274225d" path="/var/lib/kubelet/pods/c8e5e584-3015-4571-84ad-53b55274225d/volumes" Nov 12 20:46:38.964619 kubelet[2634]: I1112 20:46:38.964248 2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e580adf9-b9b8-4e11-b510-31158322de7d" path="/var/lib/kubelet/pods/e580adf9-b9b8-4e11-b510-31158322de7d/volumes" Nov 12 20:46:40.025421 kubelet[2634]: E1112 20:46:40.025373 2634 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 20:46:40.248166 sshd[4520]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:40.263437 systemd[1]: sshd@30-10.0.0.51:22-10.0.0.1:46006.service: Deactivated successfully. Nov 12 20:46:40.267637 systemd[1]: session-31.scope: Deactivated successfully. Nov 12 20:46:40.270384 systemd-logind[1455]: Session 31 logged out. Waiting for processes to exit. 
Nov 12 20:46:40.280675 kubelet[2634]: I1112 20:46:40.280520 2634 topology_manager.go:215] "Topology Admit Handler" podUID="b52abe0f-7ba6-44f5-ba4e-c0c65560df6e" podNamespace="kube-system" podName="cilium-5w7v8" Nov 12 20:46:40.280675 kubelet[2634]: E1112 20:46:40.280588 2634 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e580adf9-b9b8-4e11-b510-31158322de7d" containerName="apply-sysctl-overwrites" Nov 12 20:46:40.280675 kubelet[2634]: E1112 20:46:40.280597 2634 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e580adf9-b9b8-4e11-b510-31158322de7d" containerName="mount-bpf-fs" Nov 12 20:46:40.280675 kubelet[2634]: E1112 20:46:40.280606 2634 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c8e5e584-3015-4571-84ad-53b55274225d" containerName="cilium-operator" Nov 12 20:46:40.280675 kubelet[2634]: E1112 20:46:40.280613 2634 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e580adf9-b9b8-4e11-b510-31158322de7d" containerName="mount-cgroup" Nov 12 20:46:40.280675 kubelet[2634]: E1112 20:46:40.280620 2634 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e580adf9-b9b8-4e11-b510-31158322de7d" containerName="clean-cilium-state" Nov 12 20:46:40.280675 kubelet[2634]: E1112 20:46:40.280629 2634 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e580adf9-b9b8-4e11-b510-31158322de7d" containerName="cilium-agent" Nov 12 20:46:40.280675 kubelet[2634]: I1112 20:46:40.280656 2634 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e5e584-3015-4571-84ad-53b55274225d" containerName="cilium-operator" Nov 12 20:46:40.280675 kubelet[2634]: I1112 20:46:40.280663 2634 memory_manager.go:354] "RemoveStaleState removing state" podUID="e580adf9-b9b8-4e11-b510-31158322de7d" containerName="cilium-agent" Nov 12 20:46:40.282582 systemd[1]: Started sshd@31-10.0.0.51:22-10.0.0.1:46022.service - OpenSSH per-connection server daemon (10.0.0.1:46022). 
Nov 12 20:46:40.284350 systemd-logind[1455]: Removed session 31. Nov 12 20:46:40.309860 systemd[1]: Created slice kubepods-burstable-podb52abe0f_7ba6_44f5_ba4e_c0c65560df6e.slice - libcontainer container kubepods-burstable-podb52abe0f_7ba6_44f5_ba4e_c0c65560df6e.slice. Nov 12 20:46:40.323991 sshd[4533]: Accepted publickey for core from 10.0.0.1 port 46022 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:40.326224 sshd[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:40.331709 systemd-logind[1455]: New session 32 of user core. Nov 12 20:46:40.342500 systemd[1]: Started session-32.scope - Session 32 of User core. Nov 12 20:46:40.397855 sshd[4533]: pam_unix(sshd:session): session closed for user core Nov 12 20:46:40.412147 systemd[1]: sshd@31-10.0.0.51:22-10.0.0.1:46022.service: Deactivated successfully. Nov 12 20:46:40.414655 systemd[1]: session-32.scope: Deactivated successfully. Nov 12 20:46:40.416349 systemd-logind[1455]: Session 32 logged out. Waiting for processes to exit. Nov 12 20:46:40.431786 systemd[1]: Started sshd@32-10.0.0.51:22-10.0.0.1:46024.service - OpenSSH per-connection server daemon (10.0.0.1:46024). Nov 12 20:46:40.433283 systemd-logind[1455]: Removed session 32. Nov 12 20:46:40.465850 sshd[4541]: Accepted publickey for core from 10.0.0.1 port 46024 ssh2: RSA SHA256:7Gg0IxBYYm9puJnIMJkmYX0T1TavREi9Ze4ei1mlShg Nov 12 20:46:40.467637 sshd[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:46:40.473484 systemd-logind[1455]: New session 33 of user core. 
Nov 12 20:46:40.479787 kubelet[2634]: I1112 20:46:40.479727 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-lib-modules\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.479940 kubelet[2634]: I1112 20:46:40.479807 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-clustermesh-secrets\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.479940 kubelet[2634]: I1112 20:46:40.479884 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-cilium-run\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.479999 kubelet[2634]: I1112 20:46:40.479955 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-bpf-maps\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.480023 kubelet[2634]: I1112 20:46:40.480006 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-host-proc-sys-net\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.480072 kubelet[2634]: I1112 20:46:40.480056 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-host-proc-sys-kernel\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.480233 kubelet[2634]: I1112 20:46:40.480192 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcpzn\" (UniqueName: \"kubernetes.io/projected/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-kube-api-access-kcpzn\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.480418 kubelet[2634]: I1112 20:46:40.480269 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-cni-path\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.480418 kubelet[2634]: I1112 20:46:40.480300 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-hubble-tls\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.480418 kubelet[2634]: I1112 20:46:40.480327 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-hostproc\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.480418 kubelet[2634]: I1112 20:46:40.480349 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-etc-cni-netd\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.480418 kubelet[2634]: I1112 20:46:40.480372 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-cilium-config-path\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.480418 kubelet[2634]: I1112 20:46:40.480400 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-cilium-ipsec-secrets\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.480615 kubelet[2634]: I1112 20:46:40.480424 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-cilium-cgroup\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.480615 kubelet[2634]: I1112 20:46:40.480450 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b52abe0f-7ba6-44f5-ba4e-c0c65560df6e-xtables-lock\") pod \"cilium-5w7v8\" (UID: \"b52abe0f-7ba6-44f5-ba4e-c0c65560df6e\") " pod="kube-system/cilium-5w7v8"
Nov 12 20:46:40.480436 systemd[1]: Started session-33.scope - Session 33 of User core.
Nov 12 20:46:40.615532 kubelet[2634]: E1112 20:46:40.615472 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:46:40.617467 containerd[1470]: time="2024-11-12T20:46:40.617406242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5w7v8,Uid:b52abe0f-7ba6-44f5-ba4e-c0c65560df6e,Namespace:kube-system,Attempt:0,}"
Nov 12 20:46:40.779267 containerd[1470]: time="2024-11-12T20:46:40.779116509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:46:40.779267 containerd[1470]: time="2024-11-12T20:46:40.779183375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:46:40.779267 containerd[1470]: time="2024-11-12T20:46:40.779207722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:46:40.779540 containerd[1470]: time="2024-11-12T20:46:40.779343718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:46:40.809259 systemd[1]: Started cri-containerd-ad4e66d5b9bcc15050c6b6a9993702a4f34e151f49325b499e4f775400a34055.scope - libcontainer container ad4e66d5b9bcc15050c6b6a9993702a4f34e151f49325b499e4f775400a34055.
Nov 12 20:46:40.832940 containerd[1470]: time="2024-11-12T20:46:40.832885010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5w7v8,Uid:b52abe0f-7ba6-44f5-ba4e-c0c65560df6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad4e66d5b9bcc15050c6b6a9993702a4f34e151f49325b499e4f775400a34055\""
Nov 12 20:46:40.833713 kubelet[2634]: E1112 20:46:40.833683 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:46:40.835568 containerd[1470]: time="2024-11-12T20:46:40.835539934Z" level=info msg="CreateContainer within sandbox \"ad4e66d5b9bcc15050c6b6a9993702a4f34e151f49325b499e4f775400a34055\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 12 20:46:40.854379 containerd[1470]: time="2024-11-12T20:46:40.854320304Z" level=info msg="CreateContainer within sandbox \"ad4e66d5b9bcc15050c6b6a9993702a4f34e151f49325b499e4f775400a34055\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"350188afc6e01e0645f7032d1485e6b15a0e7a109758a1b47cf96a66ac8c8830\""
Nov 12 20:46:40.855034 containerd[1470]: time="2024-11-12T20:46:40.854997142Z" level=info msg="StartContainer for \"350188afc6e01e0645f7032d1485e6b15a0e7a109758a1b47cf96a66ac8c8830\""
Nov 12 20:46:40.887339 systemd[1]: Started cri-containerd-350188afc6e01e0645f7032d1485e6b15a0e7a109758a1b47cf96a66ac8c8830.scope - libcontainer container 350188afc6e01e0645f7032d1485e6b15a0e7a109758a1b47cf96a66ac8c8830.
Nov 12 20:46:40.914651 containerd[1470]: time="2024-11-12T20:46:40.914598660Z" level=info msg="StartContainer for \"350188afc6e01e0645f7032d1485e6b15a0e7a109758a1b47cf96a66ac8c8830\" returns successfully"
Nov 12 20:46:40.921594 systemd[1]: cri-containerd-350188afc6e01e0645f7032d1485e6b15a0e7a109758a1b47cf96a66ac8c8830.scope: Deactivated successfully.
Nov 12 20:46:40.957194 containerd[1470]: time="2024-11-12T20:46:40.957118095Z" level=info msg="shim disconnected" id=350188afc6e01e0645f7032d1485e6b15a0e7a109758a1b47cf96a66ac8c8830 namespace=k8s.io
Nov 12 20:46:40.957194 containerd[1470]: time="2024-11-12T20:46:40.957177367Z" level=warning msg="cleaning up after shim disconnected" id=350188afc6e01e0645f7032d1485e6b15a0e7a109758a1b47cf96a66ac8c8830 namespace=k8s.io
Nov 12 20:46:40.957194 containerd[1470]: time="2024-11-12T20:46:40.957185513Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:46:41.522113 kubelet[2634]: E1112 20:46:41.522063 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:46:41.523976 containerd[1470]: time="2024-11-12T20:46:41.523941464Z" level=info msg="CreateContainer within sandbox \"ad4e66d5b9bcc15050c6b6a9993702a4f34e151f49325b499e4f775400a34055\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 12 20:46:41.880257 containerd[1470]: time="2024-11-12T20:46:41.880192290Z" level=info msg="CreateContainer within sandbox \"ad4e66d5b9bcc15050c6b6a9993702a4f34e151f49325b499e4f775400a34055\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"80adf9efe6859fefd59862ebfeb453f6f2553eaa98bf15821ee254434f24bcda\""
Nov 12 20:46:41.881384 containerd[1470]: time="2024-11-12T20:46:41.881352291Z" level=info msg="StartContainer for \"80adf9efe6859fefd59862ebfeb453f6f2553eaa98bf15821ee254434f24bcda\""
Nov 12 20:46:41.913301 systemd[1]: Started cri-containerd-80adf9efe6859fefd59862ebfeb453f6f2553eaa98bf15821ee254434f24bcda.scope - libcontainer container 80adf9efe6859fefd59862ebfeb453f6f2553eaa98bf15821ee254434f24bcda.
Nov 12 20:46:41.946221 containerd[1470]: time="2024-11-12T20:46:41.946141936Z" level=info msg="StartContainer for \"80adf9efe6859fefd59862ebfeb453f6f2553eaa98bf15821ee254434f24bcda\" returns successfully"
Nov 12 20:46:41.948133 systemd[1]: cri-containerd-80adf9efe6859fefd59862ebfeb453f6f2553eaa98bf15821ee254434f24bcda.scope: Deactivated successfully.
Nov 12 20:46:41.975167 containerd[1470]: time="2024-11-12T20:46:41.975067962Z" level=info msg="shim disconnected" id=80adf9efe6859fefd59862ebfeb453f6f2553eaa98bf15821ee254434f24bcda namespace=k8s.io
Nov 12 20:46:41.975167 containerd[1470]: time="2024-11-12T20:46:41.975160848Z" level=warning msg="cleaning up after shim disconnected" id=80adf9efe6859fefd59862ebfeb453f6f2553eaa98bf15821ee254434f24bcda namespace=k8s.io
Nov 12 20:46:41.975167 containerd[1470]: time="2024-11-12T20:46:41.975172770Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:46:42.525845 kubelet[2634]: E1112 20:46:42.525799 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:46:42.527786 containerd[1470]: time="2024-11-12T20:46:42.527752759Z" level=info msg="CreateContainer within sandbox \"ad4e66d5b9bcc15050c6b6a9993702a4f34e151f49325b499e4f775400a34055\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 12 20:46:42.545867 containerd[1470]: time="2024-11-12T20:46:42.545787163Z" level=info msg="CreateContainer within sandbox \"ad4e66d5b9bcc15050c6b6a9993702a4f34e151f49325b499e4f775400a34055\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4cb8780c1f8179af91fdf54782c969fb2205774f06585d99ff1f69e4b9983b1f\""
Nov 12 20:46:42.546308 containerd[1470]: time="2024-11-12T20:46:42.546286126Z" level=info msg="StartContainer for \"4cb8780c1f8179af91fdf54782c969fb2205774f06585d99ff1f69e4b9983b1f\""
Nov 12 20:46:42.577243 systemd[1]: Started cri-containerd-4cb8780c1f8179af91fdf54782c969fb2205774f06585d99ff1f69e4b9983b1f.scope - libcontainer container 4cb8780c1f8179af91fdf54782c969fb2205774f06585d99ff1f69e4b9983b1f.
Nov 12 20:46:42.589770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80adf9efe6859fefd59862ebfeb453f6f2553eaa98bf15821ee254434f24bcda-rootfs.mount: Deactivated successfully.
Nov 12 20:46:42.607610 containerd[1470]: time="2024-11-12T20:46:42.607568486Z" level=info msg="StartContainer for \"4cb8780c1f8179af91fdf54782c969fb2205774f06585d99ff1f69e4b9983b1f\" returns successfully"
Nov 12 20:46:42.608867 systemd[1]: cri-containerd-4cb8780c1f8179af91fdf54782c969fb2205774f06585d99ff1f69e4b9983b1f.scope: Deactivated successfully.
Nov 12 20:46:42.628944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cb8780c1f8179af91fdf54782c969fb2205774f06585d99ff1f69e4b9983b1f-rootfs.mount: Deactivated successfully.
Nov 12 20:46:42.634680 containerd[1470]: time="2024-11-12T20:46:42.634615149Z" level=info msg="shim disconnected" id=4cb8780c1f8179af91fdf54782c969fb2205774f06585d99ff1f69e4b9983b1f namespace=k8s.io
Nov 12 20:46:42.634788 containerd[1470]: time="2024-11-12T20:46:42.634681544Z" level=warning msg="cleaning up after shim disconnected" id=4cb8780c1f8179af91fdf54782c969fb2205774f06585d99ff1f69e4b9983b1f namespace=k8s.io
Nov 12 20:46:42.634788 containerd[1470]: time="2024-11-12T20:46:42.634692134Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:46:43.529561 kubelet[2634]: E1112 20:46:43.529519 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:46:43.532543 containerd[1470]: time="2024-11-12T20:46:43.531639192Z" level=info msg="CreateContainer within sandbox \"ad4e66d5b9bcc15050c6b6a9993702a4f34e151f49325b499e4f775400a34055\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 12 20:46:43.705626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2563682799.mount: Deactivated successfully.
Nov 12 20:46:43.711443 containerd[1470]: time="2024-11-12T20:46:43.711374018Z" level=info msg="CreateContainer within sandbox \"ad4e66d5b9bcc15050c6b6a9993702a4f34e151f49325b499e4f775400a34055\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2bf4a43f0d9ed0f5a6a09d2c51c1c5b803b0603f1d025c9ed74e6c00f7e16ddc\""
Nov 12 20:46:43.712214 containerd[1470]: time="2024-11-12T20:46:43.712161355Z" level=info msg="StartContainer for \"2bf4a43f0d9ed0f5a6a09d2c51c1c5b803b0603f1d025c9ed74e6c00f7e16ddc\""
Nov 12 20:46:43.745277 systemd[1]: Started cri-containerd-2bf4a43f0d9ed0f5a6a09d2c51c1c5b803b0603f1d025c9ed74e6c00f7e16ddc.scope - libcontainer container 2bf4a43f0d9ed0f5a6a09d2c51c1c5b803b0603f1d025c9ed74e6c00f7e16ddc.
Nov 12 20:46:43.770603 systemd[1]: cri-containerd-2bf4a43f0d9ed0f5a6a09d2c51c1c5b803b0603f1d025c9ed74e6c00f7e16ddc.scope: Deactivated successfully.
Nov 12 20:46:43.772648 containerd[1470]: time="2024-11-12T20:46:43.772605843Z" level=info msg="StartContainer for \"2bf4a43f0d9ed0f5a6a09d2c51c1c5b803b0603f1d025c9ed74e6c00f7e16ddc\" returns successfully"
Nov 12 20:46:43.793632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bf4a43f0d9ed0f5a6a09d2c51c1c5b803b0603f1d025c9ed74e6c00f7e16ddc-rootfs.mount: Deactivated successfully.
Nov 12 20:46:43.799924 containerd[1470]: time="2024-11-12T20:46:43.799858880Z" level=info msg="shim disconnected" id=2bf4a43f0d9ed0f5a6a09d2c51c1c5b803b0603f1d025c9ed74e6c00f7e16ddc namespace=k8s.io
Nov 12 20:46:43.800359 containerd[1470]: time="2024-11-12T20:46:43.800183784Z" level=warning msg="cleaning up after shim disconnected" id=2bf4a43f0d9ed0f5a6a09d2c51c1c5b803b0603f1d025c9ed74e6c00f7e16ddc namespace=k8s.io
Nov 12 20:46:43.800359 containerd[1470]: time="2024-11-12T20:46:43.800198912Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:46:44.533622 kubelet[2634]: E1112 20:46:44.533582 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:46:44.536486 containerd[1470]: time="2024-11-12T20:46:44.536439492Z" level=info msg="CreateContainer within sandbox \"ad4e66d5b9bcc15050c6b6a9993702a4f34e151f49325b499e4f775400a34055\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 12 20:46:44.688603 containerd[1470]: time="2024-11-12T20:46:44.688524135Z" level=info msg="CreateContainer within sandbox \"ad4e66d5b9bcc15050c6b6a9993702a4f34e151f49325b499e4f775400a34055\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0d771c9c0dccb5dcccae0ea4b2b25507918ff100f32a3598aa83e36e4a585db9\""
Nov 12 20:46:44.689238 containerd[1470]: time="2024-11-12T20:46:44.689209539Z" level=info msg="StartContainer for \"0d771c9c0dccb5dcccae0ea4b2b25507918ff100f32a3598aa83e36e4a585db9\""
Nov 12 20:46:44.728317 systemd[1]: Started cri-containerd-0d771c9c0dccb5dcccae0ea4b2b25507918ff100f32a3598aa83e36e4a585db9.scope - libcontainer container 0d771c9c0dccb5dcccae0ea4b2b25507918ff100f32a3598aa83e36e4a585db9.
Nov 12 20:46:44.758740 containerd[1470]: time="2024-11-12T20:46:44.758645573Z" level=info msg="StartContainer for \"0d771c9c0dccb5dcccae0ea4b2b25507918ff100f32a3598aa83e36e4a585db9\" returns successfully"
Nov 12 20:46:45.221151 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 12 20:46:45.539016 kubelet[2634]: E1112 20:46:45.538889 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:46:45.679030 kubelet[2634]: I1112 20:46:45.678961 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5w7v8" podStartSLOduration=5.678908623 podStartE2EDuration="5.678908623s" podCreationTimestamp="2024-11-12 20:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:46:45.678645596 +0000 UTC m=+110.810927357" watchObservedRunningTime="2024-11-12 20:46:45.678908623 +0000 UTC m=+110.811190353"
Nov 12 20:46:46.616824 kubelet[2634]: E1112 20:46:46.616778 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:46:49.016862 systemd-networkd[1396]: lxc_health: Link UP
Nov 12 20:46:49.024250 systemd-networkd[1396]: lxc_health: Gained carrier
Nov 12 20:46:50.067278 systemd-networkd[1396]: lxc_health: Gained IPv6LL
Nov 12 20:46:50.617925 kubelet[2634]: E1112 20:46:50.617753 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:46:51.550795 kubelet[2634]: E1112 20:46:51.550750 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:46:52.558212 kubelet[2634]: E1112 20:46:52.557984 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:46:54.968752 containerd[1470]: time="2024-11-12T20:46:54.968608895Z" level=info msg="StopPodSandbox for \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\""
Nov 12 20:46:54.968752 containerd[1470]: time="2024-11-12T20:46:54.968737839Z" level=info msg="TearDown network for sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" successfully"
Nov 12 20:46:54.969258 containerd[1470]: time="2024-11-12T20:46:54.968752747Z" level=info msg="StopPodSandbox for \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" returns successfully"
Nov 12 20:46:54.969258 containerd[1470]: time="2024-11-12T20:46:54.969171988Z" level=info msg="RemovePodSandbox for \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\""
Nov 12 20:46:54.969258 containerd[1470]: time="2024-11-12T20:46:54.969199048Z" level=info msg="Forcibly stopping sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\""
Nov 12 20:46:54.969461 containerd[1470]: time="2024-11-12T20:46:54.969265935Z" level=info msg="TearDown network for sandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" successfully"
Nov 12 20:46:54.975020 containerd[1470]: time="2024-11-12T20:46:54.974955319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:46:54.975169 containerd[1470]: time="2024-11-12T20:46:54.975051711Z" level=info msg="RemovePodSandbox \"c29f3cf7ff0c49a7cea37fa9d155f55446630ff77f24906ff8c87139382bedb6\" returns successfully"
Nov 12 20:46:54.975742 containerd[1470]: time="2024-11-12T20:46:54.975705293Z" level=info msg="StopPodSandbox for \"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\""
Nov 12 20:46:54.975817 containerd[1470]: time="2024-11-12T20:46:54.975800944Z" level=info msg="TearDown network for sandbox \"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\" successfully"
Nov 12 20:46:54.975817 containerd[1470]: time="2024-11-12T20:46:54.975811724Z" level=info msg="StopPodSandbox for \"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\" returns successfully"
Nov 12 20:46:54.976239 containerd[1470]: time="2024-11-12T20:46:54.976213742Z" level=info msg="RemovePodSandbox for \"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\""
Nov 12 20:46:54.976299 containerd[1470]: time="2024-11-12T20:46:54.976245632Z" level=info msg="Forcibly stopping sandbox \"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\""
Nov 12 20:46:54.976347 containerd[1470]: time="2024-11-12T20:46:54.976305886Z" level=info msg="TearDown network for sandbox \"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\" successfully"
Nov 12 20:46:54.980006 containerd[1470]: time="2024-11-12T20:46:54.979959539Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:46:54.980006 containerd[1470]: time="2024-11-12T20:46:54.980000606Z" level=info msg="RemovePodSandbox \"93880fe14524d9cbf6e54ae978f5c35b1f6318dbcaad09200ab3f33098132fc9\" returns successfully"
Nov 12 20:46:56.006713 sshd[4541]: pam_unix(sshd:session): session closed for user core
Nov 12 20:46:56.010838 systemd[1]: sshd@32-10.0.0.51:22-10.0.0.1:46024.service: Deactivated successfully.
Nov 12 20:46:56.013060 systemd[1]: session-33.scope: Deactivated successfully.
Nov 12 20:46:56.013757 systemd-logind[1455]: Session 33 logged out. Waiting for processes to exit.
Nov 12 20:46:56.014689 systemd-logind[1455]: Removed session 33.