Dec 13 01:35:21.174692 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:35:21.174719 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:35:21.174731 kernel: BIOS-provided physical RAM map:
Dec 13 01:35:21.174737 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 01:35:21.174743 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 01:35:21.174749 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 01:35:21.174757 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 01:35:21.174763 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 01:35:21.174769 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Dec 13 01:35:21.174775 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Dec 13 01:35:21.174787 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Dec 13 01:35:21.174793 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Dec 13 01:35:21.174800 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Dec 13 01:35:21.174806 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Dec 13 01:35:21.174816 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Dec 13 01:35:21.174823 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 01:35:21.174833 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Dec 13 01:35:21.174840 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Dec 13 01:35:21.174846 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 01:35:21.174853 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:35:21.174860 kernel: NX (Execute Disable) protection: active
Dec 13 01:35:21.174867 kernel: APIC: Static calls initialized
Dec 13 01:35:21.174873 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:35:21.174880 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Dec 13 01:35:21.174887 kernel: SMBIOS 2.8 present.
Dec 13 01:35:21.174894 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Dec 13 01:35:21.174900 kernel: Hypervisor detected: KVM
Dec 13 01:35:21.174910 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:35:21.174916 kernel: kvm-clock: using sched offset of 5409438054 cycles
Dec 13 01:35:21.174924 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:35:21.174931 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:35:21.174938 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:35:21.174945 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:35:21.174952 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Dec 13 01:35:21.174959 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 13 01:35:21.174966 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:35:21.174976 kernel: Using GB pages for direct mapping
Dec 13 01:35:21.174983 kernel: Secure boot disabled
Dec 13 01:35:21.174990 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:35:21.174997 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 13 01:35:21.175010 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 01:35:21.175017 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:35:21.175025 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:35:21.175035 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 13 01:35:21.175042 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:35:21.175049 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:35:21.175056 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:35:21.175063 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:35:21.175071 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 01:35:21.175078 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 13 01:35:21.175088 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Dec 13 01:35:21.175095 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 13 01:35:21.175102 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 13 01:35:21.175109 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 13 01:35:21.175116 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 13 01:35:21.175123 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 13 01:35:21.175130 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 13 01:35:21.175162 kernel: No NUMA configuration found
Dec 13 01:35:21.175170 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Dec 13 01:35:21.175180 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Dec 13 01:35:21.175187 kernel: Zone ranges:
Dec 13 01:35:21.175195 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:35:21.175202 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Dec 13 01:35:21.175209 kernel: Normal empty
Dec 13 01:35:21.175216 kernel: Movable zone start for each node
Dec 13 01:35:21.175223 kernel: Early memory node ranges
Dec 13 01:35:21.175230 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 01:35:21.175237 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 13 01:35:21.175247 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 13 01:35:21.175254 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Dec 13 01:35:21.175261 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Dec 13 01:35:21.175268 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Dec 13 01:35:21.175278 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Dec 13 01:35:21.175285 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:35:21.175292 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 01:35:21.175299 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 13 01:35:21.175306 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:35:21.175313 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Dec 13 01:35:21.175323 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 13 01:35:21.175337 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Dec 13 01:35:21.175345 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:35:21.175352 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:35:21.175359 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:35:21.175366 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:35:21.175373 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:35:21.175380 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:35:21.175387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:35:21.175397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:35:21.175404 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:35:21.175411 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:35:21.175418 kernel: TSC deadline timer available
Dec 13 01:35:21.175443 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:35:21.175467 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:35:21.175484 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:35:21.175491 kernel: kvm-guest: setup PV sched yield
Dec 13 01:35:21.175498 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 01:35:21.175509 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:35:21.175516 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:35:21.175524 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:35:21.175531 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 01:35:21.175538 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 01:35:21.175546 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:35:21.175552 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:35:21.175564 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:35:21.175576 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:35:21.175588 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:35:21.175595 kernel: random: crng init done
Dec 13 01:35:21.175602 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:35:21.175609 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:35:21.175616 kernel: Fallback order for Node 0: 0
Dec 13 01:35:21.175624 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Dec 13 01:35:21.175631 kernel: Policy zone: DMA32
Dec 13 01:35:21.175638 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:35:21.175648 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Dec 13 01:35:21.175655 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:35:21.175662 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:35:21.175669 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:35:21.175677 kernel: Dynamic Preempt: voluntary
Dec 13 01:35:21.175692 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:35:21.175707 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:35:21.175715 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:35:21.175722 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:35:21.175730 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:35:21.175737 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:35:21.175745 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:35:21.175755 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:35:21.175763 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:35:21.175773 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:35:21.175780 kernel: Console: colour dummy device 80x25
Dec 13 01:35:21.175787 kernel: printk: console [ttyS0] enabled
Dec 13 01:35:21.175797 kernel: ACPI: Core revision 20230628
Dec 13 01:35:21.175805 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:35:21.175813 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:35:21.175821 kernel: x2apic enabled
Dec 13 01:35:21.175829 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:35:21.175838 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 01:35:21.175846 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 01:35:21.175853 kernel: kvm-guest: setup PV IPIs
Dec 13 01:35:21.175860 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:35:21.175871 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:35:21.175878 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:35:21.175886 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:35:21.175893 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:35:21.175903 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:35:21.175914 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:35:21.175924 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:35:21.175935 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:35:21.175946 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:35:21.175958 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:35:21.175965 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:35:21.175975 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:35:21.175983 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:35:21.175991 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:35:21.175999 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:35:21.176007 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:35:21.176015 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:35:21.176025 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:35:21.176032 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:35:21.176040 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:35:21.176047 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:35:21.176055 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:35:21.176062 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:35:21.176070 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:35:21.176077 kernel: landlock: Up and running.
Dec 13 01:35:21.176084 kernel: SELinux: Initializing.
Dec 13 01:35:21.176097 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:35:21.176108 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:35:21.176118 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:35:21.176126 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:35:21.176147 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:35:21.176155 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:35:21.176163 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:35:21.176170 kernel: ... version: 0
Dec 13 01:35:21.176178 kernel: ... bit width: 48
Dec 13 01:35:21.176189 kernel: ... generic registers: 6
Dec 13 01:35:21.176198 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:35:21.176209 kernel: ... max period: 00007fffffffffff
Dec 13 01:35:21.176220 kernel: ... fixed-purpose events: 0
Dec 13 01:35:21.176230 kernel: ... event mask: 000000000000003f
Dec 13 01:35:21.176237 kernel: signal: max sigframe size: 1776
Dec 13 01:35:21.176245 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:35:21.176253 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:35:21.176260 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:35:21.176272 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:35:21.176283 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 01:35:21.176293 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:35:21.176301 kernel: smpboot: Max logical packages: 1
Dec 13 01:35:21.176312 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:35:21.176319 kernel: devtmpfs: initialized
Dec 13 01:35:21.176335 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:35:21.176343 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 13 01:35:21.176350 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 13 01:35:21.176361 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Dec 13 01:35:21.176368 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 13 01:35:21.176395 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 13 01:35:21.176412 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:35:21.176420 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:35:21.176427 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:35:21.176435 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:35:21.176442 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:35:21.176450 kernel: audit: type=2000 audit(1734053720.566:1): state=initialized audit_enabled=0 res=1
Dec 13 01:35:21.176461 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:35:21.176473 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:35:21.176481 kernel: cpuidle: using governor menu
Dec 13 01:35:21.176488 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:35:21.176496 kernel: dca service started, version 1.12.1
Dec 13 01:35:21.176503 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:35:21.176511 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 01:35:21.176518 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:35:21.176526 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:35:21.176537 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:35:21.176544 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:35:21.176552 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:35:21.176559 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:35:21.176566 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:35:21.176574 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:35:21.176581 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:35:21.176589 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:35:21.176596 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:35:21.176606 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:35:21.176613 kernel: ACPI: Interpreter enabled
Dec 13 01:35:21.176620 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:35:21.176628 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:35:21.176635 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:35:21.176643 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:35:21.176650 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:35:21.176658 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:35:21.176905 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:35:21.177046 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:35:21.177206 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:35:21.177217 kernel: PCI host bridge to bus 0000:00
Dec 13 01:35:21.177378 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:35:21.177497 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:35:21.177614 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:35:21.177733 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:35:21.177851 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:35:21.177964 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Dec 13 01:35:21.178078 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:35:21.178336 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:35:21.178482 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:35:21.178617 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Dec 13 01:35:21.178744 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Dec 13 01:35:21.178869 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Dec 13 01:35:21.178992 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Dec 13 01:35:21.179124 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:35:21.179315 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:35:21.179452 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Dec 13 01:35:21.179585 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Dec 13 01:35:21.179711 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Dec 13 01:35:21.179859 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:35:21.179995 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Dec 13 01:35:21.180120 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Dec 13 01:35:21.180323 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Dec 13 01:35:21.180476 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:35:21.180609 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Dec 13 01:35:21.180733 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Dec 13 01:35:21.180869 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Dec 13 01:35:21.180996 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Dec 13 01:35:21.181177 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:35:21.181308 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:35:21.181460 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:35:21.181594 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Dec 13 01:35:21.181720 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Dec 13 01:35:21.181861 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:35:21.181988 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Dec 13 01:35:21.181999 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:35:21.182007 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:35:21.182015 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:35:21.182027 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:35:21.182035 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:35:21.182042 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:35:21.182050 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:35:21.182058 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:35:21.182065 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:35:21.182073 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:35:21.182080 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:35:21.182088 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:35:21.182098 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:35:21.182106 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:35:21.182113 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:35:21.182121 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:35:21.182129 kernel: iommu: Default domain type: Translated
Dec 13 01:35:21.182150 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:35:21.182170 kernel: efivars: Registered efivars operations
Dec 13 01:35:21.182178 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:35:21.182185 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:35:21.182196 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 13 01:35:21.182204 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Dec 13 01:35:21.182212 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Dec 13 01:35:21.182219 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Dec 13 01:35:21.182364 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:35:21.182492 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:35:21.182619 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:35:21.182629 kernel: vgaarb: loaded
Dec 13 01:35:21.182637 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:35:21.182649 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:35:21.182657 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:35:21.182664 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:35:21.182672 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:35:21.182680 kernel: pnp: PnP ACPI init
Dec 13 01:35:21.182829 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:35:21.182842 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:35:21.182852 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:35:21.182864 kernel: NET: Registered PF_INET protocol family
Dec 13 01:35:21.182872 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:35:21.182880 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:35:21.182887 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:35:21.182895 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:35:21.182902 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:35:21.182910 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:35:21.182917 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:35:21.182925 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:35:21.182935 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:35:21.182943 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:35:21.183070 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Dec 13 01:35:21.183274 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Dec 13 01:35:21.183403 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:35:21.183517 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:35:21.183629 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:35:21.183741 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:35:21.183861 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:35:21.183974 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Dec 13 01:35:21.183984 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:35:21.183992 kernel: Initialise system trusted keyrings
Dec 13 01:35:21.184000 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:35:21.184007 kernel: Key type asymmetric registered
Dec 13 01:35:21.184015 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:35:21.184023 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:35:21.184034 kernel: io scheduler mq-deadline registered
Dec 13 01:35:21.184041 kernel: io scheduler kyber registered
Dec 13 01:35:21.184049 kernel: io scheduler bfq registered
Dec 13 01:35:21.184056 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:35:21.184065 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:35:21.184072 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:35:21.184080 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:35:21.184087 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:35:21.184095 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:35:21.184103 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:35:21.184113 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:35:21.184121 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:35:21.184128 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:35:21.184298 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:35:21.184430 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:35:21.184548 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:35:20 UTC (1734053720)
Dec 13 01:35:21.184664 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:35:21.184679 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:35:21.184687 kernel: efifb: probing for efifb
Dec 13 01:35:21.184695 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Dec 13 01:35:21.184702 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Dec 13 01:35:21.184710 kernel: efifb: scrolling: redraw
Dec 13 01:35:21.184718 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Dec 13 01:35:21.184725 kernel: hpet: Lost 1 RTC interrupts
Dec 13 01:35:21.184750 kernel: Console: switching to colour frame buffer device 100x37
Dec 13 01:35:21.184761 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:35:21.184771 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:35:21.184779 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 01:35:21.184787 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:35:21.184795 kernel: Segment Routing with IPv6
Dec 13 01:35:21.184802 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:35:21.184810 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:35:21.184818 kernel: Key type dns_resolver registered
Dec 13 01:35:21.184826 kernel: IPI shorthand broadcast: enabled
Dec 13 01:35:21.184834 kernel: sched_clock: Marking stable (1276003382, 142892692)->(1553857397, -134961323)
Dec 13 01:35:21.184844 kernel: registered taskstats version 1
Dec 13 01:35:21.184852 kernel: Loading compiled-in X.509 certificates
Dec 13 01:35:21.184860 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:35:21.184868 kernel: Key type .fscrypt registered
Dec 13 01:35:21.184876 kernel: Key type fscrypt-provisioning registered
Dec 13 01:35:21.184884 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:35:21.184892 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:35:21.184900 kernel: ima: No architecture policies found
Dec 13 01:35:21.184908 kernel: clk: Disabling unused clocks
Dec 13 01:35:21.184919 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:35:21.184927 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:35:21.184935 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:35:21.184943 kernel: Run /init as init process
Dec 13 01:35:21.184950 kernel: with arguments:
Dec 13 01:35:21.184958 kernel: /init
Dec 13 01:35:21.184966 kernel: with environment:
Dec 13 01:35:21.184974 kernel: HOME=/
Dec 13 01:35:21.184982 kernel: TERM=linux
Dec 13 01:35:21.184992 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:35:21.185005 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:35:21.185016 systemd[1]: Detected virtualization kvm.
Dec 13 01:35:21.185025 systemd[1]: Detected architecture x86-64.
Dec 13 01:35:21.185033 systemd[1]: Running in initrd.
Dec 13 01:35:21.185044 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:35:21.185052 systemd[1]: Hostname set to <localhost>.
Dec 13 01:35:21.185060 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:35:21.185069 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:35:21.185077 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:35:21.185085 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:35:21.185095 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:35:21.185106 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:35:21.185114 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:35:21.185123 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:35:21.185192 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:35:21.185202 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:35:21.185211 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:35:21.185219 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:35:21.185231 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:35:21.185240 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:35:21.185248 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:35:21.185256 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:35:21.185265 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:35:21.185273 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:35:21.185281 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:35:21.185290 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:35:21.185301 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:35:21.185309 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:35:21.185317 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:35:21.185333 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:35:21.185342 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:35:21.185351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:35:21.185359 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:35:21.185368 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:35:21.185376 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:35:21.185387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:35:21.185395 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:35:21.185404 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:35:21.185412 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:35:21.185440 systemd-journald[193]: Collecting audit messages is disabled.
Dec 13 01:35:21.185462 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:35:21.185472 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:35:21.185480 systemd-journald[193]: Journal started
Dec 13 01:35:21.185501 systemd-journald[193]: Runtime Journal (/run/log/journal/f3fd4c6153c94f1aa2a398a83d7f8d91) is 6.0M, max 48.3M, 42.2M free.
Dec 13 01:35:21.174292 systemd-modules-load[194]: Inserted module 'overlay'
Dec 13 01:35:21.204156 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:35:21.205319 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:35:21.215181 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:35:21.216850 systemd-modules-load[194]: Inserted module 'br_netfilter'
Dec 13 01:35:21.217932 kernel: Bridge firewalling registered
Dec 13 01:35:21.218127 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:35:21.222014 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:35:21.224800 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:35:21.227310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:35:21.230090 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:35:21.235760 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:35:21.239373 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:35:21.242065 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:35:21.256773 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:35:21.262487 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:35:21.265978 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:35:21.270285 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:35:21.289221 dracut-cmdline[231]: dracut-dracut-053
Dec 13 01:35:21.292596 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:35:21.299128 systemd-resolved[227]: Positive Trust Anchors:
Dec 13 01:35:21.299163 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:35:21.299195 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:35:21.301934 systemd-resolved[227]: Defaulting to hostname 'linux'.
Dec 13 01:35:21.303299 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:35:21.309485 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:35:21.400206 kernel: SCSI subsystem initialized
Dec 13 01:35:21.410193 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:35:21.423216 kernel: iscsi: registered transport (tcp)
Dec 13 01:35:21.447189 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:35:21.447294 kernel: QLogic iSCSI HBA Driver
Dec 13 01:35:21.511177 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:35:21.530461 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:35:21.562103 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:35:21.562215 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:35:21.562231 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:35:21.609195 kernel: raid6: avx2x4 gen() 27619 MB/s
Dec 13 01:35:21.626187 kernel: raid6: avx2x2 gen() 29091 MB/s
Dec 13 01:35:21.643581 kernel: raid6: avx2x1 gen() 20459 MB/s
Dec 13 01:35:21.643680 kernel: raid6: using algorithm avx2x2 gen() 29091 MB/s
Dec 13 01:35:21.661485 kernel: raid6: .... xor() 15106 MB/s, rmw enabled
Dec 13 01:35:21.661547 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:35:21.686249 kernel: xor: automatically using best checksumming function avx
Dec 13 01:35:21.861180 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:35:21.879524 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:35:21.895500 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:35:21.910134 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Dec 13 01:35:21.915954 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:35:21.919292 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:35:21.959810 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Dec 13 01:35:22.005164 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:35:22.018427 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:35:22.100398 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:35:22.110515 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:35:22.124841 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:35:22.127158 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:35:22.130737 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:35:22.133830 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:35:22.140174 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 01:35:22.180256 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:35:22.180508 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:35:22.180526 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:35:22.180542 kernel: GPT:9289727 != 19775487
Dec 13 01:35:22.180556 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:35:22.180570 kernel: GPT:9289727 != 19775487
Dec 13 01:35:22.180583 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:35:22.180598 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:35:22.180612 kernel: libata version 3.00 loaded.
Dec 13 01:35:22.142492 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:35:22.160460 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:35:22.184822 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:35:22.213285 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:35:22.213336 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:35:22.213575 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:35:22.213775 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:35:22.213792 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:35:22.213810 kernel: scsi host0: ahci
Dec 13 01:35:22.214045 kernel: scsi host1: ahci
Dec 13 01:35:22.214307 kernel: scsi host2: ahci
Dec 13 01:35:22.214524 kernel: scsi host3: ahci
Dec 13 01:35:22.214790 kernel: scsi host4: ahci
Dec 13 01:35:22.215010 kernel: scsi host5: ahci
Dec 13 01:35:22.215253 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Dec 13 01:35:22.215271 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Dec 13 01:35:22.215287 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Dec 13 01:35:22.215312 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Dec 13 01:35:22.215333 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Dec 13 01:35:22.215348 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Dec 13 01:35:22.181389 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:35:22.182976 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:35:22.222015 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (466)
Dec 13 01:35:22.188797 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:35:22.190292 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:35:22.190492 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:35:22.193919 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:35:22.231290 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460)
Dec 13 01:35:22.204116 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:35:22.251546 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:35:22.268089 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:35:22.273719 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:35:22.275281 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:35:22.284869 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:35:22.298415 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:35:22.299810 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:35:22.299883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:35:22.302699 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:35:22.309866 disk-uuid[555]: Primary Header is updated.
Dec 13 01:35:22.309866 disk-uuid[555]: Secondary Entries is updated.
Dec 13 01:35:22.309866 disk-uuid[555]: Secondary Header is updated.
Dec 13 01:35:22.313493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:35:22.304961 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:35:22.318163 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:35:22.324172 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:35:22.329829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:35:22.337435 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:35:22.374961 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:35:22.525960 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:35:22.526062 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:35:22.526080 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:35:22.527607 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:35:22.528160 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:35:22.529163 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:35:22.529181 kernel: ata3.00: applying bridge limits
Dec 13 01:35:22.530160 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 01:35:22.531170 kernel: ata3.00: configured for UDMA/100
Dec 13 01:35:22.532172 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:35:22.589191 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:35:22.602246 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:35:22.602277 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:35:23.323190 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:35:23.324031 disk-uuid[557]: The operation has completed successfully.
Dec 13 01:35:23.360447 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:35:23.360600 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:35:23.411373 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:35:23.419384 sh[598]: Success
Dec 13 01:35:23.466174 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:35:23.508940 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:35:23.517924 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:35:23.521493 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:35:23.535160 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:35:23.535215 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:35:23.535236 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:35:23.537285 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:35:23.537308 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:35:23.550603 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:35:23.551666 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:35:23.560567 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:35:23.564374 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:35:23.580626 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:35:23.580691 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:35:23.580703 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:35:23.585163 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:35:23.595387 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:35:23.597194 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:35:23.696303 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:35:23.707292 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:35:23.740972 systemd-networkd[776]: lo: Link UP
Dec 13 01:35:23.740982 systemd-networkd[776]: lo: Gained carrier
Dec 13 01:35:23.745742 systemd-networkd[776]: Enumeration completed
Dec 13 01:35:23.745856 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:35:23.746798 systemd[1]: Reached target network.target - Network.
Dec 13 01:35:23.750987 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:35:23.750999 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:35:23.755434 systemd-networkd[776]: eth0: Link UP
Dec 13 01:35:23.755444 systemd-networkd[776]: eth0: Gained carrier
Dec 13 01:35:23.755454 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:35:23.759322 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:35:23.766519 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:35:23.771364 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:35:23.824509 ignition[780]: Ignition 2.19.0
Dec 13 01:35:23.824521 ignition[780]: Stage: fetch-offline
Dec 13 01:35:23.824573 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:35:23.824585 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:35:23.824689 ignition[780]: parsed url from cmdline: ""
Dec 13 01:35:23.824693 ignition[780]: no config URL provided
Dec 13 01:35:23.824698 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:35:23.824708 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:35:23.824739 ignition[780]: op(1): [started] loading QEMU firmware config module
Dec 13 01:35:23.824745 ignition[780]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:35:23.839069 ignition[780]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:35:23.879982 ignition[780]: parsing config with SHA512: 7e321076924e830371e14a23f6b2d3b089ac67d864c338695b092d8609300752483c88fb2cca61074f573d71682d43bfc0c3c137a541fb3d0dacaaa435705fa8
Dec 13 01:35:23.886029 unknown[780]: fetched base config from "system"
Dec 13 01:35:23.886051 unknown[780]: fetched user config from "qemu"
Dec 13 01:35:23.886596 ignition[780]: fetch-offline: fetch-offline passed
Dec 13 01:35:23.886687 ignition[780]: Ignition finished successfully
Dec 13 01:35:23.892481 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:35:23.894218 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:35:23.911512 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:35:23.928068 ignition[790]: Ignition 2.19.0
Dec 13 01:35:23.928080 ignition[790]: Stage: kargs
Dec 13 01:35:23.928311 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:35:23.928324 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:35:23.933562 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:35:23.929325 ignition[790]: kargs: kargs passed
Dec 13 01:35:23.929375 ignition[790]: Ignition finished successfully
Dec 13 01:35:23.947665 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:35:23.963047 ignition[799]: Ignition 2.19.0
Dec 13 01:35:23.963061 ignition[799]: Stage: disks
Dec 13 01:35:23.963314 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:35:23.963330 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:35:23.967033 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:35:23.964448 ignition[799]: disks: disks passed
Dec 13 01:35:23.968511 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:35:23.964512 ignition[799]: Ignition finished successfully
Dec 13 01:35:23.970523 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:35:23.972840 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:35:23.974147 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:35:23.976771 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:35:23.987393 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:35:24.004221 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:35:24.013121 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:35:24.022347 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:35:24.150170 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:35:24.150704 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:35:24.152727 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:35:24.164335 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:35:24.167065 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:35:24.167866 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:35:24.167924 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:35:24.177554 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (817)
Dec 13 01:35:24.167953 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:35:24.183232 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:35:24.183296 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:35:24.183313 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:35:24.178693 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:35:24.184310 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:35:24.189172 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:35:24.191318 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:35:24.247957 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:35:24.255624 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:35:24.262111 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:35:24.270113 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:35:24.459671 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:35:24.479421 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:35:24.481779 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:35:24.493382 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:35:24.525639 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:35:24.534329 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:35:24.605433 ignition[932]: INFO : Ignition 2.19.0
Dec 13 01:35:24.605433 ignition[932]: INFO : Stage: mount
Dec 13 01:35:24.607826 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:35:24.607826 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:35:24.607826 ignition[932]: INFO : mount: mount passed
Dec 13 01:35:24.607826 ignition[932]: INFO : Ignition finished successfully
Dec 13 01:35:24.616435 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:35:24.627374 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:35:24.640730 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:35:24.660192 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944)
Dec 13 01:35:24.663731 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:35:24.663834 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:35:24.663853 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:35:24.670645 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:35:24.672374 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:35:24.736182 ignition[962]: INFO : Ignition 2.19.0
Dec 13 01:35:24.736182 ignition[962]: INFO : Stage: files
Dec 13 01:35:24.738647 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:35:24.738647 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:35:24.738647 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:35:24.743823 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:35:24.743823 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:35:24.747783 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:35:24.749794 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:35:24.749794 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:35:24.748766 unknown[962]: wrote ssh authorized keys file for user: core
Dec 13 01:35:24.755744 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:35:24.755744 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:35:24.755744 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:35:24.755744 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:35:24.805256 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:35:24.926691 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:35:24.929183 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:35:24.929183 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 01:35:25.292761 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Dec 13 01:35:25.349535 systemd-networkd[776]: eth0: Gained IPv6LL
Dec 13 01:35:25.542610 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:35:25.542610 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:35:25.547583 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:35:25.547583 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:35:25.547583 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:35:25.547583 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:35:25.547583 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
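The files stage above materializes the "core" user, its SSH keys, and several downloaded payloads. The config itself arrived over the QEMU firmware channel and is never printed; a Butane sketch of the kind of input that yields operations like op(2) and op(5) might look as follows (every field below, including the placeholder key, is an assumption for illustration):

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...   # placeholder key
    storage:
      files:
        - path: /opt/bin/cilium.tar.gz
          contents:
            source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz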
"/sysroot/home/core/nfs-pod.yaml" Dec 13 01:35:25.547583 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:35:25.547583 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:35:25.547583 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:35:25.547583 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:35:25.547583 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:35:25.547583 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:35:25.547583 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:35:25.547583 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:35:25.973530 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Dec 13 01:35:26.736596 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:35:26.736596 ignition[962]: INFO : files: op(d): [started] processing unit "containerd.service" Dec 13 01:35:26.740986 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:35:26.740986 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:35:26.740986 ignition[962]: INFO : files: op(d): [finished] processing unit "containerd.service" Dec 13 01:35:26.740986 ignition[962]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Dec 13 01:35:26.740986 ignition[962]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:35:26.740986 ignition[962]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:35:26.740986 ignition[962]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Dec 13 01:35:26.740986 ignition[962]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Dec 13 01:35:26.740986 ignition[962]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:35:26.740986 ignition[962]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:35:26.740986 ignition[962]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Dec 13 01:35:26.740986 
Dec 13 01:35:26.740986 ignition[962]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:35:26.773320 ignition[962]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:35:26.781399 ignition[962]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:35:26.783471 ignition[962]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:35:26.783471 ignition[962]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:35:26.783471 ignition[962]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:35:26.783471 ignition[962]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:35:26.783471 ignition[962]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:35:26.783471 ignition[962]: INFO : files: files passed
Dec 13 01:35:26.783471 ignition[962]: INFO : Ignition finished successfully
Dec 13 01:35:26.785153 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:35:26.800350 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:35:26.802720 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:35:26.804843 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:35:26.804982 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:35:26.815781 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 01:35:26.818753 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:35:26.818753 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:35:26.824259 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:35:26.822157 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:35:26.825023 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:35:26.837460 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:35:26.873572 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:35:26.874901 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:35:26.878069 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:35:26.880440 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:35:26.882841 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:35:26.897527 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:35:26.915536 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:35:26.924721 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:35:26.940089 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
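Ops op(13) and op(15) above adjust systemd presets rather than creating enablement symlinks directly. The equivalent preset file Ignition drops would read roughly like this (path and filename are assumptions; the two directives simply mirror the logged operations):

    # /sysroot/etc/systemd/system-preset/20-ignition.preset (illustrative)
    enable prepare-helm.service
    disable coreos-metadata.service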
Dec 13 01:35:26.941679 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:35:26.944337 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:35:26.946513 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:35:26.946678 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:35:26.951004 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:35:26.952453 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:35:26.953652 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:35:26.955032 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:35:26.957580 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:35:26.959999 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:35:26.963693 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:35:26.966637 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:35:26.969276 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:35:26.971884 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:35:26.974906 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:35:26.975131 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:35:26.979170 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:35:26.979433 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:35:26.981666 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:35:26.984295 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:35:26.987560 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:35:26.987790 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:35:26.990231 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:35:26.990423 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:35:26.995032 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:35:26.996833 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:35:27.001337 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:35:27.003565 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:35:27.004809 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:35:27.005244 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:35:27.005439 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:35:27.005843 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:35:27.005968 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:35:27.011676 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:35:27.011829 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:35:27.014671 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:35:27.014789 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:35:27.026491 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:35:27.029869 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:35:27.030943 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:35:27.031110 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:35:27.033614 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:35:27.033754 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:35:27.042581 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:35:27.042751 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:35:27.047931 ignition[1017]: INFO : Ignition 2.19.0
Dec 13 01:35:27.047931 ignition[1017]: INFO : Stage: umount
Dec 13 01:35:27.049825 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:35:27.049825 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:35:27.049825 ignition[1017]: INFO : umount: umount passed
Dec 13 01:35:27.049825 ignition[1017]: INFO : Ignition finished successfully
Dec 13 01:35:27.051349 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:35:27.051506 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:35:27.055129 systemd[1]: Stopped target network.target - Network.
Dec 13 01:35:27.055421 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:35:27.055585 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:35:27.058869 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:35:27.058950 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:35:27.061380 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:35:27.061448 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:35:27.061957 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:35:27.062059 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:35:27.069596 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:35:27.070823 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:35:27.075245 systemd-networkd[776]: eth0: DHCPv6 lease lost
Dec 13 01:35:27.078285 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:35:27.078436 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:35:27.079011 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:35:27.079056 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:35:27.089342 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:35:27.091324 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:35:27.091407 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:35:27.092875 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:35:27.093549 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:35:27.095173 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:35:27.106021 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:35:27.106204 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:35:27.108080 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:35:27.108569 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:35:27.110494 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:35:27.110575 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:35:27.117653 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:35:27.117935 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:35:27.121119 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:35:27.121332 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:35:27.124931 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:35:27.125029 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:35:27.126534 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:35:27.126594 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:35:27.128732 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:35:27.128870 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:35:27.134645 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:35:27.134720 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:35:27.138827 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:35:27.138925 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:35:27.150334 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:35:27.150426 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:35:27.150492 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:35:27.155065 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:35:27.155162 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:35:27.158099 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:35:27.158198 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:35:27.158707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:35:27.158758 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:35:27.161893 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:35:27.201980 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:35:27.202182 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:35:27.750212 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:35:27.750409 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:35:27.752074 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:35:27.755343 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:35:27.755427 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:35:27.793568 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:35:27.802424 systemd[1]: Switching root.
Dec 13 01:35:27.837922 systemd-journald[193]: Journal stopped
Dec 13 01:35:30.035496 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:35:30.035598 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:35:30.035630 kernel: SELinux: policy capability open_perms=1
Dec 13 01:35:30.035652 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:35:30.035673 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:35:30.035696 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:35:30.035713 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:35:30.035729 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:35:30.035744 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:35:30.035761 kernel: audit: type=1403 audit(1734053728.999:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:35:30.035787 systemd[1]: Successfully loaded SELinux policy in 44.420ms.
Dec 13 01:35:30.035815 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.312ms.
Dec 13 01:35:30.035838 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:35:30.035874 systemd[1]: Detected virtualization kvm.
Dec 13 01:35:30.035890 systemd[1]: Detected architecture x86-64.
Dec 13 01:35:30.035907 systemd[1]: Detected first boot.
Dec 13 01:35:30.035923 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:35:30.035939 zram_generator::config[1079]: No configuration found.
Dec 13 01:35:30.035958 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:35:30.035974 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:35:30.035994 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 01:35:30.036012 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:35:30.036029 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:35:30.036045 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:35:30.036062 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:35:30.036092 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:35:30.036111 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:35:30.036128 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:35:30.038362 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:35:30.038395 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:35:30.038409 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:35:30.038422 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:35:30.038434 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:35:30.038448 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:35:30.038461 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:35:30.038474 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
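The "zram_generator::config ... No configuration found" line means no zram devices are configured on this image. If one were wanted, a config in the standard zram-generator format would look roughly like this (illustrative values; no such file exists on this host):

    # /etc/systemd/zram-generator.conf (not present here, hence "No configuration found")
    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd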
Dec 13 01:35:30.038494 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:35:30.038506 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:35:30.038521 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:35:30.038534 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:35:30.038547 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:35:30.038561 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:35:30.038574 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:35:30.038588 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:35:30.038601 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:35:30.038614 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:35:30.038629 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:35:30.038642 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:35:30.038654 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:35:30.038667 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:35:30.038679 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:35:30.038692 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:35:30.038704 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:35:30.038717 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:35:30.038729 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:35:30.038744 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:35:30.038759 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:35:30.038772 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:35:30.038785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:35:30.038798 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:35:30.038813 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:35:30.038826 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:35:30.038841 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:35:30.038862 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:35:30.038878 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:35:30.038891 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:35:30.038905 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:35:30.038917 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 01:35:30.038931 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
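Each "Starting modprobe@X.service" line above is an instance of systemd's modprobe@.service template, which boils down to running modprobe on the instance name. The effect is roughly (sketch; the template's exact ExecStart flags may differ):

    systemctl cat modprobe@.service   # shows the template these instances expand
    modprobe fuse                     # what modprobe@fuse.service amounts to; matches the later
                                      # kernel line "fuse: init (API version 7.39)"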
Dec 13 01:35:30.038943 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:35:30.038955 kernel: fuse: init (API version 7.39)
Dec 13 01:35:30.038972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:35:30.038985 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:35:30.038997 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:35:30.039009 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:35:30.039022 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:35:30.039035 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:35:30.039047 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:35:30.039081 kernel: ACPI: bus type drm_connector registered
Dec 13 01:35:30.039094 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:35:30.039157 systemd-journald[1160]: Collecting audit messages is disabled.
Dec 13 01:35:30.039183 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:35:30.039195 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:35:30.039208 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:35:30.039220 systemd-journald[1160]: Journal started
Dec 13 01:35:30.039243 systemd-journald[1160]: Runtime Journal (/run/log/journal/f3fd4c6153c94f1aa2a398a83d7f8d91) is 6.0M, max 48.3M, 42.2M free.
Dec 13 01:35:30.043984 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:35:30.043548 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:35:30.045381 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:35:30.045775 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:35:30.048191 kernel: loop: module loaded
Dec 13 01:35:30.047984 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:35:30.048363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:35:30.050097 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:35:30.050347 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:35:30.052013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:35:30.052346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:35:30.054746 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:35:30.055044 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:35:30.057211 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:35:30.057528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:35:30.059695 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:35:30.061658 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:35:30.064244 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:35:30.085960 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:35:30.093393 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:35:30.102601 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:35:30.104089 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:35:30.109803 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:35:30.115343 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:35:30.116698 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:35:30.120314 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:35:30.121667 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:35:30.125400 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:35:30.132791 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:35:30.138898 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:35:30.141162 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:35:30.143686 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:35:30.150148 systemd-journald[1160]: Time spent on flushing to /var/log/journal/f3fd4c6153c94f1aa2a398a83d7f8d91 is 21.062ms for 993 entries.
Dec 13 01:35:30.150148 systemd-journald[1160]: System Journal (/var/log/journal/f3fd4c6153c94f1aa2a398a83d7f8d91) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:35:30.214481 systemd-journald[1160]: Received client request to flush runtime journal.
Dec 13 01:35:30.170588 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:35:30.172733 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:35:30.182628 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:35:30.188007 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:35:30.203686 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:35:30.221390 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:35:30.224103 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Dec 13 01:35:30.224131 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Dec 13 01:35:30.225745 udevadm[1227]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 01:35:30.237726 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:35:30.247748 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:35:30.280165 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:35:30.292573 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:35:30.317417 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Dec 13 01:35:30.317451 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
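The journald size lines report the computed caps for the runtime (/run) and persistent (/var) journals; by default these are derived from the size of the backing filesystem. They could be pinned explicitly with a drop-in along these lines (illustrative file name and values, chosen only to echo the logged caps):

    # /etc/systemd/journald.conf.d/00-size.conf
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=195M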
Dec 13 01:35:30.327049 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:35:31.058566 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:35:31.070570 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:35:31.107839 systemd-udevd[1243]: Using default interface naming scheme 'v255'.
Dec 13 01:35:31.129640 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:35:31.144445 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:35:31.159402 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:35:31.179171 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1262)
Dec 13 01:35:31.257183 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1254)
Dec 13 01:35:31.250396 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Dec 13 01:35:31.262960 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1254)
Dec 13 01:35:31.287744 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:35:31.354058 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 01:35:31.350259 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:35:31.364162 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:35:31.366302 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Dec 13 01:35:31.371323 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 01:35:31.371518 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 01:35:31.371727 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 01:35:31.393391 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 01:35:31.464421 systemd-networkd[1250]: lo: Link UP
Dec 13 01:35:31.483811 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:35:31.464435 systemd-networkd[1250]: lo: Gained carrier
Dec 13 01:35:31.466759 systemd-networkd[1250]: Enumeration completed
Dec 13 01:35:31.467271 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:35:31.467276 systemd-networkd[1250]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:35:31.468334 systemd-networkd[1250]: eth0: Link UP
Dec 13 01:35:31.468340 systemd-networkd[1250]: eth0: Gained carrier
Dec 13 01:35:31.468354 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:35:31.483080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:35:31.484944 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:35:31.490484 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:35:31.555939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:35:31.556856 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:35:31.569240 systemd-networkd[1250]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:35:31.576744 kernel: kvm_amd: TSC scaling supported
Dec 13 01:35:31.576828 kernel: kvm_amd: Nested Virtualization enabled
Dec 13 01:35:31.576848 kernel: kvm_amd: Nested Paging enabled
Dec 13 01:35:31.578179 kernel: kvm_amd: LBR virtualization supported
Dec 13 01:35:31.578240 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 13 01:35:31.578265 kernel: kvm_amd: Virtual GIF supported
Dec 13 01:35:31.590706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:35:31.597161 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:35:31.640268 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:35:31.648468 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:35:31.662115 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:35:31.675053 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:35:31.725881 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:35:31.728052 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:35:31.737489 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:35:31.744629 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:35:31.787010 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:35:31.789173 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:35:31.790771 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:35:31.790813 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:35:31.792050 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:35:31.794573 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:35:31.807436 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:35:31.811381 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:35:31.813011 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:35:31.814613 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:35:31.818406 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:35:31.822355 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:35:31.826642 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:35:31.838860 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:35:31.845246 kernel: loop0: detected capacity change from 0 to 142488
Dec 13 01:35:31.860606 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:35:31.861549 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
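systemd-machine-id-commit persists the machine ID that was initialized from the VM UUID earlier in this boot (until this point it lives on a transient mount, hence the etc-machine\x2did.mount deactivation). Done by hand, the same step would be (sketch):

    systemd-machine-id-setup --commit   # copy the transient /etc/machine-id onto the real root disk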
Dec 13 01:35:31.876437 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:35:31.919187 kernel: loop1: detected capacity change from 0 to 140768
Dec 13 01:35:31.965253 kernel: loop2: detected capacity change from 0 to 211296
Dec 13 01:35:32.029183 kernel: loop3: detected capacity change from 0 to 142488
Dec 13 01:35:32.055181 kernel: loop4: detected capacity change from 0 to 140768
Dec 13 01:35:32.077172 kernel: loop5: detected capacity change from 0 to 211296
Dec 13 01:35:32.086985 (sd-merge)[1318]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 13 01:35:32.087916 (sd-merge)[1318]: Merged extensions into '/usr'.
Dec 13 01:35:32.099313 systemd[1]: Reloading requested from client PID 1305 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:35:32.099336 systemd[1]: Reloading...
Dec 13 01:35:32.227196 zram_generator::config[1346]: No configuration found.
Dec 13 01:35:32.403088 ldconfig[1301]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:35:32.492698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:35:32.589190 systemd[1]: Reloading finished in 489 ms.
Dec 13 01:35:32.611399 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:35:32.614271 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:35:32.631479 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:35:32.635441 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:35:32.642074 systemd[1]: Reloading requested from client PID 1390 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:35:32.642103 systemd[1]: Reloading...
Dec 13 01:35:32.783666 zram_generator::config[1421]: No configuration found.
Dec 13 01:35:32.807665 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:35:32.808205 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:35:32.809615 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:35:32.810093 systemd-tmpfiles[1391]: ACLs are not supported, ignoring.
Dec 13 01:35:32.810276 systemd-tmpfiles[1391]: ACLs are not supported, ignoring.
Dec 13 01:35:32.815552 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:35:32.815572 systemd-tmpfiles[1391]: Skipping /boot
Dec 13 01:35:32.831382 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:35:32.831405 systemd-tmpfiles[1391]: Skipping /boot
Dec 13 01:35:32.945306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:35:33.024995 systemd[1]: Reloading finished in 382 ms.
Dec 13 01:35:33.049996 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:35:33.069703 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:35:33.083540 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
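The (sd-merge) lines show systemd-sysext overlaying the three extension images onto /usr, which is also what the loop device capacity changes correspond to. After boot the merge can be inspected with the stock tooling; the kubernetes image is the one Ignition linked into /etc/extensions earlier:

    systemd-sysext status                  # lists containerd-flatcar, docker-flatcar, kubernetes
    ls -l /etc/extensions/kubernetes.raw   # -> /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw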
Dec 13 01:35:33.091061 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:35:33.099026 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:35:33.113427 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:35:33.119650 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:35:33.119880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:35:33.125120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:35:33.132813 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:35:33.139304 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:35:33.147727 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:35:33.148153 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:35:33.149735 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:35:33.152878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:35:33.153312 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:35:33.163358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:35:33.163807 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:35:33.184415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:35:33.186473 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:35:33.191359 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:35:33.193217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:35:33.196038 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:35:33.206058 augenrules[1498]: No rules
Dec 13 01:35:33.205969 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:35:33.206376 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:35:33.212262 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:35:33.221544 systemd-networkd[1250]: eth0: Gained IPv6LL
Dec 13 01:35:33.223726 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:35:33.224114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:35:33.227080 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:35:33.230379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:35:33.230732 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:35:33.237948 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:35:33.255616 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:35:33.259861 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:35:33.260233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:35:33.268021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:35:33.271991 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:35:33.275490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:35:33.281423 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:35:33.281467 systemd-resolved[1469]: Positive Trust Anchors:
Dec 13 01:35:33.281481 systemd-resolved[1469]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:35:33.281522 systemd-resolved[1469]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:35:33.283305 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:35:33.287505 systemd-resolved[1469]: Defaulting to hostname 'linux'.
Dec 13 01:35:33.288329 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:35:33.290107 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:35:33.290773 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:35:33.293228 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:35:33.295701 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:35:33.296025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:35:33.298101 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:35:33.298639 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:35:33.301034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:35:33.301367 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:35:33.304107 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:35:33.304480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:35:33.312537 systemd[1]: Reached target network.target - Network.
Dec 13 01:35:33.314210 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:35:33.315889 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:35:33.317627 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:35:33.317759 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
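The "Positive Trust Anchors" entry is the built-in DNSSEC root key (the root zone's DS record), and the negative anchors exempt private and special-use domains from validation. Both, along with the resolver state, can be inspected at runtime with the stock tooling:

    resolvectl status   # shows DNS servers, DNSSEC setting, and the active trust anchors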
Dec 13 01:35:33.317811 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:35:33.429482 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:35:33.432219 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:35:34.758356 systemd-resolved[1469]: Clock change detected. Flushing caches.
Dec 13 01:35:34.758383 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 01:35:34.758439 systemd-timesyncd[1522]: Initial clock synchronization to Fri 2024-12-13 01:35:34.758221 UTC.
Dec 13 01:35:34.759665 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:35:34.761867 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:35:34.763489 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:35:34.765079 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:35:34.765111 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:35:34.766116 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:35:34.767717 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:35:34.769304 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:35:34.770775 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:35:34.773568 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:35:34.776906 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:35:34.780048 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:35:34.787093 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:35:34.788387 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:35:34.789595 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:35:34.791138 systemd[1]: System is tainted: cgroupsv1
Dec 13 01:35:34.791204 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:35:34.791237 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:35:34.793927 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:35:34.798063 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 01:35:34.803334 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:35:34.809655 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:35:34.812951 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:35:34.814170 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:35:34.818167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:35:34.820863 jq[1539]: false
Dec 13 01:35:34.826255 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
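systemd-timesyncd reaches 10.0.0.1:123 and steps the clock, which is why resolved flushes its caches and every timestamp after this point jumps forward by roughly 1.3 seconds. The server address matches the DHCP server, so it most likely arrived via the DHCP lease; pinning one explicitly would look like this (illustrative drop-in, not present on this host):

    # /etc/systemd/timesyncd.conf.d/ntp.conf
    [Time]
    NTP=10.0.0.1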
Dec 13 01:35:34.830199 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:35:34.833190 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:35:34.839947 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:35:34.844515 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:35:34.852027 dbus-daemon[1537]: [system] SELinux support is enabled
Dec 13 01:35:34.861587 extend-filesystems[1541]: Found loop3
Dec 13 01:35:34.861587 extend-filesystems[1541]: Found loop4
Dec 13 01:35:34.861587 extend-filesystems[1541]: Found loop5
Dec 13 01:35:34.861587 extend-filesystems[1541]: Found sr0
Dec 13 01:35:34.861587 extend-filesystems[1541]: Found vda
Dec 13 01:35:34.861587 extend-filesystems[1541]: Found vda1
Dec 13 01:35:34.861587 extend-filesystems[1541]: Found vda2
Dec 13 01:35:34.861587 extend-filesystems[1541]: Found vda3
Dec 13 01:35:34.861587 extend-filesystems[1541]: Found usr
Dec 13 01:35:34.861587 extend-filesystems[1541]: Found vda4
Dec 13 01:35:34.861587 extend-filesystems[1541]: Found vda6
Dec 13 01:35:34.861587 extend-filesystems[1541]: Found vda7
Dec 13 01:35:34.861587 extend-filesystems[1541]: Found vda9
Dec 13 01:35:34.861587 extend-filesystems[1541]: Checking size of /dev/vda9
Dec 13 01:35:34.862316 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:35:34.881587 extend-filesystems[1541]: Resized partition /dev/vda9
Dec 13 01:35:34.869128 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:35:34.872378 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:35:34.876013 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:35:34.880048 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:35:34.884519 extend-filesystems[1571]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:35:34.890272 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:35:34.893166 jq[1570]: true
Dec 13 01:35:34.890633 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:35:34.894520 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:35:34.894879 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:35:34.911665 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:35:34.915816 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:35:34.916319 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:35:34.924070 update_engine[1567]: I20241213 01:35:34.923514 1567 main.cc:92] Flatcar Update Engine starting
Dec 13 01:35:34.932457 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1245)
Dec 13 01:35:34.932534 update_engine[1567]: I20241213 01:35:34.925565 1567 update_check_scheduler.cc:74] Next update check in 6m27s
Dec 13 01:35:34.943101 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 01:35:34.937827 (ntainerd)[1584]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:35:34.952094 jq[1583]: true
Dec 13 01:35:34.954985 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 01:35:34.957404 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 01:35:34.981442 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:35:35.010570 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:35:35.010734 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:35:35.010769 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:35:35.012160 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:35:35.012183 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:35:35.014342 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:35:35.025308 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:35:35.046387 tar[1580]: linux-amd64/helm
Dec 13 01:35:35.234230 systemd-logind[1558]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 01:35:35.234257 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:35:35.234492 systemd-logind[1558]: New seat seat0.
Dec 13 01:35:35.235812 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:35:35.235930 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:35:35.249145 locksmithd[1617]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:35:35.279521 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:35:35.295436 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 01:35:35.293469 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:35:35.308292 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:35:35.308887 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:35:35.355935 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:35:35.361212 extend-filesystems[1571]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 01:35:35.361212 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:35:35.361212 extend-filesystems[1571]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 01:35:35.368422 extend-filesystems[1541]: Resized filesystem in /dev/vda9
Dec 13 01:35:35.363405 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:35:35.369505 bash[1616]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:35:35.363797 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:35:35.377734 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:35:35.381871 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 01:35:35.422523 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:35:35.438535 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:35:35.444321 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:35:35.445743 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:35:35.807435 containerd[1584]: time="2024-12-13T01:35:35.807262186Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:35:35.862743 containerd[1584]: time="2024-12-13T01:35:35.862601287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:35:35.869047 containerd[1584]: time="2024-12-13T01:35:35.867044379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:35:35.869047 containerd[1584]: time="2024-12-13T01:35:35.867124169Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:35:35.869047 containerd[1584]: time="2024-12-13T01:35:35.867157011Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:35:35.869047 containerd[1584]: time="2024-12-13T01:35:35.867541522Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:35:35.869047 containerd[1584]: time="2024-12-13T01:35:35.867575014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:35:35.869047 containerd[1584]: time="2024-12-13T01:35:35.867703235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:35:35.869047 containerd[1584]: time="2024-12-13T01:35:35.867727250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:35:35.869047 containerd[1584]: time="2024-12-13T01:35:35.868167706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:35:35.869047 containerd[1584]: time="2024-12-13T01:35:35.868194576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:35:35.869047 containerd[1584]: time="2024-12-13T01:35:35.868220074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:35:35.869047 containerd[1584]: time="2024-12-13T01:35:35.868236064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:35:35.869439 containerd[1584]: time="2024-12-13T01:35:35.868397337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:35:35.869439 containerd[1584]: time="2024-12-13T01:35:35.868827373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:35:35.869697 containerd[1584]: time="2024-12-13T01:35:35.869667338Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:35:35.869795 containerd[1584]: time="2024-12-13T01:35:35.869773087Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:35:35.870051 containerd[1584]: time="2024-12-13T01:35:35.870024869Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:35:35.870244 containerd[1584]: time="2024-12-13T01:35:35.870220896Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:35:35.877986 tar[1580]: linux-amd64/LICENSE
Dec 13 01:35:35.878114 tar[1580]: linux-amd64/README.md
Dec 13 01:35:35.894739 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:35:35.943438 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:35:35.994446 systemd[1]: Started sshd@0-10.0.0.111:22-10.0.0.1:39736.service - OpenSSH per-connection server daemon (10.0.0.1:39736).
Dec 13 01:35:36.148636 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 39736 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:35:36.150347 containerd[1584]: time="2024-12-13T01:35:36.150281460Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:35:36.150471 containerd[1584]: time="2024-12-13T01:35:36.150395093Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:35:36.150471 containerd[1584]: time="2024-12-13T01:35:36.150423226Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:35:36.150471 containerd[1584]: time="2024-12-13T01:35:36.150444295Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:35:36.150563 containerd[1584]: time="2024-12-13T01:35:36.150510880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:35:36.150761 containerd[1584]: time="2024-12-13T01:35:36.150729099Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:35:36.151257 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:35:36.151756 containerd[1584]: time="2024-12-13T01:35:36.151430575Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:35:36.151756 containerd[1584]: time="2024-12-13T01:35:36.151624759Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:35:36.151756 containerd[1584]: time="2024-12-13T01:35:36.151647702Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:35:36.151756 containerd[1584]: time="2024-12-13T01:35:36.151665976Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:35:36.151756 containerd[1584]: time="2024-12-13T01:35:36.151689811Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:35:36.151756 containerd[1584]: time="2024-12-13T01:35:36.151708697Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:35:36.151756 containerd[1584]: time="2024-12-13T01:35:36.151744544Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.151765934Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.151785871Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.151817410Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.151835715Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.151851915Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.151894755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.151913701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.151928990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.151947755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.151963374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.151982209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.151998370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.152039507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.152058 containerd[1584]: time="2024-12-13T01:35:36.152057641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.152436 containerd[1584]: time="2024-12-13T01:35:36.152086836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.152436 containerd[1584]: time="2024-12-13T01:35:36.152104980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.152436 containerd[1584]: time="2024-12-13T01:35:36.152121831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.152436 containerd[1584]: time="2024-12-13T01:35:36.152142440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.152436 containerd[1584]: time="2024-12-13T01:35:36.152211409Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:35:36.152436 containerd[1584]: time="2024-12-13T01:35:36.152246756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.153198 containerd[1584]: time="2024-12-13T01:35:36.153062035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.153198 containerd[1584]: time="2024-12-13T01:35:36.153089686Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:35:36.153198 containerd[1584]: time="2024-12-13T01:35:36.153140121Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:35:36.153198 containerd[1584]: time="2024-12-13T01:35:36.153161742Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:35:36.153198 containerd[1584]: time="2024-12-13T01:35:36.153172772Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:35:36.153198 containerd[1584]: time="2024-12-13T01:35:36.153186288Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:35:36.153198 containerd[1584]: time="2024-12-13T01:35:36.153196607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.153407 containerd[1584]: time="2024-12-13T01:35:36.153211956Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:35:36.153407 containerd[1584]: time="2024-12-13T01:35:36.153225701Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:35:36.153407 containerd[1584]: time="2024-12-13T01:35:36.153237373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:35:36.153775 containerd[1584]: time="2024-12-13T01:35:36.153560369Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:35:36.153775 containerd[1584]: time="2024-12-13T01:35:36.153645449Z" level=info msg="Connect containerd service"
Dec 13 01:35:36.153775 containerd[1584]: time="2024-12-13T01:35:36.153705471Z" level=info msg="using legacy CRI server"
Dec 13 01:35:36.153775 containerd[1584]: time="2024-12-13T01:35:36.153713416Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:35:36.154072 containerd[1584]: time="2024-12-13T01:35:36.153862205Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:35:36.154649 containerd[1584]: time="2024-12-13T01:35:36.154528124Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:35:36.154911 containerd[1584]: time="2024-12-13T01:35:36.154832966Z" level=info msg="Start subscribing containerd event"
Dec 13 01:35:36.155029 containerd[1584]: time="2024-12-13T01:35:36.154921462Z" level=info msg="Start recovering state"
Dec 13 01:35:36.155029 containerd[1584]: time="2024-12-13T01:35:36.154985351Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:35:36.156178 containerd[1584]: time="2024-12-13T01:35:36.155700112Z" level=info msg="Start event monitor"
Dec 13 01:35:36.156178 containerd[1584]: time="2024-12-13T01:35:36.155706073Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:35:36.157043 containerd[1584]: time="2024-12-13T01:35:36.155742071Z" level=info msg="Start snapshots syncer"
Dec 13 01:35:36.157043 containerd[1584]: time="2024-12-13T01:35:36.156346053Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:35:36.157043 containerd[1584]: time="2024-12-13T01:35:36.156361142Z" level=info msg="Start streaming server"
Dec 13 01:35:36.157043 containerd[1584]: time="2024-12-13T01:35:36.156424430Z" level=info msg="containerd successfully booted in 0.353580s"
Dec 13 01:35:36.160612 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:35:36.180186 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:35:36.234489 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:35:36.238702 systemd-logind[1558]: New session 1 of user core.
Dec 13 01:35:36.259659 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:35:36.269310 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:35:36.275540 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:35:36.411076 systemd[1668]: Queued start job for default target default.target.
Dec 13 01:35:36.411604 systemd[1668]: Created slice app.slice - User Application Slice.
Dec 13 01:35:36.411636 systemd[1668]: Reached target paths.target - Paths.
Dec 13 01:35:36.411654 systemd[1668]: Reached target timers.target - Timers.
Dec 13 01:35:36.422231 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:35:36.431176 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:35:36.431284 systemd[1668]: Reached target sockets.target - Sockets.
Dec 13 01:35:36.431303 systemd[1668]: Reached target basic.target - Basic System.
Dec 13 01:35:36.431377 systemd[1668]: Reached target default.target - Main User Target.
Dec 13 01:35:36.431429 systemd[1668]: Startup finished in 146ms.
Dec 13 01:35:36.432267 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:35:36.450753 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:35:36.508365 systemd[1]: Started sshd@1-10.0.0.111:22-10.0.0.1:40912.service - OpenSSH per-connection server daemon (10.0.0.1:40912).
Dec 13 01:35:36.546624 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 40912 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:35:36.548674 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:35:36.553683 systemd-logind[1558]: New session 2 of user core.
Dec 13 01:35:36.567461 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:35:36.641594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:35:36.643739 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:35:36.646327 systemd[1]: Startup finished in 9.630s (kernel) + 6.363s (userspace) = 15.994s.
Dec 13 01:35:36.649477 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:35:36.653219 sshd[1680]: pam_unix(sshd:session): session closed for user core
Dec 13 01:35:36.659616 systemd[1]: Started sshd@2-10.0.0.111:22-10.0.0.1:40914.service - OpenSSH per-connection server daemon (10.0.0.1:40914).
Dec 13 01:35:36.660345 systemd[1]: sshd@1-10.0.0.111:22-10.0.0.1:40912.service: Deactivated successfully.
Dec 13 01:35:36.664536 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:35:36.666865 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:35:36.670475 systemd-logind[1558]: Removed session 2.
Dec 13 01:35:36.699471 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 40914 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:35:36.701524 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:35:36.706952 systemd-logind[1558]: New session 3 of user core.
Dec 13 01:35:36.789621 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:35:36.842747 sshd[1697]: pam_unix(sshd:session): session closed for user core
Dec 13 01:35:36.850263 systemd[1]: Started sshd@3-10.0.0.111:22-10.0.0.1:40918.service - OpenSSH per-connection server daemon (10.0.0.1:40918).
Dec 13 01:35:36.850834 systemd[1]: sshd@2-10.0.0.111:22-10.0.0.1:40914.service: Deactivated successfully.
Dec 13 01:35:36.853813 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:35:36.855894 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:35:36.863648 systemd-logind[1558]: Removed session 3.
Dec 13 01:35:36.902996 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 40918 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:35:36.905089 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:35:36.910518 systemd-logind[1558]: New session 4 of user core.
Dec 13 01:35:36.921327 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:35:36.984723 sshd[1706]: pam_unix(sshd:session): session closed for user core
Dec 13 01:35:36.995394 systemd[1]: Started sshd@4-10.0.0.111:22-10.0.0.1:40926.service - OpenSSH per-connection server daemon (10.0.0.1:40926).
Dec 13 01:35:36.996029 systemd[1]: sshd@3-10.0.0.111:22-10.0.0.1:40918.service: Deactivated successfully.
Dec 13 01:35:36.999827 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:35:37.002997 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:35:37.004558 systemd-logind[1558]: Removed session 4.
Dec 13 01:35:37.076278 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 40926 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:35:37.078349 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:35:37.083091 systemd-logind[1558]: New session 5 of user core.
Dec 13 01:35:37.090374 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:35:37.174814 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:35:37.175216 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:35:37.197125 sudo[1726]: pam_unix(sudo:session): session closed for user root
Dec 13 01:35:37.199573 sshd[1719]: pam_unix(sshd:session): session closed for user core
Dec 13 01:35:37.207372 systemd[1]: Started sshd@5-10.0.0.111:22-10.0.0.1:40942.service - OpenSSH per-connection server daemon (10.0.0.1:40942).
Dec 13 01:35:37.208067 systemd[1]: sshd@4-10.0.0.111:22-10.0.0.1:40926.service: Deactivated successfully.
Dec 13 01:35:37.211619 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:35:37.214133 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:35:37.215544 systemd-logind[1558]: Removed session 5.
Dec 13 01:35:37.298960 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 40942 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:35:37.300955 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:35:37.306958 systemd-logind[1558]: New session 6 of user core.
Dec 13 01:35:37.316492 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:35:37.378239 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:35:37.378716 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:35:37.383639 sudo[1737]: pam_unix(sudo:session): session closed for user root
Dec 13 01:35:37.391962 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:35:37.392347 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:35:37.418472 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:35:37.423630 auditctl[1741]: No rules
Dec 13 01:35:37.425251 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:35:37.425619 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:35:37.429317 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:35:37.470701 augenrules[1760]: No rules
Dec 13 01:35:37.473099 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:35:37.474693 sudo[1736]: pam_unix(sudo:session): session closed for user root
Dec 13 01:35:37.476992 sshd[1728]: pam_unix(sshd:session): session closed for user core
Dec 13 01:35:37.493464 systemd[1]: Started sshd@6-10.0.0.111:22-10.0.0.1:40952.service - OpenSSH per-connection server daemon (10.0.0.1:40952).
Dec 13 01:35:37.494505 systemd[1]: sshd@5-10.0.0.111:22-10.0.0.1:40942.service: Deactivated successfully.
Dec 13 01:35:37.506829 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:35:37.507952 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:35:37.510515 systemd-logind[1558]: Removed session 6.
Dec 13 01:35:37.531938 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 40952 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:35:37.535196 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:35:37.543473 systemd-logind[1558]: New session 7 of user core.
Dec 13 01:35:37.550510 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:35:37.608713 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:35:37.609189 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:35:37.618437 kubelet[1693]: E1213 01:35:37.618345 1693 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:35:37.625703 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:35:37.626544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:35:38.620334 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 01:35:38.620642 (dockerd)[1793]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:35:39.263234 dockerd[1793]: time="2024-12-13T01:35:39.263167172Z" level=info msg="Starting up"
Dec 13 01:35:41.274108 dockerd[1793]: time="2024-12-13T01:35:41.274038002Z" level=info msg="Loading containers: start."
Dec 13 01:35:41.753036 kernel: Initializing XFRM netlink socket
Dec 13 01:35:41.838039 systemd-networkd[1250]: docker0: Link UP
Dec 13 01:35:42.067154 dockerd[1793]: time="2024-12-13T01:35:42.067110587Z" level=info msg="Loading containers: done."
Dec 13 01:35:42.339823 dockerd[1793]: time="2024-12-13T01:35:42.339688798Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:35:42.341599 dockerd[1793]: time="2024-12-13T01:35:42.340138241Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 01:35:42.341599 dockerd[1793]: time="2024-12-13T01:35:42.340408828Z" level=info msg="Daemon has completed initialization"
Dec 13 01:35:42.527023 dockerd[1793]: time="2024-12-13T01:35:42.526855432Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:35:42.527567 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 01:35:43.569374 containerd[1584]: time="2024-12-13T01:35:43.569290881Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 01:35:44.271632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2282657987.mount: Deactivated successfully.
Dec 13 01:35:47.652527 containerd[1584]: time="2024-12-13T01:35:47.652398565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:47.674128 containerd[1584]: time="2024-12-13T01:35:47.674036554Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254"
Dec 13 01:35:47.719570 containerd[1584]: time="2024-12-13T01:35:47.719520803Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:47.738894 containerd[1584]: time="2024-12-13T01:35:47.738849171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:47.740329 containerd[1584]: time="2024-12-13T01:35:47.740273632Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 4.170905296s"
Dec 13 01:35:47.740329 containerd[1584]: time="2024-12-13T01:35:47.740315932Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 01:35:47.773900 containerd[1584]: time="2024-12-13T01:35:47.773852715Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 01:35:47.875976 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:35:47.885207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:35:48.071601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:35:48.077814 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:35:48.567159 kubelet[2020]: E1213 01:35:48.565761 2020 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:35:48.579794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:35:48.580677 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:35:50.939237 containerd[1584]: time="2024-12-13T01:35:50.939127135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:50.940397 containerd[1584]: time="2024-12-13T01:35:50.940309462Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732"
Dec 13 01:35:50.943332 containerd[1584]: time="2024-12-13T01:35:50.943217836Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:50.948154 containerd[1584]: time="2024-12-13T01:35:50.948078471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:50.950147 containerd[1584]: time="2024-12-13T01:35:50.950104872Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 3.17620594s"
Dec 13 01:35:50.950147 containerd[1584]: time="2024-12-13T01:35:50.950144566Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 01:35:51.025083 containerd[1584]: time="2024-12-13T01:35:51.024988014Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 01:35:53.785281 containerd[1584]: time="2024-12-13T01:35:53.784750394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:53.786926 containerd[1584]: time="2024-12-13T01:35:53.786821207Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822"
Dec 13 01:35:53.792692 containerd[1584]: time="2024-12-13T01:35:53.792396082Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:53.805881 containerd[1584]: time="2024-12-13T01:35:53.802957499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:53.805881 containerd[1584]: time="2024-12-13T01:35:53.804728009Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 2.779660395s"
Dec 13 01:35:53.805881 containerd[1584]: time="2024-12-13T01:35:53.804776811Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 01:35:53.854443 containerd[1584]: time="2024-12-13T01:35:53.854328418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 01:35:55.287365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059744835.mount: Deactivated successfully.
Dec 13 01:35:56.030189 containerd[1584]: time="2024-12-13T01:35:56.030082453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:56.031624 containerd[1584]: time="2024-12-13T01:35:56.031520430Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958"
Dec 13 01:35:56.033365 containerd[1584]: time="2024-12-13T01:35:56.033323982Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:56.036706 containerd[1584]: time="2024-12-13T01:35:56.036662413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:56.037716 containerd[1584]: time="2024-12-13T01:35:56.037658030Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.183284808s"
Dec 13 01:35:56.037716 containerd[1584]: time="2024-12-13T01:35:56.037698005Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 01:35:56.113959 containerd[1584]: time="2024-12-13T01:35:56.113900201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:35:56.950057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2947467386.mount: Deactivated successfully.
Dec 13 01:35:58.830616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:35:58.878489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:35:59.154309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:35:59.163107 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:35:59.579640 containerd[1584]: time="2024-12-13T01:35:59.579437931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:59.581341 containerd[1584]: time="2024-12-13T01:35:59.580953684Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Dec 13 01:35:59.582961 containerd[1584]: time="2024-12-13T01:35:59.582888533Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:59.587509 containerd[1584]: time="2024-12-13T01:35:59.587449225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:35:59.588631 containerd[1584]: time="2024-12-13T01:35:59.588571099Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.474615003s"
Dec 13 01:35:59.588631 containerd[1584]: time="2024-12-13T01:35:59.588621494Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 01:35:59.599855 kubelet[2120]: E1213 01:35:59.599715 2120 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:35:59.604713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:35:59.605157 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:35:59.617609 containerd[1584]: time="2024-12-13T01:35:59.617560274Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:36:00.274764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2121673800.mount: Deactivated successfully.
Dec 13 01:36:00.286099 containerd[1584]: time="2024-12-13T01:36:00.285179078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:00.286759 containerd[1584]: time="2024-12-13T01:36:00.286688358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Dec 13 01:36:00.289876 containerd[1584]: time="2024-12-13T01:36:00.289798671Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:00.292990 containerd[1584]: time="2024-12-13T01:36:00.292925064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:00.293924 containerd[1584]: time="2024-12-13T01:36:00.293718783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 676.103285ms"
Dec 13 01:36:00.293924 containerd[1584]: time="2024-12-13T01:36:00.293752616Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 01:36:00.326922 containerd[1584]: time="2024-12-13T01:36:00.326861297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 01:36:01.547619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1580528543.mount: Deactivated successfully.
Dec 13 01:36:05.036396 containerd[1584]: time="2024-12-13T01:36:05.036309734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:05.037152 containerd[1584]: time="2024-12-13T01:36:05.037078826Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Dec 13 01:36:05.038610 containerd[1584]: time="2024-12-13T01:36:05.038559613Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:05.041876 containerd[1584]: time="2024-12-13T01:36:05.041831419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:36:05.043452 containerd[1584]: time="2024-12-13T01:36:05.043403668Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.716486456s"
Dec 13 01:36:05.043452 containerd[1584]: time="2024-12-13T01:36:05.043448421Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 01:36:08.106232 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:36:08.122249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:36:08.139490 systemd[1]: Reloading requested from client PID 2275 ('systemctl') (unit session-7.scope)...
Dec 13 01:36:08.139508 systemd[1]: Reloading...
Dec 13 01:36:08.216047 zram_generator::config[2317]: No configuration found.
Dec 13 01:36:08.739877 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:36:08.826043 systemd[1]: Reloading finished in 686 ms.
Dec 13 01:36:08.878440 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:36:08.878563 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:36:08.879100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:36:08.888365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:36:09.031841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:36:09.036678 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:36:09.099725 kubelet[2374]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:36:09.099725 kubelet[2374]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:36:09.099725 kubelet[2374]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:36:09.101511 kubelet[2374]: I1213 01:36:09.101432 2374 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:36:09.397643 kubelet[2374]: I1213 01:36:09.397593 2374 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:36:09.397643 kubelet[2374]: I1213 01:36:09.397628 2374 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:36:09.397928 kubelet[2374]: I1213 01:36:09.397902 2374 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:36:09.456398 kubelet[2374]: E1213 01:36:09.456337 2374 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.111:6443: connect: connection refused
Dec 13 01:36:09.482234 kubelet[2374]: I1213 01:36:09.482168 2374 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:36:09.500618 kubelet[2374]: I1213 01:36:09.500541 2374 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:36:09.502325 kubelet[2374]: I1213 01:36:09.502282 2374 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:36:09.503053 kubelet[2374]: I1213 01:36:09.502820 2374 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:36:09.507811 kubelet[2374]: I1213 01:36:09.507563 2374 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:36:09.507811 kubelet[2374]: I1213 01:36:09.507609 2374 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:36:09.508111 kubelet[2374]: I1213 01:36:09.507883 2374 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:36:09.508299 kubelet[2374]: I1213 01:36:09.508277 2374 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:36:09.508299 kubelet[2374]: I1213 01:36:09.508300 2374 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:36:09.508346 kubelet[2374]: I1213 01:36:09.508340 2374 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:36:09.508371 kubelet[2374]: I1213 01:36:09.508365 2374 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:36:09.508998 kubelet[2374]: W1213 01:36:09.508925 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused
Dec 13 01:36:09.509065 kubelet[2374]: E1213 01:36:09.509029 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused
Dec 13 01:36:09.509391 kubelet[2374]: W1213 01:36:09.509346 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused
Dec 13 01:36:09.509391 kubelet[2374]: E1213 01:36:09.509385 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:09.513728 kubelet[2374]: I1213 01:36:09.513701 2374 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:36:09.523942 kubelet[2374]: I1213 01:36:09.523892 2374 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:36:09.524062 kubelet[2374]: W1213 01:36:09.524035 2374 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:36:09.524986 kubelet[2374]: I1213 01:36:09.524822 2374 server.go:1256] "Started kubelet" Dec 13 01:36:09.524986 kubelet[2374]: I1213 01:36:09.524963 2374 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:36:09.525376 kubelet[2374]: I1213 01:36:09.525335 2374 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:36:09.525424 kubelet[2374]: I1213 01:36:09.525415 2374 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:36:09.526255 kubelet[2374]: I1213 01:36:09.526221 2374 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:36:09.526366 kubelet[2374]: I1213 01:36:09.526334 2374 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:36:09.534485 kubelet[2374]: E1213 01:36:09.534447 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:09.534587 kubelet[2374]: I1213 01:36:09.534515 2374 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:36:09.535305 kubelet[2374]: I1213 01:36:09.534624 2374 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:36:09.535305 kubelet[2374]: I1213 01:36:09.534728 2374 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:36:09.535305 kubelet[2374]: E1213 01:36:09.535118 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="200ms" Dec 13 01:36:09.535305 kubelet[2374]: W1213 01:36:09.535161 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:09.535305 kubelet[2374]: E1213 01:36:09.535214 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:09.535521 kubelet[2374]: I1213 01:36:09.535491 2374 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:36:09.535616 kubelet[2374]: I1213 01:36:09.535589 2374 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:36:09.536647 kubelet[2374]: E1213 01:36:09.536594 2374 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:36:09.536893 kubelet[2374]: I1213 01:36:09.536866 2374 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:36:09.562337 kubelet[2374]: I1213 01:36:09.562310 2374 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:36:09.562523 kubelet[2374]: I1213 01:36:09.562489 2374 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:36:09.562523 kubelet[2374]: I1213 01:36:09.562516 2374 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:36:09.566087 kubelet[2374]: I1213 01:36:09.566057 2374 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:36:09.567849 kubelet[2374]: I1213 01:36:09.567810 2374 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:36:09.567849 kubelet[2374]: I1213 01:36:09.567855 2374 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:36:09.567972 kubelet[2374]: I1213 01:36:09.567875 2374 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:36:09.567972 kubelet[2374]: E1213 01:36:09.567927 2374 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:36:09.636625 kubelet[2374]: I1213 01:36:09.636581 2374 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:36:09.637072 kubelet[2374]: E1213 01:36:09.637050 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Dec 13 01:36:09.668418 kubelet[2374]: E1213 01:36:09.668203 2374 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:36:09.736328 kubelet[2374]: E1213 01:36:09.736236 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="400ms" Dec 13 01:36:09.838939 kubelet[2374]: I1213 01:36:09.838897 2374 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:36:09.839361 kubelet[2374]: E1213 01:36:09.839329 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Dec 13 01:36:09.868732 kubelet[2374]: E1213 01:36:09.868558 2374 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:36:10.137560 kubelet[2374]: E1213 01:36:10.137485 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="800ms" Dec 13 01:36:10.241359 kubelet[2374]: I1213 01:36:10.241307 2374 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:36:10.370939 kubelet[2374]: E1213 01:36:10.370673 2374 
kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:36:10.370939 kubelet[2374]: E1213 01:36:10.370879 2374 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.111:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.111:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181098b749f3b8dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:36:09.52478742 +0000 UTC m=+0.483513056,LastTimestamp:2024-12-13 01:36:09.52478742 +0000 UTC m=+0.483513056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:36:10.371291 kubelet[2374]: E1213 01:36:10.371197 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Dec 13 01:36:10.373373 kubelet[2374]: W1213 01:36:10.373290 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:10.373713 kubelet[2374]: E1213 01:36:10.373679 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:10.396613 kubelet[2374]: I1213 01:36:10.396434 2374 policy_none.go:49] "None policy: Start" Dec 13 01:36:10.397640 kubelet[2374]: I1213 01:36:10.397613 2374 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:36:10.397736 kubelet[2374]: I1213 01:36:10.397658 2374 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:36:10.416391 kubelet[2374]: I1213 01:36:10.416329 2374 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:36:10.416671 kubelet[2374]: I1213 01:36:10.416645 2374 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:36:10.418049 kubelet[2374]: E1213 01:36:10.417986 2374 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:36:10.473489 kubelet[2374]: W1213 01:36:10.473398 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:10.473489 kubelet[2374]: E1213 01:36:10.473489 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:10.694837 kubelet[2374]: W1213 01:36:10.694579 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:10.694837 kubelet[2374]: E1213 01:36:10.694669 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:10.938962 kubelet[2374]: E1213 01:36:10.938855 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="1.6s" Dec 13 01:36:10.975872 kubelet[2374]: W1213 01:36:10.975696 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:10.975872 kubelet[2374]: E1213 01:36:10.975785 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:11.171857 kubelet[2374]: I1213 01:36:11.171769 2374 topology_manager.go:215] "Topology Admit Handler" podUID="aba8d26090a901fceb9031b2fb5e9c27" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:36:11.173326 kubelet[2374]: I1213 01:36:11.173289 2374 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:36:11.173460 kubelet[2374]: I1213 01:36:11.173429 2374 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:36:11.173857 kubelet[2374]: E1213 01:36:11.173828 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Dec 13 01:36:11.174403 kubelet[2374]: I1213 01:36:11.174343 2374 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:36:11.276905 kubelet[2374]: I1213 01:36:11.276702 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aba8d26090a901fceb9031b2fb5e9c27-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aba8d26090a901fceb9031b2fb5e9c27\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:36:11.276905 kubelet[2374]: I1213 01:36:11.276772 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:11.276905 kubelet[2374]: I1213 01:36:11.276807 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:11.276905 kubelet[2374]: I1213 01:36:11.276833 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:11.276905 kubelet[2374]: I1213 01:36:11.276860 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:11.277253 kubelet[2374]: I1213 01:36:11.276894 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aba8d26090a901fceb9031b2fb5e9c27-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aba8d26090a901fceb9031b2fb5e9c27\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:36:11.277253 kubelet[2374]: I1213 01:36:11.276918 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aba8d26090a901fceb9031b2fb5e9c27-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aba8d26090a901fceb9031b2fb5e9c27\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:36:11.277253 kubelet[2374]: I1213 01:36:11.276949 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:11.277253 kubelet[2374]: I1213 01:36:11.276980 2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:36:11.479402 kubelet[2374]: E1213 01:36:11.479326 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:11.480410 containerd[1584]: time="2024-12-13T01:36:11.480354912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aba8d26090a901fceb9031b2fb5e9c27,Namespace:kube-system,Attempt:0,}" Dec 13 01:36:11.480976 kubelet[2374]: E1213 01:36:11.480470 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:11.481303 containerd[1584]: time="2024-12-13T01:36:11.481230337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 
01:36:11.482728 kubelet[2374]: E1213 01:36:11.482697 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:11.483169 containerd[1584]: time="2024-12-13T01:36:11.483124021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:36:11.602278 kubelet[2374]: E1213 01:36:11.602219 2374 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:11.691957 kubelet[2374]: W1213 01:36:11.691875 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:11.691957 kubelet[2374]: E1213 01:36:11.691962 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:12.540276 kubelet[2374]: E1213 01:36:12.540233 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="3.2s" Dec 13 01:36:12.775672 kubelet[2374]: I1213 01:36:12.775621 2374 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:36:12.776363 kubelet[2374]: E1213 01:36:12.776311 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Dec 13 01:36:12.938280 kubelet[2374]: W1213 01:36:12.938218 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:12.938280 kubelet[2374]: E1213 01:36:12.938272 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:13.186484 kubelet[2374]: W1213 01:36:13.186412 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:13.186484 kubelet[2374]: E1213 01:36:13.186483 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:13.495692 
kubelet[2374]: W1213 01:36:13.495620 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:13.495692 kubelet[2374]: E1213 01:36:13.495690 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:13.519301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943776713.mount: Deactivated successfully. Dec 13 01:36:13.548664 containerd[1584]: time="2024-12-13T01:36:13.548489329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:36:13.550911 containerd[1584]: time="2024-12-13T01:36:13.550837424Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:36:13.552506 containerd[1584]: time="2024-12-13T01:36:13.552397036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:36:13.553944 containerd[1584]: time="2024-12-13T01:36:13.553882175Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:36:13.555557 containerd[1584]: time="2024-12-13T01:36:13.555472516Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:36:13.561168 containerd[1584]: time="2024-12-13T01:36:13.561104011Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:36:13.567855 containerd[1584]: time="2024-12-13T01:36:13.567709233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:36:13.578205 containerd[1584]: time="2024-12-13T01:36:13.578128081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:36:13.578918 containerd[1584]: time="2024-12-13T01:36:13.578829688Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.095627647s" Dec 13 01:36:13.580779 containerd[1584]: time="2024-12-13T01:36:13.580662284Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
2.099303689s" Dec 13 01:36:13.591637 containerd[1584]: time="2024-12-13T01:36:13.591547668Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.111083545s" Dec 13 01:36:13.957736 containerd[1584]: time="2024-12-13T01:36:13.957252485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:36:13.957736 containerd[1584]: time="2024-12-13T01:36:13.957367877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:36:13.957736 containerd[1584]: time="2024-12-13T01:36:13.957395399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:13.957736 containerd[1584]: time="2024-12-13T01:36:13.957565525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:13.963567 containerd[1584]: time="2024-12-13T01:36:13.963433363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:36:13.964674 containerd[1584]: time="2024-12-13T01:36:13.964221395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:36:13.964674 containerd[1584]: time="2024-12-13T01:36:13.964292121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:13.964674 containerd[1584]: time="2024-12-13T01:36:13.964510280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:13.966068 containerd[1584]: time="2024-12-13T01:36:13.965872663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:36:13.966155 containerd[1584]: time="2024-12-13T01:36:13.966047849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:36:13.966155 containerd[1584]: time="2024-12-13T01:36:13.966104066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:13.966326 containerd[1584]: time="2024-12-13T01:36:13.966278832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:14.065331 containerd[1584]: time="2024-12-13T01:36:14.065269757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f16035604bee1cbf32152c6e0728072f46af38b980282dfeb5203d8312d3868e\"" Dec 13 01:36:14.067787 kubelet[2374]: E1213 01:36:14.067755 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:14.075034 containerd[1584]: time="2024-12-13T01:36:14.074782020Z" level=info msg="CreateContainer within sandbox \"f16035604bee1cbf32152c6e0728072f46af38b980282dfeb5203d8312d3868e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:36:14.109769 containerd[1584]: time="2024-12-13T01:36:14.109412166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aba8d26090a901fceb9031b2fb5e9c27,Namespace:kube-system,Attempt:0,} returns sandbox id \"a27be7a4b79e2311572de823a99d9e2e3d45e5466c0b656c1b1d5e34c94169f5\"" Dec 13 01:36:14.111718 containerd[1584]: time="2024-12-13T01:36:14.111639984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a9ed3e79de7669d968ab8d3bd540c31350abc914ec110abe45a4c3b7975f481\"" Dec 13 01:36:14.112578 kubelet[2374]: E1213 01:36:14.112504 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:14.113211 kubelet[2374]: W1213 01:36:14.112733 2374 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:14.113211 kubelet[2374]: E1213 01:36:14.112862 2374 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Dec 13 01:36:14.113677 kubelet[2374]: E1213 01:36:14.113606 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:14.116872 containerd[1584]: time="2024-12-13T01:36:14.116805070Z" level=info msg="CreateContainer within sandbox \"2a9ed3e79de7669d968ab8d3bd540c31350abc914ec110abe45a4c3b7975f481\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:36:14.117131 containerd[1584]: time="2024-12-13T01:36:14.116882408Z" level=info msg="CreateContainer within sandbox \"a27be7a4b79e2311572de823a99d9e2e3d45e5466c0b656c1b1d5e34c94169f5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:36:14.189099 containerd[1584]: time="2024-12-13T01:36:14.188912085Z" level=info msg="CreateContainer within sandbox \"f16035604bee1cbf32152c6e0728072f46af38b980282dfeb5203d8312d3868e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9b98fe23a055d3cdd0edbe03fd98c90438dbb9a623ae4059fe8ce3ffff7910b3\"" Dec 
13 01:36:14.190441 containerd[1584]: time="2024-12-13T01:36:14.190365009Z" level=info msg="StartContainer for \"9b98fe23a055d3cdd0edbe03fd98c90438dbb9a623ae4059fe8ce3ffff7910b3\"" Dec 13 01:36:14.318870 containerd[1584]: time="2024-12-13T01:36:14.318791623Z" level=info msg="StartContainer for \"9b98fe23a055d3cdd0edbe03fd98c90438dbb9a623ae4059fe8ce3ffff7910b3\" returns successfully" Dec 13 01:36:14.339437 containerd[1584]: time="2024-12-13T01:36:14.339344008Z" level=info msg="CreateContainer within sandbox \"a27be7a4b79e2311572de823a99d9e2e3d45e5466c0b656c1b1d5e34c94169f5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5b44d2777c66cf05fc00a14c3d7421ed3e19d36a68371ea58bd70f001330979e\"" Dec 13 01:36:14.340383 containerd[1584]: time="2024-12-13T01:36:14.340339866Z" level=info msg="StartContainer for \"5b44d2777c66cf05fc00a14c3d7421ed3e19d36a68371ea58bd70f001330979e\"" Dec 13 01:36:14.341021 containerd[1584]: time="2024-12-13T01:36:14.340933533Z" level=info msg="CreateContainer within sandbox \"2a9ed3e79de7669d968ab8d3bd540c31350abc914ec110abe45a4c3b7975f481\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c78fc9734bab33d4f158915e7c257deaf2c6f883196b5895f6921ed2222a1421\"" Dec 13 01:36:14.341861 containerd[1584]: time="2024-12-13T01:36:14.341364449Z" level=info msg="StartContainer for \"c78fc9734bab33d4f158915e7c257deaf2c6f883196b5895f6921ed2222a1421\"" Dec 13 01:36:14.636963 containerd[1584]: time="2024-12-13T01:36:14.635953132Z" level=info msg="StartContainer for \"c78fc9734bab33d4f158915e7c257deaf2c6f883196b5895f6921ed2222a1421\" returns successfully" Dec 13 01:36:14.636963 containerd[1584]: time="2024-12-13T01:36:14.636123768Z" level=info msg="StartContainer for \"5b44d2777c66cf05fc00a14c3d7421ed3e19d36a68371ea58bd70f001330979e\" returns successfully" Dec 13 01:36:14.647978 kubelet[2374]: E1213 01:36:14.647547 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:14.669222 kubelet[2374]: E1213 01:36:14.669167 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:14.669414 kubelet[2374]: E1213 01:36:14.669361 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:15.667172 kubelet[2374]: E1213 01:36:15.667134 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:15.983595 kubelet[2374]: I1213 01:36:15.983241 2374 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:36:16.049793 kubelet[2374]: I1213 01:36:16.049748 2374 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:36:16.175753 kubelet[2374]: E1213 01:36:16.173837 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:16.176981 kubelet[2374]: E1213 01:36:16.176922 2374 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181098b749f3b8dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:36:09.52478742 +0000 UTC m=+0.483513056,LastTimestamp:2024-12-13 01:36:09.52478742 +0000 UTC m=+0.483513056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:36:16.232177 kubelet[2374]: E1213 01:36:16.232116 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="6.4s" Dec 13 01:36:16.232177 kubelet[2374]: E1213 01:36:16.232321 2374 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181098b74aa7ab71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:36:09.536580465 +0000 UTC m=+0.495306101,LastTimestamp:2024-12-13 01:36:09.536580465 +0000 UTC m=+0.495306101,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:36:16.274813 kubelet[2374]: E1213 01:36:16.274633 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:16.375598 kubelet[2374]: E1213 01:36:16.375524 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:16.476767 kubelet[2374]: E1213 01:36:16.476690 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:16.577536 kubelet[2374]: E1213 01:36:16.577465 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:16.677873 kubelet[2374]: E1213 01:36:16.677797 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:16.778606 kubelet[2374]: E1213 01:36:16.778544 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:16.879695 kubelet[2374]: E1213 01:36:16.879506 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:16.979840 kubelet[2374]: E1213 01:36:16.979774 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:17.080553 kubelet[2374]: E1213 01:36:17.080480 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:17.181295 kubelet[2374]: E1213 01:36:17.181141 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:17.282094 kubelet[2374]: E1213 01:36:17.282037 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:17.382759 kubelet[2374]: E1213 01:36:17.382695 2374 kubelet_node_status.go:462] "Error getting the current node from 
lister" err="node \"localhost\" not found" Dec 13 01:36:17.483894 kubelet[2374]: E1213 01:36:17.483756 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:17.584753 kubelet[2374]: E1213 01:36:17.584688 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:17.685698 kubelet[2374]: E1213 01:36:17.685632 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:17.786379 kubelet[2374]: E1213 01:36:17.786212 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:17.886949 kubelet[2374]: E1213 01:36:17.886873 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:17.987875 kubelet[2374]: E1213 01:36:17.987805 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:18.088544 kubelet[2374]: E1213 01:36:18.088459 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:18.189234 kubelet[2374]: E1213 01:36:18.189154 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:18.289889 kubelet[2374]: E1213 01:36:18.289805 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:18.390579 kubelet[2374]: E1213 01:36:18.390374 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:18.490805 kubelet[2374]: E1213 01:36:18.490725 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:18.591897 kubelet[2374]: E1213 01:36:18.591837 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:18.610241 kubelet[2374]: E1213 01:36:18.610210 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:18.692764 kubelet[2374]: E1213 01:36:18.692611 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:18.792798 kubelet[2374]: E1213 01:36:18.792725 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:18.893419 kubelet[2374]: E1213 01:36:18.893361 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:18.994347 kubelet[2374]: E1213 01:36:18.994189 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:19.094975 kubelet[2374]: E1213 01:36:19.094926 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:19.195906 kubelet[2374]: E1213 01:36:19.195840 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:36:19.540394 kubelet[2374]: I1213 01:36:19.540314 2374 apiserver.go:52] "Watching apiserver" Dec 13 01:36:19.635284 kubelet[2374]: I1213 
01:36:19.635225 2374 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:36:20.675821 update_engine[1567]: I20241213 01:36:20.675670 1567 update_attempter.cc:509] Updating boot flags... Dec 13 01:36:20.711048 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2654) Dec 13 01:36:20.760063 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2656) Dec 13 01:36:20.795091 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2656) Dec 13 01:36:21.303893 systemd[1]: Reloading requested from client PID 2663 ('systemctl') (unit session-7.scope)... Dec 13 01:36:21.303922 systemd[1]: Reloading... Dec 13 01:36:21.390336 zram_generator::config[2703]: No configuration found. Dec 13 01:36:21.526317 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:36:21.613774 systemd[1]: Reloading finished in 309 ms. Dec 13 01:36:21.653538 kubelet[2374]: I1213 01:36:21.653498 2374 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:36:21.653563 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:36:21.676642 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:36:21.677192 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:36:21.688552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:36:21.865819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:36:21.871707 (kubelet)[2757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:36:21.929150 kubelet[2757]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:36:21.929150 kubelet[2757]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:36:21.929150 kubelet[2757]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:36:21.929566 kubelet[2757]: I1213 01:36:21.929232 2757 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:36:21.934840 kubelet[2757]: I1213 01:36:21.934806 2757 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:36:21.934840 kubelet[2757]: I1213 01:36:21.934836 2757 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:36:21.935107 kubelet[2757]: I1213 01:36:21.935092 2757 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:36:21.936537 kubelet[2757]: I1213 01:36:21.936514 2757 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
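Editor's note: unlike the first start, this kubelet instance finds an existing client certificate, so the CSR bootstrap that failed earlier is skipped and rotation proceeds in the background. A small read-only Go sketch (illustrative only, not the kubelet's own rotation logic; the path is copied from the "Loading cert/key pair" log line, which also confirms the file bundles both certificate and key) for checking that certificate's subject and validity window:

    // inspect_kubelet_cert.go - print the subject and validity window of
    // the kubelet client certificate referenced in the log above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Path taken from the "Loading cert/key pair" log line.
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            log.Fatal(err)
        }
        // The file holds cert and key; walk the PEM blocks to the cert.
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("subject: %s\nnot before: %s\nnot after:  %s\n",
                cert.Subject, cert.NotBefore, cert.NotAfter)
            return
        }
        log.Fatal("no CERTIFICATE block found")
    }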
Dec 13 01:36:21.939537 kubelet[2757]: I1213 01:36:21.939467 2757 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:36:21.948779 kubelet[2757]: I1213 01:36:21.948736 2757 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:36:21.952645 kubelet[2757]: I1213 01:36:21.952611 2757 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:36:21.952845 kubelet[2757]: I1213 01:36:21.952815 2757 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:36:21.952984 kubelet[2757]: I1213 01:36:21.952850 2757 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:36:21.952984 kubelet[2757]: I1213 01:36:21.952862 2757 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:36:21.952984 kubelet[2757]: I1213 01:36:21.952897 2757 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:36:21.953103 kubelet[2757]: I1213 01:36:21.953021 2757 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:36:21.953103 kubelet[2757]: I1213 01:36:21.953037 2757 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:36:21.953103 kubelet[2757]: I1213 01:36:21.953073 2757 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:36:21.953103 kubelet[2757]: I1213 01:36:21.953092 2757 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:36:21.956269 kubelet[2757]: I1213 01:36:21.956229 2757 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:36:21.956298 sudo[2772]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:36:21.956825 sudo[2772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:36:21.957293 kubelet[2757]: I1213 01:36:21.957261 2757 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:36:21.957849 kubelet[2757]: I1213 
01:36:21.957828 2757 server.go:1256] "Started kubelet" Dec 13 01:36:21.960218 kubelet[2757]: I1213 01:36:21.960171 2757 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:36:21.967057 kubelet[2757]: I1213 01:36:21.964202 2757 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:36:21.967057 kubelet[2757]: I1213 01:36:21.965095 2757 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:36:21.967057 kubelet[2757]: I1213 01:36:21.965369 2757 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:36:21.967057 kubelet[2757]: E1213 01:36:21.965590 2757 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:36:21.967057 kubelet[2757]: I1213 01:36:21.966361 2757 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:36:21.968607 kubelet[2757]: I1213 01:36:21.968560 2757 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:36:21.968662 kubelet[2757]: I1213 01:36:21.968656 2757 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:36:21.968828 kubelet[2757]: I1213 01:36:21.968792 2757 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:36:21.970552 kubelet[2757]: I1213 01:36:21.970314 2757 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:36:21.972106 kubelet[2757]: I1213 01:36:21.970921 2757 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:36:21.973183 kubelet[2757]: I1213 01:36:21.973090 2757 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:36:21.981737 kubelet[2757]: I1213 01:36:21.981680 2757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:36:21.983020 kubelet[2757]: I1213 01:36:21.982983 2757 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:36:21.983066 kubelet[2757]: I1213 01:36:21.983035 2757 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:36:21.983066 kubelet[2757]: I1213 01:36:21.983054 2757 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:36:21.983129 kubelet[2757]: E1213 01:36:21.983100 2757 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:36:22.038049 kubelet[2757]: I1213 01:36:22.037732 2757 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:36:22.038049 kubelet[2757]: I1213 01:36:22.037765 2757 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:36:22.038049 kubelet[2757]: I1213 01:36:22.037790 2757 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:36:22.038049 kubelet[2757]: I1213 01:36:22.038020 2757 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:36:22.038049 kubelet[2757]: I1213 01:36:22.038047 2757 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:36:22.038049 kubelet[2757]: I1213 01:36:22.038058 2757 policy_none.go:49] "None policy: Start" Dec 13 01:36:22.038834 kubelet[2757]: I1213 01:36:22.038814 2757 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:36:22.038881 kubelet[2757]: I1213 01:36:22.038843 2757 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:36:22.039062 kubelet[2757]: I1213 01:36:22.039045 2757 state_mem.go:75] "Updated machine memory state" Dec 13 01:36:22.042446 kubelet[2757]: I1213 01:36:22.041031 2757 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:36:22.042446 kubelet[2757]: I1213 01:36:22.041392 2757 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:36:22.074032 kubelet[2757]: I1213 01:36:22.073989 2757 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:36:22.084234 kubelet[2757]: I1213 01:36:22.084187 2757 topology_manager.go:215] "Topology Admit Handler" podUID="aba8d26090a901fceb9031b2fb5e9c27" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:36:22.084633 kubelet[2757]: I1213 01:36:22.084603 2757 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:36:22.084663 kubelet[2757]: I1213 01:36:22.084655 2757 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:36:22.269561 kubelet[2757]: I1213 01:36:22.269403 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aba8d26090a901fceb9031b2fb5e9c27-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aba8d26090a901fceb9031b2fb5e9c27\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:36:22.269561 kubelet[2757]: I1213 01:36:22.269458 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:22.269561 kubelet[2757]: I1213 01:36:22.269481 2757 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:22.269561 kubelet[2757]: I1213 01:36:22.269501 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:22.269561 kubelet[2757]: I1213 01:36:22.269519 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aba8d26090a901fceb9031b2fb5e9c27-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aba8d26090a901fceb9031b2fb5e9c27\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:36:22.269920 kubelet[2757]: I1213 01:36:22.269540 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aba8d26090a901fceb9031b2fb5e9c27-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aba8d26090a901fceb9031b2fb5e9c27\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:36:22.269920 kubelet[2757]: I1213 01:36:22.269570 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:22.269920 kubelet[2757]: I1213 01:36:22.269592 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:36:22.269920 kubelet[2757]: I1213 01:36:22.269616 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:36:22.340564 kubelet[2757]: I1213 01:36:22.340505 2757 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:36:22.340723 kubelet[2757]: I1213 01:36:22.340658 2757 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:36:22.468150 sudo[2772]: pam_unix(sudo:session): session closed for user root Dec 13 01:36:22.641971 kubelet[2757]: E1213 01:36:22.641873 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:22.641971 kubelet[2757]: E1213 01:36:22.641895 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:22.641971 kubelet[2757]: E1213 01:36:22.641944 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:22.954754 kubelet[2757]: I1213 01:36:22.954564 2757 apiserver.go:52] "Watching apiserver" Dec 13 01:36:22.969017 kubelet[2757]: I1213 01:36:22.968952 2757 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:36:23.010970 kubelet[2757]: E1213 01:36:23.010869 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:23.010970 kubelet[2757]: E1213 01:36:23.010872 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:23.018855 kubelet[2757]: E1213 01:36:23.018151 2757 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:36:23.018855 kubelet[2757]: E1213 01:36:23.018628 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:23.041072 kubelet[2757]: I1213 01:36:23.040977 2757 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.040916187 podStartE2EDuration="1.040916187s" podCreationTimestamp="2024-12-13 01:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:36:23.031200867 +0000 UTC m=+1.154446300" watchObservedRunningTime="2024-12-13 01:36:23.040916187 +0000 UTC m=+1.164161620" Dec 13 01:36:23.096033 kubelet[2757]: I1213 01:36:23.095968 2757 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.095920826 podStartE2EDuration="1.095920826s" podCreationTimestamp="2024-12-13 01:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:36:23.041230283 +0000 UTC m=+1.164475716" watchObservedRunningTime="2024-12-13 01:36:23.095920826 +0000 UTC m=+1.219166259" Dec 13 01:36:23.096283 kubelet[2757]: I1213 01:36:23.096143 2757 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.096127477 podStartE2EDuration="1.096127477s" podCreationTimestamp="2024-12-13 01:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:36:23.096040091 +0000 UTC m=+1.219285525" watchObservedRunningTime="2024-12-13 01:36:23.096127477 +0000 UTC m=+1.219372910" Dec 13 01:36:24.012331 kubelet[2757]: E1213 01:36:24.012282 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:24.315261 sudo[1773]: pam_unix(sudo:session): session closed for user root Dec 13 01:36:24.317500 sshd[1766]: pam_unix(sshd:session): session closed for user 
core Dec 13 01:36:24.321978 systemd[1]: sshd@6-10.0.0.111:22-10.0.0.1:40952.service: Deactivated successfully. Dec 13 01:36:24.324723 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:36:24.325501 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:36:24.326496 systemd-logind[1558]: Removed session 7. Dec 13 01:36:25.014153 kubelet[2757]: E1213 01:36:25.014116 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:25.383959 kubelet[2757]: E1213 01:36:25.383865 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:26.016132 kubelet[2757]: E1213 01:36:26.016032 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:27.659494 kubelet[2757]: E1213 01:36:27.659433 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:28.020033 kubelet[2757]: E1213 01:36:28.019865 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:33.892708 kubelet[2757]: E1213 01:36:33.892673 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:33.948333 kubelet[2757]: I1213 01:36:33.948300 2757 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:36:33.948730 containerd[1584]: time="2024-12-13T01:36:33.948673096Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
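The recurring kubelet dns.go errors above are the resolver limit at work: both the glibc resolver and kubelet honor at most three nameserver entries, so when the node's /etc/resolv.conf lists more than three, kubelet applies the first three (1.1.1.1 1.0.0.1 8.8.8.8, as logged) and warns that the rest were omitted, once per pod DNS setup. A plausible reconstruction of the node resolv.conf follows; the first three entries are taken from the log, and the dropped entries are not recorded, so they appear only as a placeholder.

    # /etc/resolv.conf (reconstruction; first three entries from the log)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    # nameserver ...   <- any entry past the third is omitted by kubelet,
    #                     producing the "Nameserver limits exceeded" error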
Dec 13 01:36:33.949306 kubelet[2757]: I1213 01:36:33.948881 2757 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:36:34.029702 kubelet[2757]: E1213 01:36:34.029660 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:34.788494 kubelet[2757]: I1213 01:36:34.787416 2757 topology_manager.go:215] "Topology Admit Handler" podUID="874f9e5b-49e1-480e-b5c9-466e38403ccc" podNamespace="kube-system" podName="kube-proxy-npjjq" Dec 13 01:36:34.791897 kubelet[2757]: I1213 01:36:34.791850 2757 topology_manager.go:215] "Topology Admit Handler" podUID="cc18755d-eda9-4561-9d06-7e9d094f3933" podNamespace="kube-system" podName="cilium-2m4ml" Dec 13 01:36:34.852790 kubelet[2757]: I1213 01:36:34.852727 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-host-proc-sys-kernel\") pod \"cilium-2m4ml\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:34.852790 kubelet[2757]: I1213 01:36:34.852803 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-cni-path\") pod \"cilium-2m4ml\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:34.853109 kubelet[2757]: I1213 01:36:34.852834 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/874f9e5b-49e1-480e-b5c9-466e38403ccc-xtables-lock\") pod \"kube-proxy-npjjq\" (UID: \"874f9e5b-49e1-480e-b5c9-466e38403ccc\") " pod="kube-system/kube-proxy-npjjq" Dec 13 01:36:34.853109 kubelet[2757]: I1213 01:36:34.852938 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvc6k\" (UniqueName: \"kubernetes.io/projected/874f9e5b-49e1-480e-b5c9-466e38403ccc-kube-api-access-pvc6k\") pod \"kube-proxy-npjjq\" (UID: \"874f9e5b-49e1-480e-b5c9-466e38403ccc\") " pod="kube-system/kube-proxy-npjjq" Dec 13 01:36:34.853109 kubelet[2757]: I1213 01:36:34.853033 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cc18755d-eda9-4561-9d06-7e9d094f3933-clustermesh-secrets\") pod \"cilium-2m4ml\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:34.853109 kubelet[2757]: I1213 01:36:34.853107 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-cilium-run\") pod \"cilium-2m4ml\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:34.853277 kubelet[2757]: I1213 01:36:34.853153 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cc18755d-eda9-4561-9d06-7e9d094f3933-hubble-tls\") pod \"cilium-2m4ml\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:34.853277 kubelet[2757]: I1213 01:36:34.853197 2757 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-hostproc\") pod \"cilium-2m4ml\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:34.853277 kubelet[2757]: I1213 01:36:34.853221 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-host-proc-sys-net\") pod \"cilium-2m4ml\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:34.853408 kubelet[2757]: I1213 01:36:34.853277 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-etc-cni-netd\") pod \"cilium-2m4ml\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:34.853408 kubelet[2757]: I1213 01:36:34.853319 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-lib-modules\") pod \"cilium-2m4ml\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:34.853408 kubelet[2757]: I1213 01:36:34.853358 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc18755d-eda9-4561-9d06-7e9d094f3933-cilium-config-path\") pod \"cilium-2m4ml\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:34.853408 kubelet[2757]: I1213 01:36:34.853398 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/874f9e5b-49e1-480e-b5c9-466e38403ccc-lib-modules\") pod \"kube-proxy-npjjq\" (UID: \"874f9e5b-49e1-480e-b5c9-466e38403ccc\") " pod="kube-system/kube-proxy-npjjq" Dec 13 01:36:34.853558 kubelet[2757]: I1213 01:36:34.853430 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-bpf-maps\") pod \"cilium-2m4ml\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:34.853558 kubelet[2757]: I1213 01:36:34.853461 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-cilium-cgroup\") pod \"cilium-2m4ml\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:34.853558 kubelet[2757]: I1213 01:36:34.853492 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/874f9e5b-49e1-480e-b5c9-466e38403ccc-kube-proxy\") pod \"kube-proxy-npjjq\" (UID: \"874f9e5b-49e1-480e-b5c9-466e38403ccc\") " pod="kube-system/kube-proxy-npjjq" Dec 13 01:36:34.853558 kubelet[2757]: I1213 01:36:34.853539 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-xtables-lock\") pod \"cilium-2m4ml\" 
(UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:34.853675 kubelet[2757]: I1213 01:36:34.853584 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m5cv\" (UniqueName: \"kubernetes.io/projected/cc18755d-eda9-4561-9d06-7e9d094f3933-kube-api-access-6m5cv\") pod \"cilium-2m4ml\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") " pod="kube-system/cilium-2m4ml" Dec 13 01:36:35.096684 kubelet[2757]: E1213 01:36:35.096640 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:35.097513 containerd[1584]: time="2024-12-13T01:36:35.097463610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-npjjq,Uid:874f9e5b-49e1-480e-b5c9-466e38403ccc,Namespace:kube-system,Attempt:0,}" Dec 13 01:36:35.099808 kubelet[2757]: E1213 01:36:35.099773 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:35.100356 containerd[1584]: time="2024-12-13T01:36:35.100295070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2m4ml,Uid:cc18755d-eda9-4561-9d06-7e9d094f3933,Namespace:kube-system,Attempt:0,}" Dec 13 01:36:35.156380 kubelet[2757]: I1213 01:36:35.156312 2757 topology_manager.go:215] "Topology Admit Handler" podUID="4d4f937f-2286-4aa7-8f97-000503f7ee73" podNamespace="kube-system" podName="cilium-operator-5cc964979-gkmsg" Dec 13 01:36:35.218251 containerd[1584]: time="2024-12-13T01:36:35.218051976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:36:35.218251 containerd[1584]: time="2024-12-13T01:36:35.218157194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:36:35.218251 containerd[1584]: time="2024-12-13T01:36:35.218173184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:35.219195 containerd[1584]: time="2024-12-13T01:36:35.219080164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:35.219279 containerd[1584]: time="2024-12-13T01:36:35.219146439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:36:35.219350 containerd[1584]: time="2024-12-13T01:36:35.219262007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:36:35.219588 containerd[1584]: time="2024-12-13T01:36:35.219283898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:35.219588 containerd[1584]: time="2024-12-13T01:36:35.219444591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:35.259941 kubelet[2757]: I1213 01:36:35.259839 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d4f937f-2286-4aa7-8f97-000503f7ee73-cilium-config-path\") pod \"cilium-operator-5cc964979-gkmsg\" (UID: \"4d4f937f-2286-4aa7-8f97-000503f7ee73\") " pod="kube-system/cilium-operator-5cc964979-gkmsg" Dec 13 01:36:35.259941 kubelet[2757]: I1213 01:36:35.259898 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2txpc\" (UniqueName: \"kubernetes.io/projected/4d4f937f-2286-4aa7-8f97-000503f7ee73-kube-api-access-2txpc\") pod \"cilium-operator-5cc964979-gkmsg\" (UID: \"4d4f937f-2286-4aa7-8f97-000503f7ee73\") " pod="kube-system/cilium-operator-5cc964979-gkmsg" Dec 13 01:36:35.270128 containerd[1584]: time="2024-12-13T01:36:35.270060833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2m4ml,Uid:cc18755d-eda9-4561-9d06-7e9d094f3933,Namespace:kube-system,Attempt:0,} returns sandbox id \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\"" Dec 13 01:36:35.272362 kubelet[2757]: E1213 01:36:35.272332 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:35.274072 containerd[1584]: time="2024-12-13T01:36:35.274039827Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:36:35.280343 containerd[1584]: time="2024-12-13T01:36:35.280308549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-npjjq,Uid:874f9e5b-49e1-480e-b5c9-466e38403ccc,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fd4c0a222674c046dbe3dac5bff2bac512b07b86fc1833a710d4951858c078a\"" Dec 13 01:36:35.280923 kubelet[2757]: E1213 01:36:35.280896 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:35.282977 containerd[1584]: time="2024-12-13T01:36:35.282935262Z" level=info msg="CreateContainer within sandbox \"6fd4c0a222674c046dbe3dac5bff2bac512b07b86fc1833a710d4951858c078a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:36:35.461903 kubelet[2757]: E1213 01:36:35.461751 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:35.462503 containerd[1584]: time="2024-12-13T01:36:35.462459588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gkmsg,Uid:4d4f937f-2286-4aa7-8f97-000503f7ee73,Namespace:kube-system,Attempt:0,}" Dec 13 01:36:35.471964 containerd[1584]: time="2024-12-13T01:36:35.471892686Z" level=info msg="CreateContainer within sandbox \"6fd4c0a222674c046dbe3dac5bff2bac512b07b86fc1833a710d4951858c078a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ac34a8bca5481dace0590e3f0d0fa3cbe060ffe9125d7e2a5d7e5af89373db39\"" Dec 13 01:36:35.472783 containerd[1584]: time="2024-12-13T01:36:35.472741798Z" level=info msg="StartContainer for \"ac34a8bca5481dace0590e3f0d0fa3cbe060ffe9125d7e2a5d7e5af89373db39\"" Dec 13 01:36:35.556181 containerd[1584]: 
time="2024-12-13T01:36:35.556116736Z" level=info msg="StartContainer for \"ac34a8bca5481dace0590e3f0d0fa3cbe060ffe9125d7e2a5d7e5af89373db39\" returns successfully" Dec 13 01:36:35.579599 containerd[1584]: time="2024-12-13T01:36:35.579318009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:36:35.579599 containerd[1584]: time="2024-12-13T01:36:35.579434038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:36:35.579775 containerd[1584]: time="2024-12-13T01:36:35.579727301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:35.580719 containerd[1584]: time="2024-12-13T01:36:35.579873697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:35.657383 containerd[1584]: time="2024-12-13T01:36:35.657219666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gkmsg,Uid:4d4f937f-2286-4aa7-8f97-000503f7ee73,Namespace:kube-system,Attempt:0,} returns sandbox id \"a218f9b8351fdf7737882e368962bea042462ee4d1ed173f2c87572275369092\"" Dec 13 01:36:35.658777 kubelet[2757]: E1213 01:36:35.658741 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:36.041684 kubelet[2757]: E1213 01:36:36.041583 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:42.032131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3014214599.mount: Deactivated successfully. 
Dec 13 01:36:44.393537 containerd[1584]: time="2024-12-13T01:36:44.393380829Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:44.394340 containerd[1584]: time="2024-12-13T01:36:44.394190653Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734711" Dec 13 01:36:44.395578 containerd[1584]: time="2024-12-13T01:36:44.395516676Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:44.397533 containerd[1584]: time="2024-12-13T01:36:44.397484629Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.123395951s" Dec 13 01:36:44.397533 containerd[1584]: time="2024-12-13T01:36:44.397527780Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:36:44.399171 containerd[1584]: time="2024-12-13T01:36:44.399139923Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:36:44.400898 containerd[1584]: time="2024-12-13T01:36:44.400851303Z" level=info msg="CreateContainer within sandbox \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:36:44.417751 containerd[1584]: time="2024-12-13T01:36:44.417682154Z" level=info msg="CreateContainer within sandbox \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da\"" Dec 13 01:36:44.418401 containerd[1584]: time="2024-12-13T01:36:44.418368746Z" level=info msg="StartContainer for \"d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da\"" Dec 13 01:36:44.482662 containerd[1584]: time="2024-12-13T01:36:44.482624233Z" level=info msg="StartContainer for \"d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da\" returns successfully" Dec 13 01:36:44.563356 systemd-resolved[1469]: Under memory pressure, flushing caches. Dec 13 01:36:44.563427 systemd-resolved[1469]: Flushed all caches. Dec 13 01:36:44.565039 systemd-journald[1160]: Under memory pressure, flushing caches. 
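The pull above fetched 166,734,711 bytes in 9.123395951s, and because the image was requested by digest the result carries an empty repo tag (repo tag "" in the log line). A minimal sketch of the same pull through the containerd Go client, assuming the default socket path and the k8s.io namespace that the CRI plugin uses; illustration only, the digest is copied from the log.

    // Minimal sketch: pull the cilium image by digest with the containerd
    // client, as the CRI plugin did above.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Pulling by digest pins the exact image; no tag is recorded,
    	// which matches the empty repo tag in the log line above.
    	ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
    	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pulled:", img.Name())
    }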
Dec 13 01:36:45.074044 kubelet[2757]: E1213 01:36:45.073028 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:45.127249 kubelet[2757]: I1213 01:36:45.127200 2757 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-npjjq" podStartSLOduration=11.1271463 podStartE2EDuration="11.1271463s" podCreationTimestamp="2024-12-13 01:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:36:36.06079175 +0000 UTC m=+14.184037183" watchObservedRunningTime="2024-12-13 01:36:45.1271463 +0000 UTC m=+23.250391733" Dec 13 01:36:45.141913 containerd[1584]: time="2024-12-13T01:36:45.140047836Z" level=info msg="shim disconnected" id=d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da namespace=k8s.io Dec 13 01:36:45.142159 containerd[1584]: time="2024-12-13T01:36:45.141925437Z" level=warning msg="cleaning up after shim disconnected" id=d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da namespace=k8s.io Dec 13 01:36:45.142159 containerd[1584]: time="2024-12-13T01:36:45.141951376Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:36:45.413461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da-rootfs.mount: Deactivated successfully. Dec 13 01:36:46.076363 kubelet[2757]: E1213 01:36:46.076310 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:46.078821 containerd[1584]: time="2024-12-13T01:36:46.078773053Z" level=info msg="CreateContainer within sandbox \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:36:46.105555 containerd[1584]: time="2024-12-13T01:36:46.105496065Z" level=info msg="CreateContainer within sandbox \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933\"" Dec 13 01:36:46.106194 containerd[1584]: time="2024-12-13T01:36:46.106157369Z" level=info msg="StartContainer for \"ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933\"" Dec 13 01:36:46.174480 containerd[1584]: time="2024-12-13T01:36:46.174404277Z" level=info msg="StartContainer for \"ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933\" returns successfully" Dec 13 01:36:46.190634 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:36:46.191123 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:36:46.191216 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:36:46.200802 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:36:46.219588 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 01:36:46.228383 containerd[1584]: time="2024-12-13T01:36:46.228295639Z" level=info msg="shim disconnected" id=ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933 namespace=k8s.io Dec 13 01:36:46.228383 containerd[1584]: time="2024-12-13T01:36:46.228370179Z" level=warning msg="cleaning up after shim disconnected" id=ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933 namespace=k8s.io Dec 13 01:36:46.228383 containerd[1584]: time="2024-12-13T01:36:46.228381921Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:36:46.415886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933-rootfs.mount: Deactivated successfully. Dec 13 01:36:46.611408 systemd-resolved[1469]: Under memory pressure, flushing caches. Dec 13 01:36:46.611439 systemd-resolved[1469]: Flushed all caches. Dec 13 01:36:46.613032 systemd-journald[1160]: Under memory pressure, flushing caches. Dec 13 01:36:47.080189 kubelet[2757]: E1213 01:36:47.080153 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:47.082844 containerd[1584]: time="2024-12-13T01:36:47.082777897Z" level=info msg="CreateContainer within sandbox \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:36:47.238258 containerd[1584]: time="2024-12-13T01:36:47.238195700Z" level=info msg="CreateContainer within sandbox \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc\"" Dec 13 01:36:47.238911 containerd[1584]: time="2024-12-13T01:36:47.238876028Z" level=info msg="StartContainer for \"c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc\"" Dec 13 01:36:47.316121 containerd[1584]: time="2024-12-13T01:36:47.316069455Z" level=info msg="StartContainer for \"c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc\" returns successfully" Dec 13 01:36:47.413673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc-rootfs.mount: Deactivated successfully. Dec 13 01:36:47.572496 containerd[1584]: time="2024-12-13T01:36:47.572419787Z" level=info msg="shim disconnected" id=c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc namespace=k8s.io Dec 13 01:36:47.572496 containerd[1584]: time="2024-12-13T01:36:47.572489528Z" level=warning msg="cleaning up after shim disconnected" id=c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc namespace=k8s.io Dec 13 01:36:47.572496 containerd[1584]: time="2024-12-13T01:36:47.572503063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:36:47.662454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161597628.mount: Deactivated successfully. 
Dec 13 01:36:48.084924 kubelet[2757]: E1213 01:36:48.084886 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:48.088105 containerd[1584]: time="2024-12-13T01:36:48.087931670Z" level=info msg="CreateContainer within sandbox \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:36:48.129281 containerd[1584]: time="2024-12-13T01:36:48.129232910Z" level=info msg="CreateContainer within sandbox \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba\"" Dec 13 01:36:48.131077 containerd[1584]: time="2024-12-13T01:36:48.130123295Z" level=info msg="StartContainer for \"a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba\"" Dec 13 01:36:48.205687 containerd[1584]: time="2024-12-13T01:36:48.205467542Z" level=info msg="StartContainer for \"a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba\" returns successfully" Dec 13 01:36:48.362922 containerd[1584]: time="2024-12-13T01:36:48.362758709Z" level=info msg="shim disconnected" id=a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba namespace=k8s.io Dec 13 01:36:48.362922 containerd[1584]: time="2024-12-13T01:36:48.362820074Z" level=warning msg="cleaning up after shim disconnected" id=a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba namespace=k8s.io Dec 13 01:36:48.362922 containerd[1584]: time="2024-12-13T01:36:48.362829381Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:36:48.813415 systemd[1]: Started sshd@7-10.0.0.111:22-10.0.0.1:51370.service - OpenSSH per-connection server daemon (10.0.0.1:51370). Dec 13 01:36:48.857387 sshd[3404]: Accepted publickey for core from 10.0.0.1 port 51370 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:36:48.859232 sshd[3404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:48.863544 systemd-logind[1558]: New session 8 of user core. Dec 13 01:36:48.878454 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:36:49.024994 sshd[3404]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:49.029311 systemd[1]: sshd@7-10.0.0.111:22-10.0.0.1:51370.service: Deactivated successfully. Dec 13 01:36:49.032147 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:36:49.032213 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:36:49.033293 systemd-logind[1558]: Removed session 8. 
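mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state above are Cilium's init containers: each runs to completion inside the sandbox created earlier (014c3c93…), which is why every start is followed shortly by a "shim disconnected" cleanup, before the long-running cilium-agent container is created below. A schematic fragment of the pod layout this implies; the container names come from the log, everything else is elided and assumed from the stock Cilium DaemonSet.

    # Schematic pod-spec fragment (names from the log, rest assumed)
    initContainers:
    - name: mount-cgroup             # exited above; shim cleaned up
    - name: apply-sysctl-overwrites  # coincides with the systemd-sysctl restart above
    - name: mount-bpf-fs
    - name: clean-cilium-state
    containers:
    - name: cilium-agent             # started below and keeps running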
Dec 13 01:36:49.089759 kubelet[2757]: E1213 01:36:49.089617 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:49.092908 containerd[1584]: time="2024-12-13T01:36:49.092668707Z" level=info msg="CreateContainer within sandbox \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:36:49.152467 containerd[1584]: time="2024-12-13T01:36:49.152408293Z" level=info msg="CreateContainer within sandbox \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\"" Dec 13 01:36:49.154100 containerd[1584]: time="2024-12-13T01:36:49.153049998Z" level=info msg="StartContainer for \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\"" Dec 13 01:36:49.225909 containerd[1584]: time="2024-12-13T01:36:49.225846005Z" level=info msg="StartContainer for \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\" returns successfully" Dec 13 01:36:49.252143 containerd[1584]: time="2024-12-13T01:36:49.252086066Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:49.254045 containerd[1584]: time="2024-12-13T01:36:49.253540299Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907229" Dec 13 01:36:49.256708 containerd[1584]: time="2024-12-13T01:36:49.256673998Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:36:49.258755 containerd[1584]: time="2024-12-13T01:36:49.258698905Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.859519157s" Dec 13 01:36:49.258822 containerd[1584]: time="2024-12-13T01:36:49.258761302Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:36:49.266064 containerd[1584]: time="2024-12-13T01:36:49.265974390Z" level=info msg="CreateContainer within sandbox \"a218f9b8351fdf7737882e368962bea042462ee4d1ed173f2c87572275369092\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:36:49.404505 containerd[1584]: time="2024-12-13T01:36:49.404345049Z" level=info msg="CreateContainer within sandbox \"a218f9b8351fdf7737882e368962bea042462ee4d1ed173f2c87572275369092\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\"" Dec 13 01:36:49.405550 containerd[1584]: time="2024-12-13T01:36:49.405027612Z" level=info msg="StartContainer for 
\"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\"" Dec 13 01:36:49.413116 kubelet[2757]: I1213 01:36:49.412769 2757 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:36:49.448078 kubelet[2757]: I1213 01:36:49.444017 2757 topology_manager.go:215] "Topology Admit Handler" podUID="82087f3f-607e-4f4d-bc27-7c7f2f861903" podNamespace="kube-system" podName="coredns-76f75df574-7wl5n" Dec 13 01:36:49.448078 kubelet[2757]: I1213 01:36:49.446828 2757 topology_manager.go:215] "Topology Admit Handler" podUID="a1597ea5-ead4-4e83-8603-4a304f41b1f0" podNamespace="kube-system" podName="coredns-76f75df574-fcp99" Dec 13 01:36:49.450979 systemd[1]: run-containerd-runc-k8s.io-d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555-runc.Pt2dyz.mount: Deactivated successfully. Dec 13 01:36:49.531229 containerd[1584]: time="2024-12-13T01:36:49.531173925Z" level=info msg="StartContainer for \"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\" returns successfully" Dec 13 01:36:49.610702 kubelet[2757]: I1213 01:36:49.610574 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82087f3f-607e-4f4d-bc27-7c7f2f861903-config-volume\") pod \"coredns-76f75df574-7wl5n\" (UID: \"82087f3f-607e-4f4d-bc27-7c7f2f861903\") " pod="kube-system/coredns-76f75df574-7wl5n" Dec 13 01:36:49.610702 kubelet[2757]: I1213 01:36:49.610718 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5mjm\" (UniqueName: \"kubernetes.io/projected/82087f3f-607e-4f4d-bc27-7c7f2f861903-kube-api-access-z5mjm\") pod \"coredns-76f75df574-7wl5n\" (UID: \"82087f3f-607e-4f4d-bc27-7c7f2f861903\") " pod="kube-system/coredns-76f75df574-7wl5n" Dec 13 01:36:49.610702 kubelet[2757]: I1213 01:36:49.610749 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chj7h\" (UniqueName: \"kubernetes.io/projected/a1597ea5-ead4-4e83-8603-4a304f41b1f0-kube-api-access-chj7h\") pod \"coredns-76f75df574-fcp99\" (UID: \"a1597ea5-ead4-4e83-8603-4a304f41b1f0\") " pod="kube-system/coredns-76f75df574-fcp99" Dec 13 01:36:49.611103 kubelet[2757]: I1213 01:36:49.610786 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1597ea5-ead4-4e83-8603-4a304f41b1f0-config-volume\") pod \"coredns-76f75df574-fcp99\" (UID: \"a1597ea5-ead4-4e83-8603-4a304f41b1f0\") " pod="kube-system/coredns-76f75df574-fcp99" Dec 13 01:36:49.761633 kubelet[2757]: E1213 01:36:49.760838 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:49.761830 containerd[1584]: time="2024-12-13T01:36:49.761750122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7wl5n,Uid:82087f3f-607e-4f4d-bc27-7c7f2f861903,Namespace:kube-system,Attempt:0,}" Dec 13 01:36:49.764729 kubelet[2757]: E1213 01:36:49.764707 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:49.765603 containerd[1584]: time="2024-12-13T01:36:49.765379675Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-fcp99,Uid:a1597ea5-ead4-4e83-8603-4a304f41b1f0,Namespace:kube-system,Attempt:0,}" Dec 13 01:36:50.093266 kubelet[2757]: E1213 01:36:50.093103 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:50.097449 kubelet[2757]: E1213 01:36:50.097429 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:50.273796 kubelet[2757]: I1213 01:36:50.273728 2757 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-gkmsg" podStartSLOduration=1.673537544 podStartE2EDuration="15.273686609s" podCreationTimestamp="2024-12-13 01:36:35 +0000 UTC" firstStartedPulling="2024-12-13 01:36:35.659431206 +0000 UTC m=+13.782676630" lastFinishedPulling="2024-12-13 01:36:49.259580262 +0000 UTC m=+27.382825695" observedRunningTime="2024-12-13 01:36:50.272864154 +0000 UTC m=+28.396109597" watchObservedRunningTime="2024-12-13 01:36:50.273686609 +0000 UTC m=+28.396932042" Dec 13 01:36:50.737696 kubelet[2757]: I1213 01:36:50.737024 2757 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2m4ml" podStartSLOduration=7.612521023 podStartE2EDuration="16.736948987s" podCreationTimestamp="2024-12-13 01:36:34 +0000 UTC" firstStartedPulling="2024-12-13 01:36:35.273551907 +0000 UTC m=+13.396797340" lastFinishedPulling="2024-12-13 01:36:44.397979871 +0000 UTC m=+22.521225304" observedRunningTime="2024-12-13 01:36:50.736770803 +0000 UTC m=+28.860016256" watchObservedRunningTime="2024-12-13 01:36:50.736948987 +0000 UTC m=+28.860194420" Dec 13 01:36:51.099561 kubelet[2757]: E1213 01:36:51.099501 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:51.100156 kubelet[2757]: E1213 01:36:51.099916 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:52.102235 kubelet[2757]: E1213 01:36:52.102178 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:52.418685 systemd-networkd[1250]: cilium_host: Link UP Dec 13 01:36:52.418893 systemd-networkd[1250]: cilium_net: Link UP Dec 13 01:36:52.418898 systemd-networkd[1250]: cilium_net: Gained carrier Dec 13 01:36:52.419174 systemd-networkd[1250]: cilium_host: Gained carrier Dec 13 01:36:52.477529 systemd-networkd[1250]: cilium_host: Gained IPv6LL Dec 13 01:36:52.558321 systemd-networkd[1250]: cilium_vxlan: Link UP Dec 13 01:36:52.558332 systemd-networkd[1250]: cilium_vxlan: Gained carrier Dec 13 01:36:52.797058 kernel: NET: Registered PF_ALG protocol family Dec 13 01:36:53.139260 systemd-networkd[1250]: cilium_net: Gained IPv6LL Dec 13 01:36:53.543320 systemd-networkd[1250]: lxc_health: Link UP Dec 13 01:36:53.552250 systemd-networkd[1250]: lxc_health: Gained carrier Dec 13 01:36:53.827691 systemd-networkd[1250]: lxc65822b8fd385: Link UP Dec 13 01:36:53.839042 kernel: eth0: renamed from tmp4d331 Dec 13 01:36:53.851646 systemd-networkd[1250]: lxc65822b8fd385: Gained carrier Dec 13 01:36:53.857461 
systemd-networkd[1250]: lxc0de31b9bd73d: Link UP Dec 13 01:36:53.867508 kernel: eth0: renamed from tmpd0a18 Dec 13 01:36:53.876751 systemd-networkd[1250]: lxc0de31b9bd73d: Gained carrier Dec 13 01:36:54.034389 systemd[1]: Started sshd@8-10.0.0.111:22-10.0.0.1:51376.service - OpenSSH per-connection server daemon (10.0.0.1:51376). Dec 13 01:36:54.075943 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 51376 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:36:54.078412 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:54.086608 systemd-logind[1558]: New session 9 of user core. Dec 13 01:36:54.092450 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:36:54.099211 systemd-networkd[1250]: cilium_vxlan: Gained IPv6LL Dec 13 01:36:54.256802 sshd[3969]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:54.262353 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:36:54.264107 systemd[1]: sshd@8-10.0.0.111:22-10.0.0.1:51376.service: Deactivated successfully. Dec 13 01:36:54.268743 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:36:54.271204 systemd-logind[1558]: Removed session 9. Dec 13 01:36:55.102280 kubelet[2757]: E1213 01:36:55.102230 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:55.315214 systemd-networkd[1250]: lxc_health: Gained IPv6LL Dec 13 01:36:55.379206 systemd-networkd[1250]: lxc65822b8fd385: Gained IPv6LL Dec 13 01:36:55.507167 systemd-networkd[1250]: lxc0de31b9bd73d: Gained IPv6LL Dec 13 01:36:57.618876 containerd[1584]: time="2024-12-13T01:36:57.618720946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:36:57.618876 containerd[1584]: time="2024-12-13T01:36:57.618804834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:36:57.618876 containerd[1584]: time="2024-12-13T01:36:57.618822437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:57.619489 containerd[1584]: time="2024-12-13T01:36:57.619014387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:57.620915 containerd[1584]: time="2024-12-13T01:36:57.620750638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:36:57.620915 containerd[1584]: time="2024-12-13T01:36:57.620836149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:36:57.620915 containerd[1584]: time="2024-12-13T01:36:57.620852179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:57.621134 containerd[1584]: time="2024-12-13T01:36:57.620976542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:36:57.649288 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:36:57.652225 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:36:57.682368 containerd[1584]: time="2024-12-13T01:36:57.682324657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7wl5n,Uid:82087f3f-607e-4f4d-bc27-7c7f2f861903,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d331757600b7f826435d075733ab421a22f4bc7685a7b4e5a2e4f91081c853e\"" Dec 13 01:36:57.685407 containerd[1584]: time="2024-12-13T01:36:57.685342664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fcp99,Uid:a1597ea5-ead4-4e83-8603-4a304f41b1f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0a18c9b2765052eaa5d6a37f00a52d9410ccc917af532c3c989adcfc67552a1\"" Dec 13 01:36:57.687041 kubelet[2757]: E1213 01:36:57.686974 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:57.687608 kubelet[2757]: E1213 01:36:57.687096 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:57.689654 containerd[1584]: time="2024-12-13T01:36:57.689614035Z" level=info msg="CreateContainer within sandbox \"4d331757600b7f826435d075733ab421a22f4bc7685a7b4e5a2e4f91081c853e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:36:57.690447 containerd[1584]: time="2024-12-13T01:36:57.690382969Z" level=info msg="CreateContainer within sandbox \"d0a18c9b2765052eaa5d6a37f00a52d9410ccc917af532c3c989adcfc67552a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:36:57.724157 containerd[1584]: time="2024-12-13T01:36:57.724081201Z" level=info msg="CreateContainer within sandbox \"d0a18c9b2765052eaa5d6a37f00a52d9410ccc917af532c3c989adcfc67552a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9009345a52e3aee6aab4b23ae0c1ccfdeb91f9f13d29a19e027b835c8f11bf93\"" Dec 13 01:36:57.724784 containerd[1584]: time="2024-12-13T01:36:57.724755576Z" level=info msg="StartContainer for \"9009345a52e3aee6aab4b23ae0c1ccfdeb91f9f13d29a19e027b835c8f11bf93\"" Dec 13 01:36:57.728678 containerd[1584]: time="2024-12-13T01:36:57.728382347Z" level=info msg="CreateContainer within sandbox \"4d331757600b7f826435d075733ab421a22f4bc7685a7b4e5a2e4f91081c853e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"de302b5c2f3ed35e9f64f341ddec258434cc5390d27e805650c1b9fcb054f3e1\"" Dec 13 01:36:57.729286 containerd[1584]: time="2024-12-13T01:36:57.729252622Z" level=info msg="StartContainer for \"de302b5c2f3ed35e9f64f341ddec258434cc5390d27e805650c1b9fcb054f3e1\"" Dec 13 01:36:57.798383 containerd[1584]: time="2024-12-13T01:36:57.798324610Z" level=info msg="StartContainer for \"de302b5c2f3ed35e9f64f341ddec258434cc5390d27e805650c1b9fcb054f3e1\" returns successfully" Dec 13 01:36:57.798535 containerd[1584]: time="2024-12-13T01:36:57.798334138Z" level=info msg="StartContainer for \"9009345a52e3aee6aab4b23ae0c1ccfdeb91f9f13d29a19e027b835c8f11bf93\" returns successfully" Dec 13 01:36:58.116388 kubelet[2757]: E1213 01:36:58.116328 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:58.117856 kubelet[2757]: E1213 01:36:58.117817 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:58.158547 kubelet[2757]: I1213 01:36:58.158495 2757 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7wl5n" podStartSLOduration=23.158447474 podStartE2EDuration="23.158447474s" podCreationTimestamp="2024-12-13 01:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:36:58.158107445 +0000 UTC m=+36.281352878" watchObservedRunningTime="2024-12-13 01:36:58.158447474 +0000 UTC m=+36.281692907" Dec 13 01:36:58.625560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount963573756.mount: Deactivated successfully. Dec 13 01:36:59.123246 kubelet[2757]: E1213 01:36:59.123210 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:59.123246 kubelet[2757]: E1213 01:36:59.123243 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:36:59.270290 systemd[1]: Started sshd@9-10.0.0.111:22-10.0.0.1:40438.service - OpenSSH per-connection server daemon (10.0.0.1:40438). Dec 13 01:36:59.304646 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 40438 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:36:59.306686 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:59.311256 systemd-logind[1558]: New session 10 of user core. Dec 13 01:36:59.317305 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:36:59.445609 sshd[4166]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:59.450810 systemd[1]: sshd@9-10.0.0.111:22-10.0.0.1:40438.service: Deactivated successfully. Dec 13 01:36:59.454655 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:36:59.454750 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:36:59.456156 systemd-logind[1558]: Removed session 10. 
Dec 13 01:37:00.124910 kubelet[2757]: E1213 01:37:00.124875 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:37:00.125414 kubelet[2757]: E1213 01:37:00.125041 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:37:01.659912 kubelet[2757]: I1213 01:37:01.659747 2757 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:37:01.660745 kubelet[2757]: E1213 01:37:01.660710 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:37:01.811861 kubelet[2757]: I1213 01:37:01.811793 2757 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fcp99" podStartSLOduration=26.811739062 podStartE2EDuration="26.811739062s" podCreationTimestamp="2024-12-13 01:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:36:58.562265872 +0000 UTC m=+36.685511315" watchObservedRunningTime="2024-12-13 01:37:01.811739062 +0000 UTC m=+39.934984495" Dec 13 01:37:02.129569 kubelet[2757]: E1213 01:37:02.129528 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:37:04.459309 systemd[1]: Started sshd@10-10.0.0.111:22-10.0.0.1:40454.service - OpenSSH per-connection server daemon (10.0.0.1:40454). Dec 13 01:37:04.490762 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 40454 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:37:04.492538 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:04.496579 systemd-logind[1558]: New session 11 of user core. Dec 13 01:37:04.507282 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:37:04.628896 sshd[4186]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:04.634613 systemd[1]: sshd@10-10.0.0.111:22-10.0.0.1:40454.service: Deactivated successfully. Dec 13 01:37:04.637702 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:37:04.637848 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:37:04.639569 systemd-logind[1558]: Removed session 11. Dec 13 01:37:09.644569 systemd[1]: Started sshd@11-10.0.0.111:22-10.0.0.1:36652.service - OpenSSH per-connection server daemon (10.0.0.1:36652). Dec 13 01:37:09.683917 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 36652 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0 Dec 13 01:37:09.686068 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:09.691175 systemd-logind[1558]: New session 12 of user core. Dec 13 01:37:09.704548 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:37:09.829076 sshd[4204]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:09.840368 systemd[1]: Started sshd@12-10.0.0.111:22-10.0.0.1:36664.service - OpenSSH per-connection server daemon (10.0.0.1:36664). 
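The pod_startup_latency_tracker lines are internally consistent and can be checked by hand: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), which is zero for the pods whose lines show the zero-valued pull timestamps. Worked through for cilium-2m4ml above:

    E2E  = 01:36:50.736948987 - 01:36:34           = 16.736948987 s
    pull = 01:36:44.397979871 - 01:36:35.273551907 =  9.124427964 s
    SLO  = 16.736948987 - 9.124427964              =  7.612521023 s

which matches podStartSLOduration=7.612521023s in the log; for coredns-76f75df574-fcp99, with no pull, SLO = E2E = 01:37:01.811739062 - 01:36:35 = 26.811739062 s, again as logged.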
Dec 13 01:37:09.840938 systemd[1]: sshd@11-10.0.0.111:22-10.0.0.1:36652.service: Deactivated successfully.
Dec 13 01:37:09.843163 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:37:09.844673 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:37:09.845781 systemd-logind[1558]: Removed session 12.
Dec 13 01:37:09.871297 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 36664 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:09.873324 sshd[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:09.877981 systemd-logind[1558]: New session 13 of user core.
Dec 13 01:37:09.887523 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:37:10.147567 sshd[4218]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:10.156486 systemd[1]: Started sshd@13-10.0.0.111:22-10.0.0.1:36678.service - OpenSSH per-connection server daemon (10.0.0.1:36678).
Dec 13 01:37:10.157293 systemd[1]: sshd@12-10.0.0.111:22-10.0.0.1:36664.service: Deactivated successfully.
Dec 13 01:37:10.163643 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:37:10.164825 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:37:10.166750 systemd-logind[1558]: Removed session 13.
Dec 13 01:37:10.196963 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 36678 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:10.199191 sshd[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:10.204527 systemd-logind[1558]: New session 14 of user core.
Dec 13 01:37:10.211551 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:37:10.356737 sshd[4232]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:10.362875 systemd[1]: sshd@13-10.0.0.111:22-10.0.0.1:36678.service: Deactivated successfully.
Dec 13 01:37:10.366433 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:37:10.367408 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:37:10.368714 systemd-logind[1558]: Removed session 14.
Dec 13 01:37:15.370560 systemd[1]: Started sshd@14-10.0.0.111:22-10.0.0.1:36680.service - OpenSSH per-connection server daemon (10.0.0.1:36680).
Dec 13 01:37:15.401719 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 36680 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:15.403764 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:15.409409 systemd-logind[1558]: New session 15 of user core.
Dec 13 01:37:15.419341 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:37:15.536884 sshd[4252]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:15.541674 systemd[1]: sshd@14-10.0.0.111:22-10.0.0.1:36680.service: Deactivated successfully.
Dec 13 01:37:15.544362 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:37:15.544464 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:37:15.545645 systemd-logind[1558]: Removed session 15.
Dec 13 01:37:20.552299 systemd[1]: Started sshd@15-10.0.0.111:22-10.0.0.1:42932.service - OpenSSH per-connection server daemon (10.0.0.1:42932).
Dec 13 01:37:20.584933 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 42932 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:20.586852 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:20.591226 systemd-logind[1558]: New session 16 of user core.
Dec 13 01:37:20.601264 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:37:20.716813 sshd[4268]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:20.722355 systemd[1]: sshd@15-10.0.0.111:22-10.0.0.1:42932.service: Deactivated successfully.
Dec 13 01:37:20.725729 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:37:20.726443 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:37:20.727448 systemd-logind[1558]: Removed session 16.
Dec 13 01:37:25.737269 systemd[1]: Started sshd@16-10.0.0.111:22-10.0.0.1:42948.service - OpenSSH per-connection server daemon (10.0.0.1:42948).
Dec 13 01:37:25.766912 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 42948 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:25.768727 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:25.773228 systemd-logind[1558]: New session 17 of user core.
Dec 13 01:37:25.784320 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:37:25.905489 sshd[4285]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:25.918383 systemd[1]: Started sshd@17-10.0.0.111:22-10.0.0.1:42962.service - OpenSSH per-connection server daemon (10.0.0.1:42962).
Dec 13 01:37:25.919232 systemd[1]: sshd@16-10.0.0.111:22-10.0.0.1:42948.service: Deactivated successfully.
Dec 13 01:37:25.922650 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:37:25.925563 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:37:25.926763 systemd-logind[1558]: Removed session 17.
Dec 13 01:37:25.949471 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 42962 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:25.951119 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:25.955632 systemd-logind[1558]: New session 18 of user core.
Dec 13 01:37:25.970407 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:37:26.674903 sshd[4297]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:26.687516 systemd[1]: Started sshd@18-10.0.0.111:22-10.0.0.1:46994.service - OpenSSH per-connection server daemon (10.0.0.1:46994).
Dec 13 01:37:26.688121 systemd[1]: sshd@17-10.0.0.111:22-10.0.0.1:42962.service: Deactivated successfully.
Dec 13 01:37:26.692271 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:37:26.692419 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:37:26.693842 systemd-logind[1558]: Removed session 18.
Dec 13 01:37:26.725980 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 46994 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:26.727816 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:26.732366 systemd-logind[1558]: New session 19 of user core.
Dec 13 01:37:26.742438 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:37:29.984597 kubelet[2757]: E1213 01:37:29.984561 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:30.914600 sshd[4311]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:30.925249 systemd[1]: Started sshd@19-10.0.0.111:22-10.0.0.1:47004.service - OpenSSH per-connection server daemon (10.0.0.1:47004).
Dec 13 01:37:30.925756 systemd[1]: sshd@18-10.0.0.111:22-10.0.0.1:46994.service: Deactivated successfully.
Dec 13 01:37:30.932490 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:37:30.933406 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:37:30.934627 systemd-logind[1558]: Removed session 19.
Dec 13 01:37:30.957071 sshd[4342]: Accepted publickey for core from 10.0.0.1 port 47004 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:30.959297 sshd[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:30.965342 systemd-logind[1558]: New session 20 of user core.
Dec 13 01:37:30.974456 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:37:32.327648 sshd[4342]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:32.335278 systemd[1]: Started sshd@20-10.0.0.111:22-10.0.0.1:47012.service - OpenSSH per-connection server daemon (10.0.0.1:47012).
Dec 13 01:37:32.335805 systemd[1]: sshd@19-10.0.0.111:22-10.0.0.1:47004.service: Deactivated successfully.
Dec 13 01:37:32.338863 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:37:32.340288 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:37:32.341616 systemd-logind[1558]: Removed session 20.
Dec 13 01:37:32.367392 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 47012 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:32.369392 sshd[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:32.374709 systemd-logind[1558]: New session 21 of user core.
Dec 13 01:37:32.383303 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:37:32.499023 sshd[4355]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:32.503764 systemd[1]: sshd@20-10.0.0.111:22-10.0.0.1:47012.service: Deactivated successfully.
Dec 13 01:37:32.506868 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:37:32.506878 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:37:32.508099 systemd-logind[1558]: Removed session 21.
Dec 13 01:37:33.985101 kubelet[2757]: E1213 01:37:33.985031 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:37.510239 systemd[1]: Started sshd@21-10.0.0.111:22-10.0.0.1:32800.service - OpenSSH per-connection server daemon (10.0.0.1:32800).
Dec 13 01:37:37.542736 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 32800 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:37.544579 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:37.549059 systemd-logind[1558]: New session 22 of user core.
Dec 13 01:37:37.560400 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:37:37.674444 sshd[4375]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:37.679346 systemd[1]: sshd@21-10.0.0.111:22-10.0.0.1:32800.service: Deactivated successfully.
Dec 13 01:37:37.682432 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:37:37.682445 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:37:37.683839 systemd-logind[1558]: Removed session 22.
Dec 13 01:37:42.688627 systemd[1]: Started sshd@22-10.0.0.111:22-10.0.0.1:32812.service - OpenSSH per-connection server daemon (10.0.0.1:32812).
Dec 13 01:37:42.722600 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 32812 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:42.724482 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:42.729296 systemd-logind[1558]: New session 23 of user core.
Dec 13 01:37:42.745413 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:37:42.879732 sshd[4393]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:42.884323 systemd[1]: sshd@22-10.0.0.111:22-10.0.0.1:32812.service: Deactivated successfully.
Dec 13 01:37:42.887601 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:37:42.888487 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:37:42.889628 systemd-logind[1558]: Removed session 23.
Dec 13 01:37:44.987030 kubelet[2757]: E1213 01:37:44.984980 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:47.894504 systemd[1]: Started sshd@23-10.0.0.111:22-10.0.0.1:53868.service - OpenSSH per-connection server daemon (10.0.0.1:53868).
Dec 13 01:37:47.929755 sshd[4408]: Accepted publickey for core from 10.0.0.1 port 53868 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:47.931706 sshd[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:47.937512 systemd-logind[1558]: New session 24 of user core.
Dec 13 01:37:47.947537 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:37:48.096437 sshd[4408]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:48.103632 systemd[1]: sshd@23-10.0.0.111:22-10.0.0.1:53868.service: Deactivated successfully.
Dec 13 01:37:48.108966 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:37:48.110217 systemd-logind[1558]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:37:48.112204 systemd-logind[1558]: Removed session 24.
Dec 13 01:37:53.119842 systemd[1]: Started sshd@24-10.0.0.111:22-10.0.0.1:53874.service - OpenSSH per-connection server daemon (10.0.0.1:53874).
Dec 13 01:37:53.178605 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 53874 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:53.181912 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:53.191117 systemd-logind[1558]: New session 25 of user core.
Dec 13 01:37:53.201893 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:37:53.459932 sshd[4424]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:53.468351 systemd[1]: sshd@24-10.0.0.111:22-10.0.0.1:53874.service: Deactivated successfully.
Dec 13 01:37:53.474241 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:37:53.475520 systemd-logind[1558]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:37:53.477668 systemd-logind[1558]: Removed session 25.
Dec 13 01:37:53.991890 kubelet[2757]: E1213 01:37:53.986625 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:56.990071 kubelet[2757]: E1213 01:37:56.988489 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:37:58.481544 systemd[1]: Started sshd@25-10.0.0.111:22-10.0.0.1:46546.service - OpenSSH per-connection server daemon (10.0.0.1:46546).
Dec 13 01:37:58.541038 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 46546 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:58.545884 sshd[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:58.557599 systemd-logind[1558]: New session 26 of user core.
Dec 13 01:37:58.566709 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:37:58.806065 sshd[4439]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:58.824239 systemd[1]: Started sshd@26-10.0.0.111:22-10.0.0.1:46548.service - OpenSSH per-connection server daemon (10.0.0.1:46548).
Dec 13 01:37:58.825103 systemd[1]: sshd@25-10.0.0.111:22-10.0.0.1:46546.service: Deactivated successfully.
Dec 13 01:37:58.837088 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:37:58.839633 systemd-logind[1558]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:37:58.843783 systemd-logind[1558]: Removed session 26.
Dec 13 01:37:58.880724 sshd[4452]: Accepted publickey for core from 10.0.0.1 port 46548 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:37:58.883836 sshd[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:58.907242 systemd-logind[1558]: New session 27 of user core.
Dec 13 01:37:58.922796 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 01:38:00.984106 kubelet[2757]: E1213 01:38:00.984041 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:01.138632 containerd[1584]: time="2024-12-13T01:38:01.137384064Z" level=info msg="StopContainer for \"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\" with timeout 30 (s)"
Dec 13 01:38:01.142054 containerd[1584]: time="2024-12-13T01:38:01.140042672Z" level=info msg="Stop container \"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\" with signal terminated"
Dec 13 01:38:01.190824 containerd[1584]: time="2024-12-13T01:38:01.190694395Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:38:01.201298 containerd[1584]: time="2024-12-13T01:38:01.199380686Z" level=info msg="StopContainer for \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\" with timeout 2 (s)"
Dec 13 01:38:01.201298 containerd[1584]: time="2024-12-13T01:38:01.200885788Z" level=info msg="Stop container \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\" with signal terminated"
Dec 13 01:38:01.219236 systemd-networkd[1250]: lxc_health: Link DOWN
Dec 13 01:38:01.219260 systemd-networkd[1250]: lxc_health: Lost carrier
Dec 13 01:38:01.220472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555-rootfs.mount: Deactivated successfully.
Dec 13 01:38:01.293558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90-rootfs.mount: Deactivated successfully.
Dec 13 01:38:01.408225 containerd[1584]: time="2024-12-13T01:38:01.407758075Z" level=info msg="shim disconnected" id=59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90 namespace=k8s.io
Dec 13 01:38:01.408225 containerd[1584]: time="2024-12-13T01:38:01.407839119Z" level=warning msg="cleaning up after shim disconnected" id=59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90 namespace=k8s.io
Dec 13 01:38:01.408225 containerd[1584]: time="2024-12-13T01:38:01.407851452Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:01.432041 containerd[1584]: time="2024-12-13T01:38:01.430084733Z" level=info msg="shim disconnected" id=d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555 namespace=k8s.io
Dec 13 01:38:01.432041 containerd[1584]: time="2024-12-13T01:38:01.430177458Z" level=warning msg="cleaning up after shim disconnected" id=d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555 namespace=k8s.io
Dec 13 01:38:01.432041 containerd[1584]: time="2024-12-13T01:38:01.430191315Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:01.494326 containerd[1584]: time="2024-12-13T01:38:01.494223845Z" level=info msg="StopContainer for \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\" returns successfully"
Dec 13 01:38:01.496617 containerd[1584]: time="2024-12-13T01:38:01.495679112Z" level=info msg="StopPodSandbox for \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\""
Dec 13 01:38:01.496617 containerd[1584]: time="2024-12-13T01:38:01.496028925Z" level=info msg="Container to stop \"c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:38:01.499726 containerd[1584]: time="2024-12-13T01:38:01.499195034Z" level=info msg="Container to stop \"a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:38:01.499726 containerd[1584]: time="2024-12-13T01:38:01.499246121Z" level=info msg="Container to stop \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:38:01.499726 containerd[1584]: time="2024-12-13T01:38:01.499266510Z" level=info msg="Container to stop \"ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:38:01.499726 containerd[1584]: time="2024-12-13T01:38:01.499286308Z" level=info msg="Container to stop \"d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:38:01.505702 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b-shm.mount: Deactivated successfully.
Dec 13 01:38:01.508997 containerd[1584]: time="2024-12-13T01:38:01.508939821Z" level=info msg="StopContainer for \"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\" returns successfully"
Dec 13 01:38:01.517744 containerd[1584]: time="2024-12-13T01:38:01.512307533Z" level=info msg="StopPodSandbox for \"a218f9b8351fdf7737882e368962bea042462ee4d1ed173f2c87572275369092\""
Dec 13 01:38:01.517744 containerd[1584]: time="2024-12-13T01:38:01.512381422Z" level=info msg="Container to stop \"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:38:01.524537 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a218f9b8351fdf7737882e368962bea042462ee4d1ed173f2c87572275369092-shm.mount: Deactivated successfully.
Dec 13 01:38:01.625774 containerd[1584]: time="2024-12-13T01:38:01.625666184Z" level=info msg="shim disconnected" id=014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b namespace=k8s.io
Dec 13 01:38:01.625774 containerd[1584]: time="2024-12-13T01:38:01.625766064Z" level=warning msg="cleaning up after shim disconnected" id=014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b namespace=k8s.io
Dec 13 01:38:01.625774 containerd[1584]: time="2024-12-13T01:38:01.625783887Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:01.631362 containerd[1584]: time="2024-12-13T01:38:01.626859175Z" level=info msg="shim disconnected" id=a218f9b8351fdf7737882e368962bea042462ee4d1ed173f2c87572275369092 namespace=k8s.io
Dec 13 01:38:01.631362 containerd[1584]: time="2024-12-13T01:38:01.626906725Z" level=warning msg="cleaning up after shim disconnected" id=a218f9b8351fdf7737882e368962bea042462ee4d1ed173f2c87572275369092 namespace=k8s.io
Dec 13 01:38:01.631362 containerd[1584]: time="2024-12-13T01:38:01.626920060Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:01.678553 containerd[1584]: time="2024-12-13T01:38:01.678254096Z" level=info msg="TearDown network for sandbox \"a218f9b8351fdf7737882e368962bea042462ee4d1ed173f2c87572275369092\" successfully"
Dec 13 01:38:01.678553 containerd[1584]: time="2024-12-13T01:38:01.678307117Z" level=info msg="StopPodSandbox for \"a218f9b8351fdf7737882e368962bea042462ee4d1ed173f2c87572275369092\" returns successfully"
Dec 13 01:38:01.684962 containerd[1584]: time="2024-12-13T01:38:01.684875123Z" level=info msg="TearDown network for sandbox \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\" successfully"
Dec 13 01:38:01.684962 containerd[1584]: time="2024-12-13T01:38:01.684929336Z" level=info msg="StopPodSandbox for \"014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b\" returns successfully"
Dec 13 01:38:01.780885 kubelet[2757]: I1213 01:38:01.780780 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-bpf-maps\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.780885 kubelet[2757]: I1213 01:38:01.780859 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cc18755d-eda9-4561-9d06-7e9d094f3933-hubble-tls\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.780885 kubelet[2757]: I1213 01:38:01.780893 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-cilium-cgroup\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.781290 kubelet[2757]: I1213 01:38:01.780935 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m5cv\" (UniqueName: \"kubernetes.io/projected/cc18755d-eda9-4561-9d06-7e9d094f3933-kube-api-access-6m5cv\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.781290 kubelet[2757]: I1213 01:38:01.780972 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-cilium-run\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.781290 kubelet[2757]: I1213 01:38:01.781019 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-lib-modules\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.781290 kubelet[2757]: I1213 01:38:01.780987 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:38:01.781290 kubelet[2757]: I1213 01:38:01.781053 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc18755d-eda9-4561-9d06-7e9d094f3933-cilium-config-path\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.781290 kubelet[2757]: I1213 01:38:01.781080 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-xtables-lock\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.781547 kubelet[2757]: I1213 01:38:01.781109 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-host-proc-sys-kernel\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.781547 kubelet[2757]: I1213 01:38:01.781136 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-cni-path\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.781547 kubelet[2757]: I1213 01:38:01.781172 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2txpc\" (UniqueName: \"kubernetes.io/projected/4d4f937f-2286-4aa7-8f97-000503f7ee73-kube-api-access-2txpc\") pod \"4d4f937f-2286-4aa7-8f97-000503f7ee73\" (UID: \"4d4f937f-2286-4aa7-8f97-000503f7ee73\") "
Dec 13 01:38:01.781547 kubelet[2757]: I1213 01:38:01.781235 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d4f937f-2286-4aa7-8f97-000503f7ee73-cilium-config-path\") pod \"4d4f937f-2286-4aa7-8f97-000503f7ee73\" (UID: \"4d4f937f-2286-4aa7-8f97-000503f7ee73\") "
Dec 13 01:38:01.781547 kubelet[2757]: I1213 01:38:01.781261 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-etc-cni-netd\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.781547 kubelet[2757]: I1213 01:38:01.781282 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-host-proc-sys-net\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.781776 kubelet[2757]: I1213 01:38:01.781309 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cc18755d-eda9-4561-9d06-7e9d094f3933-clustermesh-secrets\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.781776 kubelet[2757]: I1213 01:38:01.781331 2757 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-hostproc\") pod \"cc18755d-eda9-4561-9d06-7e9d094f3933\" (UID: \"cc18755d-eda9-4561-9d06-7e9d094f3933\") "
Dec 13 01:38:01.781776 kubelet[2757]: I1213 01:38:01.781378 2757 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.781776 kubelet[2757]: I1213 01:38:01.781419 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-hostproc" (OuterVolumeSpecName: "hostproc") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:38:01.781776 kubelet[2757]: I1213 01:38:01.781448 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:38:01.781776 kubelet[2757]: I1213 01:38:01.781471 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:38:01.782815 kubelet[2757]: I1213 01:38:01.782772 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:38:01.785647 kubelet[2757]: I1213 01:38:01.784931 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:38:01.785647 kubelet[2757]: I1213 01:38:01.785060 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:38:01.785647 kubelet[2757]: I1213 01:38:01.785109 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:38:01.787272 kubelet[2757]: I1213 01:38:01.787164 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:38:01.788689 kubelet[2757]: I1213 01:38:01.788268 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc18755d-eda9-4561-9d06-7e9d094f3933-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:38:01.788689 kubelet[2757]: I1213 01:38:01.788385 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-cni-path" (OuterVolumeSpecName: "cni-path") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:38:01.791092 kubelet[2757]: I1213 01:38:01.790944 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d4f937f-2286-4aa7-8f97-000503f7ee73-kube-api-access-2txpc" (OuterVolumeSpecName: "kube-api-access-2txpc") pod "4d4f937f-2286-4aa7-8f97-000503f7ee73" (UID: "4d4f937f-2286-4aa7-8f97-000503f7ee73"). InnerVolumeSpecName "kube-api-access-2txpc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:38:01.791092 kubelet[2757]: I1213 01:38:01.790949 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc18755d-eda9-4561-9d06-7e9d094f3933-kube-api-access-6m5cv" (OuterVolumeSpecName: "kube-api-access-6m5cv") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "kube-api-access-6m5cv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:38:01.794333 kubelet[2757]: I1213 01:38:01.794265 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4f937f-2286-4aa7-8f97-000503f7ee73-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4d4f937f-2286-4aa7-8f97-000503f7ee73" (UID: "4d4f937f-2286-4aa7-8f97-000503f7ee73"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:38:01.794820 kubelet[2757]: I1213 01:38:01.794705 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc18755d-eda9-4561-9d06-7e9d094f3933-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:38:01.798237 kubelet[2757]: I1213 01:38:01.798153 2757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc18755d-eda9-4561-9d06-7e9d094f3933-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cc18755d-eda9-4561-9d06-7e9d094f3933" (UID: "cc18755d-eda9-4561-9d06-7e9d094f3933"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:38:01.882616 kubelet[2757]: I1213 01:38:01.882405 2757 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.882616 kubelet[2757]: I1213 01:38:01.882481 2757 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.882616 kubelet[2757]: I1213 01:38:01.882513 2757 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cc18755d-eda9-4561-9d06-7e9d094f3933-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.882616 kubelet[2757]: I1213 01:38:01.882530 2757 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.882616 kubelet[2757]: I1213 01:38:01.882541 2757 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cc18755d-eda9-4561-9d06-7e9d094f3933-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.882616 kubelet[2757]: I1213 01:38:01.882555 2757 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.882616 kubelet[2757]: I1213 01:38:01.882569 2757 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6m5cv\" (UniqueName: \"kubernetes.io/projected/cc18755d-eda9-4561-9d06-7e9d094f3933-kube-api-access-6m5cv\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.882616 kubelet[2757]: I1213 01:38:01.882581 2757 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.883106 kubelet[2757]: I1213 01:38:01.882592 2757 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.883106 kubelet[2757]: I1213 01:38:01.882603 2757 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.883106 kubelet[2757]: I1213 01:38:01.882615 2757 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.883106 kubelet[2757]: I1213 01:38:01.882627 2757 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc18755d-eda9-4561-9d06-7e9d094f3933-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.883106 kubelet[2757]: I1213 01:38:01.882641 2757 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc18755d-eda9-4561-9d06-7e9d094f3933-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.883106 kubelet[2757]: I1213 01:38:01.882655 2757 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2txpc\" (UniqueName: \"kubernetes.io/projected/4d4f937f-2286-4aa7-8f97-000503f7ee73-kube-api-access-2txpc\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:01.883106 kubelet[2757]: I1213 01:38:01.882667 2757 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d4f937f-2286-4aa7-8f97-000503f7ee73-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 01:38:02.076443 kubelet[2757]: E1213 01:38:02.076273 2757 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:38:02.151829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a218f9b8351fdf7737882e368962bea042462ee4d1ed173f2c87572275369092-rootfs.mount: Deactivated successfully.
Dec 13 01:38:02.152147 systemd[1]: var-lib-kubelet-pods-4d4f937f\x2d2286\x2d4aa7\x2d8f97\x2d000503f7ee73-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2txpc.mount: Deactivated successfully.
Dec 13 01:38:02.152375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-014c3c93ff051ac93f09280ed4e882a025361b7c6cab162c6bb364ce182fdd6b-rootfs.mount: Deactivated successfully.
Dec 13 01:38:02.152587 systemd[1]: var-lib-kubelet-pods-cc18755d\x2deda9\x2d4561\x2d9d06\x2d7e9d094f3933-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6m5cv.mount: Deactivated successfully.
Dec 13 01:38:02.152789 systemd[1]: var-lib-kubelet-pods-cc18755d\x2deda9\x2d4561\x2d9d06\x2d7e9d094f3933-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:38:02.152991 systemd[1]: var-lib-kubelet-pods-cc18755d\x2deda9\x2d4561\x2d9d06\x2d7e9d094f3933-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:38:02.344571 kubelet[2757]: I1213 01:38:02.344517 2757 scope.go:117] "RemoveContainer" containerID="d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555"
Dec 13 01:38:02.348611 containerd[1584]: time="2024-12-13T01:38:02.348549933Z" level=info msg="RemoveContainer for \"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\""
Dec 13 01:38:02.450066 containerd[1584]: time="2024-12-13T01:38:02.449694120Z" level=info msg="RemoveContainer for \"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\" returns successfully"
Dec 13 01:38:02.450940 kubelet[2757]: I1213 01:38:02.450311 2757 scope.go:117] "RemoveContainer" containerID="d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555"
Dec 13 01:38:02.451042 containerd[1584]: time="2024-12-13T01:38:02.450763435Z" level=error msg="ContainerStatus for \"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\": not found"
Dec 13 01:38:02.454097 kubelet[2757]: E1213 01:38:02.451222 2757 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\": not found" containerID="d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555"
Dec 13 01:38:02.454097 kubelet[2757]: I1213 01:38:02.451375 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555"} err="failed to get container status \"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5c36d02036bc7449276eb9eeede5e19e2f5b4599250e71120d0845c7fae2555\": not found"
Dec 13 01:38:02.454097 kubelet[2757]: I1213 01:38:02.451396 2757 scope.go:117] "RemoveContainer" containerID="59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90"
Dec 13 01:38:02.456250 containerd[1584]: time="2024-12-13T01:38:02.456085298Z" level=info msg="RemoveContainer for \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\""
Dec 13 01:38:02.553741 containerd[1584]: time="2024-12-13T01:38:02.551247541Z" level=info msg="RemoveContainer for \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\" returns successfully"
Dec 13 01:38:02.556559 kubelet[2757]: I1213 01:38:02.553410 2757 scope.go:117] "RemoveContainer" containerID="a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba"
Dec 13 01:38:02.560673 containerd[1584]: time="2024-12-13T01:38:02.560637191Z" level=info msg="RemoveContainer for \"a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba\""
Dec 13 01:38:02.710762 containerd[1584]: time="2024-12-13T01:38:02.710450685Z" level=info msg="RemoveContainer for \"a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba\" returns successfully"
Dec 13 01:38:02.713041 kubelet[2757]: I1213 01:38:02.711194 2757 scope.go:117] "RemoveContainer" containerID="c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc"
Dec 13 01:38:02.720203 containerd[1584]: time="2024-12-13T01:38:02.719568089Z" level=info msg="RemoveContainer for \"c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc\""
Dec 13 01:38:02.848901 containerd[1584]: time="2024-12-13T01:38:02.848802936Z" level=info msg="RemoveContainer for \"c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc\" returns successfully"
Dec 13 01:38:02.849485 kubelet[2757]: I1213 01:38:02.849281 2757 scope.go:117] "RemoveContainer" containerID="ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933"
Dec 13 01:38:02.851332 containerd[1584]: time="2024-12-13T01:38:02.851261293Z" level=info msg="RemoveContainer for \"ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933\""
Dec 13 01:38:02.924885 containerd[1584]: time="2024-12-13T01:38:02.924689199Z" level=info msg="RemoveContainer for \"ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933\" returns successfully"
Dec 13 01:38:02.930974 kubelet[2757]: I1213 01:38:02.930812 2757 scope.go:117] "RemoveContainer" containerID="d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da"
Dec 13 01:38:02.932485 containerd[1584]: time="2024-12-13T01:38:02.932419274Z" level=info msg="RemoveContainer for \"d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da\""
Dec 13 01:38:02.946811 sshd[4452]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:02.957949 containerd[1584]: time="2024-12-13T01:38:02.952557588Z" level=info msg="RemoveContainer for \"d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da\" returns successfully"
Dec 13 01:38:02.957949 containerd[1584]: time="2024-12-13T01:38:02.953456882Z" level=error msg="ContainerStatus for \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\": not found"
Dec 13 01:38:02.957949 containerd[1584]: time="2024-12-13T01:38:02.954062650Z" level=error msg="ContainerStatus for \"a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba\": not found"
Dec 13 01:38:02.957949 containerd[1584]: time="2024-12-13T01:38:02.956182224Z" level=error msg="ContainerStatus for \"c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc\": not found"
Dec 13 01:38:02.957949 containerd[1584]: time="2024-12-13T01:38:02.956839410Z" level=error msg="ContainerStatus for \"ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933\": not found"
Dec 13 01:38:02.957949 containerd[1584]: time="2024-12-13T01:38:02.957428506Z" level=error msg="ContainerStatus for \"d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da\": not found"
Dec 13 01:38:02.958362 kubelet[2757]: I1213 01:38:02.952943 2757 scope.go:117] "RemoveContainer" containerID="59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90"
Dec 13 01:38:02.958362 kubelet[2757]: E1213 01:38:02.953688 2757 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\": not found" containerID="59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90"
Dec 13 01:38:02.958362 kubelet[2757]: I1213 01:38:02.953742 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90"} err="failed to get container status \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\": rpc error: code = NotFound desc = an error occurred when try to find container \"59dd7af1d409e16ef984f25fdf276a8b64044a628d68420b45478a6de3b42c90\": not found"
Dec 13 01:38:02.958362 kubelet[2757]: I1213 01:38:02.953762 2757 scope.go:117] "RemoveContainer" containerID="a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba"
Dec 13 01:38:02.958362 kubelet[2757]: E1213 01:38:02.954277 2757 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba\": not found" containerID="a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba"
Dec 13 01:38:02.958362 kubelet[2757]: I1213 01:38:02.954309 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba"} err="failed to get container status \"a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"a465f9525424144f08b6f0d039f8ab82da88fbfe2680e760f8240b942eeab7ba\": not found"
Dec 13 01:38:02.958362 kubelet[2757]: I1213 01:38:02.954322 2757 scope.go:117] "RemoveContainer" containerID="c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc"
Dec 13 01:38:02.958666 kubelet[2757]: E1213 01:38:02.956465 2757 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc\": not found" containerID="c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc"
Dec 13 01:38:02.958666 kubelet[2757]: I1213 01:38:02.956518 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc"} err="failed to get container status \"c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"c081c1ec80a7204fd5f8d8aa6b5f92441d8c4d81dfcda4e780ea358670ea11dc\": not found"
Dec 13 01:38:02.958666 kubelet[2757]: I1213 01:38:02.956533 2757 scope.go:117] "RemoveContainer" containerID="ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933"
Dec 13 01:38:02.958666 kubelet[2757]: E1213 01:38:02.957109 2757 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933\": not found" containerID="ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933"
Dec 13 01:38:02.958666 kubelet[2757]: I1213 01:38:02.957149 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933"} err="failed to get container status \"ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea7a875dbbfa6c7cae8390bb8f866961c08775eacc5e321c5bfe9767c1ff6933\": not found"
Dec 13 01:38:02.958666 kubelet[2757]: I1213 01:38:02.957165 2757 scope.go:117] "RemoveContainer" containerID="d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da"
Dec 13 01:38:02.958906 kubelet[2757]: E1213 01:38:02.957620 2757 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da\": not found" containerID="d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da"
Dec 13 01:38:02.958906 kubelet[2757]: I1213 01:38:02.957705 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da"} err="failed to get container status \"d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7d69a365b61a93fa721ae92c99fda7077dde2e7d07528c2cd87b235750c08da\": not found"
Dec 13 01:38:02.975550 systemd[1]: Started sshd@27-10.0.0.111:22-10.0.0.1:46560.service - OpenSSH per-connection server daemon (10.0.0.1:46560).
Dec 13 01:38:02.976312 systemd[1]: sshd@26-10.0.0.111:22-10.0.0.1:46548.service: Deactivated successfully.
Dec 13 01:38:02.983981 systemd-logind[1558]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:38:02.985898 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:38:02.987617 systemd-logind[1558]: Removed session 27.
Dec 13 01:38:03.067247 sshd[4619]: Accepted publickey for core from 10.0.0.1 port 46560 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:38:03.068679 sshd[4619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:03.082182 systemd-logind[1558]: New session 28 of user core.
Dec 13 01:38:03.089653 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 01:38:03.984223 kubelet[2757]: E1213 01:38:03.984117 2757 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fcp99" podUID="a1597ea5-ead4-4e83-8603-4a304f41b1f0"
Dec 13 01:38:03.987772 kubelet[2757]: I1213 01:38:03.987659 2757 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4d4f937f-2286-4aa7-8f97-000503f7ee73" path="/var/lib/kubelet/pods/4d4f937f-2286-4aa7-8f97-000503f7ee73/volumes"
Dec 13 01:38:03.988634 kubelet[2757]: I1213 01:38:03.988599 2757 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cc18755d-eda9-4561-9d06-7e9d094f3933" path="/var/lib/kubelet/pods/cc18755d-eda9-4561-9d06-7e9d094f3933/volumes"
Dec 13 01:38:04.013371 kubelet[2757]: I1213 01:38:04.013325 2757 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:38:04Z","lastTransitionTime":"2024-12-13T01:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:38:04.065691 sshd[4619]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:04.074494 systemd[1]: Started sshd@28-10.0.0.111:22-10.0.0.1:46562.service - OpenSSH per-connection server daemon (10.0.0.1:46562).
Dec 13 01:38:04.075247 systemd[1]: sshd@27-10.0.0.111:22-10.0.0.1:46560.service: Deactivated successfully.
Dec 13 01:38:04.079899 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:38:04.084326 systemd-logind[1558]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:38:04.085982 systemd-logind[1558]: Removed session 28.
Dec 13 01:38:04.118831 sshd[4634]: Accepted publickey for core from 10.0.0.1 port 46562 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:38:04.122605 sshd[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:04.133939 kubelet[2757]: I1213 01:38:04.133879 2757 topology_manager.go:215] "Topology Admit Handler" podUID="ca25430b-84fc-4117-a9ce-7949a4d0938c" podNamespace="kube-system" podName="cilium-8jjdh"
Dec 13 01:38:04.133915 systemd-logind[1558]: New session 29 of user core.
Dec 13 01:38:04.134235 kubelet[2757]: E1213 01:38:04.134060 2757 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc18755d-eda9-4561-9d06-7e9d094f3933" containerName="mount-bpf-fs"
Dec 13 01:38:04.134235 kubelet[2757]: E1213 01:38:04.134084 2757 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc18755d-eda9-4561-9d06-7e9d094f3933" containerName="cilium-agent"
Dec 13 01:38:04.134235 kubelet[2757]: E1213 01:38:04.134096 2757 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc18755d-eda9-4561-9d06-7e9d094f3933" containerName="mount-cgroup"
Dec 13 01:38:04.134235 kubelet[2757]: E1213 01:38:04.134106 2757 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc18755d-eda9-4561-9d06-7e9d094f3933" containerName="apply-sysctl-overwrites"
Dec 13 01:38:04.134235 kubelet[2757]: E1213 01:38:04.134116 2757 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc18755d-eda9-4561-9d06-7e9d094f3933" containerName="clean-cilium-state"
Dec 13 01:38:04.134235 kubelet[2757]: E1213 01:38:04.134126 2757 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4d4f937f-2286-4aa7-8f97-000503f7ee73" containerName="cilium-operator"
Dec 13 01:38:04.134235 kubelet[2757]: I1213 01:38:04.134186 2757 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc18755d-eda9-4561-9d06-7e9d094f3933" containerName="cilium-agent"
Dec 13 01:38:04.134485 kubelet[2757]: I1213 01:38:04.134198 2757 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d4f937f-2286-4aa7-8f97-000503f7ee73" containerName="cilium-operator"
Dec 13 01:38:04.143109 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 01:38:04.204866 kubelet[2757]: I1213 01:38:04.204774 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca25430b-84fc-4117-a9ce-7949a4d0938c-cilium-run\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.204866 kubelet[2757]: I1213 01:38:04.204867 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca25430b-84fc-4117-a9ce-7949a4d0938c-etc-cni-netd\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.205142 kubelet[2757]: I1213 01:38:04.204903 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ca25430b-84fc-4117-a9ce-7949a4d0938c-hostproc\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.205142 kubelet[2757]: I1213 01:38:04.204933 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ca25430b-84fc-4117-a9ce-7949a4d0938c-hubble-tls\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.205142 kubelet[2757]: I1213 01:38:04.204961 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ca25430b-84fc-4117-a9ce-7949a4d0938c-cilium-cgroup\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.205142 kubelet[2757]: I1213 01:38:04.204981 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca25430b-84fc-4117-a9ce-7949a4d0938c-xtables-lock\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.205142 kubelet[2757]: I1213 01:38:04.205024 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca25430b-84fc-4117-a9ce-7949a4d0938c-bpf-maps\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.207259 kubelet[2757]: I1213 01:38:04.206308 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca25430b-84fc-4117-a9ce-7949a4d0938c-cni-path\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.207259 kubelet[2757]: I1213 01:38:04.206418 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca25430b-84fc-4117-a9ce-7949a4d0938c-host-proc-sys-kernel\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.207259 kubelet[2757]: I1213 01:38:04.206597 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca25430b-84fc-4117-a9ce-7949a4d0938c-cilium-config-path\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.207259 kubelet[2757]: I1213 01:38:04.206659 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ca25430b-84fc-4117-a9ce-7949a4d0938c-cilium-ipsec-secrets\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.207259 kubelet[2757]: I1213 01:38:04.206761 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca25430b-84fc-4117-a9ce-7949a4d0938c-lib-modules\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.207444 kubelet[2757]: I1213 01:38:04.206908 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca25430b-84fc-4117-a9ce-7949a4d0938c-clustermesh-secrets\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.207444 kubelet[2757]: I1213 01:38:04.206992 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rktg\" (UniqueName: \"kubernetes.io/projected/ca25430b-84fc-4117-a9ce-7949a4d0938c-kube-api-access-9rktg\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.207444 kubelet[2757]: I1213 01:38:04.207144 2757 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca25430b-84fc-4117-a9ce-7949a4d0938c-host-proc-sys-net\") pod \"cilium-8jjdh\" (UID: \"ca25430b-84fc-4117-a9ce-7949a4d0938c\") " pod="kube-system/cilium-8jjdh"
Dec 13 01:38:04.221217 sshd[4634]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:04.232561 systemd[1]: Started sshd@29-10.0.0.111:22-10.0.0.1:46570.service - OpenSSH per-connection server daemon (10.0.0.1:46570).
Dec 13 01:38:04.233390 systemd[1]: sshd@28-10.0.0.111:22-10.0.0.1:46562.service: Deactivated successfully.
Dec 13 01:38:04.243456 systemd-logind[1558]: Session 29 logged out. Waiting for processes to exit.
Dec 13 01:38:04.243632 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 01:38:04.247068 systemd-logind[1558]: Removed session 29.
Dec 13 01:38:04.271306 sshd[4645]: Accepted publickey for core from 10.0.0.1 port 46570 ssh2: RSA SHA256:x0r+OYSWSaRwllGtX4o4H8bWGnkqZzK3ZUwKdtfgOO0
Dec 13 01:38:04.274126 sshd[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:38:04.281487 systemd-logind[1558]: New session 30 of user core.
Dec 13 01:38:04.289678 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 13 01:38:04.458530 kubelet[2757]: E1213 01:38:04.458451 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:04.460606 containerd[1584]: time="2024-12-13T01:38:04.459852579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jjdh,Uid:ca25430b-84fc-4117-a9ce-7949a4d0938c,Namespace:kube-system,Attempt:0,}"
Dec 13 01:38:04.509194 containerd[1584]: time="2024-12-13T01:38:04.505635755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:38:04.509194 containerd[1584]: time="2024-12-13T01:38:04.506454156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:38:04.509194 containerd[1584]: time="2024-12-13T01:38:04.506485785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:38:04.509194 containerd[1584]: time="2024-12-13T01:38:04.506612425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:38:04.592984 containerd[1584]: time="2024-12-13T01:38:04.592762461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jjdh,Uid:ca25430b-84fc-4117-a9ce-7949a4d0938c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8477dfcadbc48d373e77e5314a1f738dd8df8be527d1287e60657c8153c339f8\""
Dec 13 01:38:04.594629 kubelet[2757]: E1213 01:38:04.594532 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:04.597952 containerd[1584]: time="2024-12-13T01:38:04.597907275Z" level=info msg="CreateContainer within sandbox \"8477dfcadbc48d373e77e5314a1f738dd8df8be527d1287e60657c8153c339f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:38:04.630642 containerd[1584]: time="2024-12-13T01:38:04.630391404Z" level=info msg="CreateContainer within sandbox \"8477dfcadbc48d373e77e5314a1f738dd8df8be527d1287e60657c8153c339f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"abbc273d76d32d40319e1534acb8d32c977e4cd7518cf3537fd4eda42d6b6b46\""
Dec 13 01:38:04.632103 containerd[1584]: time="2024-12-13T01:38:04.632061056Z" level=info msg="StartContainer for \"abbc273d76d32d40319e1534acb8d32c977e4cd7518cf3537fd4eda42d6b6b46\""
Dec 13 01:38:04.725599 containerd[1584]: time="2024-12-13T01:38:04.725436736Z" level=info msg="StartContainer for \"abbc273d76d32d40319e1534acb8d32c977e4cd7518cf3537fd4eda42d6b6b46\" returns successfully"
Dec 13 01:38:04.790166 containerd[1584]: time="2024-12-13T01:38:04.789911302Z" level=info msg="shim disconnected" id=abbc273d76d32d40319e1534acb8d32c977e4cd7518cf3537fd4eda42d6b6b46 namespace=k8s.io
Dec 13 01:38:04.790166 containerd[1584]: time="2024-12-13T01:38:04.789994970Z" level=warning msg="cleaning up after shim disconnected" id=abbc273d76d32d40319e1534acb8d32c977e4cd7518cf3537fd4eda42d6b6b46 namespace=k8s.io
Dec 13 01:38:04.790166 containerd[1584]: time="2024-12-13T01:38:04.790049594Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:05.363520 kubelet[2757]: E1213 01:38:05.363471 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:05.366740 containerd[1584]: time="2024-12-13T01:38:05.365552443Z" level=info msg="CreateContainer within sandbox \"8477dfcadbc48d373e77e5314a1f738dd8df8be527d1287e60657c8153c339f8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:38:05.383096 containerd[1584]: time="2024-12-13T01:38:05.383048121Z" level=info msg="CreateContainer within sandbox \"8477dfcadbc48d373e77e5314a1f738dd8df8be527d1287e60657c8153c339f8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"18d2805fe89144e955c3bb7ddc4252a7e457c8d0d4000dd09975e903eee006df\""
Dec 13 01:38:05.383768 containerd[1584]: time="2024-12-13T01:38:05.383719884Z" level=info msg="StartContainer for \"18d2805fe89144e955c3bb7ddc4252a7e457c8d0d4000dd09975e903eee006df\""
Dec 13 01:38:05.448972 containerd[1584]: time="2024-12-13T01:38:05.448907016Z" level=info msg="StartContainer for \"18d2805fe89144e955c3bb7ddc4252a7e457c8d0d4000dd09975e903eee006df\" returns successfully"
Dec 13 01:38:05.483167 containerd[1584]: time="2024-12-13T01:38:05.482961508Z" level=info msg="shim disconnected" id=18d2805fe89144e955c3bb7ddc4252a7e457c8d0d4000dd09975e903eee006df namespace=k8s.io
Dec 13 01:38:05.483167 containerd[1584]: time="2024-12-13T01:38:05.483161086Z" level=warning msg="cleaning up after shim disconnected" id=18d2805fe89144e955c3bb7ddc4252a7e457c8d0d4000dd09975e903eee006df namespace=k8s.io
Dec 13 01:38:05.483167 containerd[1584]: time="2024-12-13T01:38:05.483178429Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:05.983829 kubelet[2757]: E1213 01:38:05.983750 2757 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fcp99" podUID="a1597ea5-ead4-4e83-8603-4a304f41b1f0"
Dec 13 01:38:06.316210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18d2805fe89144e955c3bb7ddc4252a7e457c8d0d4000dd09975e903eee006df-rootfs.mount: Deactivated successfully.
Dec 13 01:38:06.367308 kubelet[2757]: E1213 01:38:06.367268 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:06.372910 containerd[1584]: time="2024-12-13T01:38:06.372743937Z" level=info msg="CreateContainer within sandbox \"8477dfcadbc48d373e77e5314a1f738dd8df8be527d1287e60657c8153c339f8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:38:06.395608 containerd[1584]: time="2024-12-13T01:38:06.395545033Z" level=info msg="CreateContainer within sandbox \"8477dfcadbc48d373e77e5314a1f738dd8df8be527d1287e60657c8153c339f8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a584b7d922ec04d594e7f99c28833d8dae9bc4aaf3ba88d36c55621bf31e5f1c\""
Dec 13 01:38:06.396414 containerd[1584]: time="2024-12-13T01:38:06.396351109Z" level=info msg="StartContainer for \"a584b7d922ec04d594e7f99c28833d8dae9bc4aaf3ba88d36c55621bf31e5f1c\""
Dec 13 01:38:06.466527 containerd[1584]: time="2024-12-13T01:38:06.466478732Z" level=info msg="StartContainer for \"a584b7d922ec04d594e7f99c28833d8dae9bc4aaf3ba88d36c55621bf31e5f1c\" returns successfully"
Dec 13 01:38:06.511142 containerd[1584]: time="2024-12-13T01:38:06.511046726Z" level=info msg="shim disconnected" id=a584b7d922ec04d594e7f99c28833d8dae9bc4aaf3ba88d36c55621bf31e5f1c namespace=k8s.io
Dec 13 01:38:06.511142 containerd[1584]: time="2024-12-13T01:38:06.511124944Z" level=warning msg="cleaning up after shim disconnected" id=a584b7d922ec04d594e7f99c28833d8dae9bc4aaf3ba88d36c55621bf31e5f1c namespace=k8s.io
Dec 13 01:38:06.511142 containerd[1584]: time="2024-12-13T01:38:06.511136938Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:07.077602 kubelet[2757]: E1213 01:38:07.077549 2757 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:38:07.316224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a584b7d922ec04d594e7f99c28833d8dae9bc4aaf3ba88d36c55621bf31e5f1c-rootfs.mount: Deactivated successfully.
Dec 13 01:38:07.370702 kubelet[2757]: E1213 01:38:07.370572 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:07.373700 containerd[1584]: time="2024-12-13T01:38:07.373659100Z" level=info msg="CreateContainer within sandbox \"8477dfcadbc48d373e77e5314a1f738dd8df8be527d1287e60657c8153c339f8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:38:07.398876 containerd[1584]: time="2024-12-13T01:38:07.398827059Z" level=info msg="CreateContainer within sandbox \"8477dfcadbc48d373e77e5314a1f738dd8df8be527d1287e60657c8153c339f8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e999781999af051d89c03d43116557ef9e06414f4900fe358653848ae1047198\""
Dec 13 01:38:07.399425 containerd[1584]: time="2024-12-13T01:38:07.399388101Z" level=info msg="StartContainer for \"e999781999af051d89c03d43116557ef9e06414f4900fe358653848ae1047198\""
Dec 13 01:38:07.456642 containerd[1584]: time="2024-12-13T01:38:07.456516175Z" level=info msg="StartContainer for \"e999781999af051d89c03d43116557ef9e06414f4900fe358653848ae1047198\" returns successfully"
Dec 13 01:38:07.483902 containerd[1584]: time="2024-12-13T01:38:07.483819195Z" level=info msg="shim disconnected" id=e999781999af051d89c03d43116557ef9e06414f4900fe358653848ae1047198 namespace=k8s.io
Dec 13 01:38:07.483902 containerd[1584]: time="2024-12-13T01:38:07.483876163Z" level=warning msg="cleaning up after shim disconnected" id=e999781999af051d89c03d43116557ef9e06414f4900fe358653848ae1047198 namespace=k8s.io
Dec 13 01:38:07.483902 containerd[1584]: time="2024-12-13T01:38:07.483885811Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:07.984151 kubelet[2757]: E1213 01:38:07.984054 2757 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fcp99" podUID="a1597ea5-ead4-4e83-8603-4a304f41b1f0"
Dec 13 01:38:08.316055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e999781999af051d89c03d43116557ef9e06414f4900fe358653848ae1047198-rootfs.mount: Deactivated successfully.
Dec 13 01:38:08.376235 kubelet[2757]: E1213 01:38:08.376197 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:08.379630 containerd[1584]: time="2024-12-13T01:38:08.379579270Z" level=info msg="CreateContainer within sandbox \"8477dfcadbc48d373e77e5314a1f738dd8df8be527d1287e60657c8153c339f8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:38:08.413989 containerd[1584]: time="2024-12-13T01:38:08.413923468Z" level=info msg="CreateContainer within sandbox \"8477dfcadbc48d373e77e5314a1f738dd8df8be527d1287e60657c8153c339f8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"924c1628cf361e9bf033e952f3f258fac13044e48d11c9231b05e8eab0e7d56a\""
Dec 13 01:38:08.414762 containerd[1584]: time="2024-12-13T01:38:08.414699837Z" level=info msg="StartContainer for \"924c1628cf361e9bf033e952f3f258fac13044e48d11c9231b05e8eab0e7d56a\""
Dec 13 01:38:08.535567 containerd[1584]: time="2024-12-13T01:38:08.535497028Z" level=info msg="StartContainer for \"924c1628cf361e9bf033e952f3f258fac13044e48d11c9231b05e8eab0e7d56a\" returns successfully"
Dec 13 01:38:08.981060 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:38:09.381897 kubelet[2757]: E1213 01:38:09.381844 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:09.983677 kubelet[2757]: E1213 01:38:09.983618 2757 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fcp99" podUID="a1597ea5-ead4-4e83-8603-4a304f41b1f0"
Dec 13 01:38:10.460677 kubelet[2757]: E1213 01:38:10.460635 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:10.824472 systemd[1]: run-containerd-runc-k8s.io-924c1628cf361e9bf033e952f3f258fac13044e48d11c9231b05e8eab0e7d56a-runc.XySBaX.mount: Deactivated successfully.
Dec 13 01:38:11.984590 kubelet[2757]: E1213 01:38:11.984523 2757 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-fcp99" podUID="a1597ea5-ead4-4e83-8603-4a304f41b1f0"
Dec 13 01:38:12.470764 kubelet[2757]: E1213 01:38:12.470257 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:12.480157 systemd-networkd[1250]: lxc_health: Link UP
Dec 13 01:38:12.489975 systemd-networkd[1250]: lxc_health: Gained carrier
Dec 13 01:38:12.492569 kubelet[2757]: I1213 01:38:12.490854 2757 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8jjdh" podStartSLOduration=8.490800618 podStartE2EDuration="8.490800618s" podCreationTimestamp="2024-12-13 01:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:38:09.475414931 +0000 UTC m=+107.598660374" watchObservedRunningTime="2024-12-13 01:38:12.490800618 +0000 UTC m=+110.614046061"
Dec 13 01:38:12.979713 systemd[1]: run-containerd-runc-k8s.io-924c1628cf361e9bf033e952f3f258fac13044e48d11c9231b05e8eab0e7d56a-runc.OwYMYh.mount: Deactivated successfully.
Dec 13 01:38:13.390777 kubelet[2757]: E1213 01:38:13.390740 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:13.984405 kubelet[2757]: E1213 01:38:13.984345 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:38:14.163329 systemd-networkd[1250]: lxc_health: Gained IPv6LL
Dec 13 01:38:17.238989 systemd[1]: run-containerd-runc-k8s.io-924c1628cf361e9bf033e952f3f258fac13044e48d11c9231b05e8eab0e7d56a-runc.nNLc7u.mount: Deactivated successfully.
Dec 13 01:38:19.413106 sshd[4645]: pam_unix(sshd:session): session closed for user core
Dec 13 01:38:19.417951 systemd[1]: sshd@29-10.0.0.111:22-10.0.0.1:46570.service: Deactivated successfully.
Dec 13 01:38:19.421359 systemd-logind[1558]: Session 30 logged out. Waiting for processes to exit.
Dec 13 01:38:19.421391 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 01:38:19.422352 systemd-logind[1558]: Removed session 30.