Jul 11 00:14:35.002324 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 22:46:23 -00 2025
Jul 11 00:14:35.002362 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:14:35.002379 kernel: BIOS-provided physical RAM map:
Jul 11 00:14:35.002388 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 11 00:14:35.002397 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 11 00:14:35.002406 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 11 00:14:35.002417 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 11 00:14:35.002427 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 11 00:14:35.002436 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 11 00:14:35.002445 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 11 00:14:35.002458 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jul 11 00:14:35.002468 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jul 11 00:14:35.002481 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jul 11 00:14:35.002491 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jul 11 00:14:35.002508 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 11 00:14:35.002520 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 11 00:14:35.002535 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 11 00:14:35.002545 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 11 00:14:35.002555 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 11 00:14:35.002565 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 11 00:14:35.002575 kernel: NX (Execute Disable) protection: active
Jul 11 00:14:35.002585 kernel: APIC: Static calls initialized
Jul 11 00:14:35.002595 kernel: efi: EFI v2.7 by EDK II
Jul 11 00:14:35.002605 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jul 11 00:14:35.002615 kernel: SMBIOS 2.8 present.
Jul 11 00:14:35.002625 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jul 11 00:14:35.002635 kernel: Hypervisor detected: KVM
Jul 11 00:14:35.002649 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 11 00:14:35.002659 kernel: kvm-clock: using sched offset of 5001596667 cycles
Jul 11 00:14:35.002669 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 11 00:14:35.002726 kernel: tsc: Detected 2794.746 MHz processor
Jul 11 00:14:35.002738 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 11 00:14:35.002757 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 11 00:14:35.002768 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jul 11 00:14:35.002778 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 11 00:14:35.002789 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 11 00:14:35.002805 kernel: Using GB pages for direct mapping
Jul 11 00:14:35.002816 kernel: Secure boot disabled
Jul 11 00:14:35.002826 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:14:35.002837 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 11 00:14:35.002853 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 11 00:14:35.002864 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:14:35.002897 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:14:35.002914 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 11 00:14:35.002925 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:14:35.002939 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:14:35.002950 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:14:35.002961 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:14:35.002973 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 11 00:14:35.002984 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 11 00:14:35.002998 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 11 00:14:35.003009 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 11 00:14:35.003020 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 11 00:14:35.003031 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 11 00:14:35.003042 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 11 00:14:35.003052 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 11 00:14:35.003063 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 11 00:14:35.003074 kernel: No NUMA configuration found
Jul 11 00:14:35.003089 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jul 11 00:14:35.003104 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jul 11 00:14:35.003115 kernel: Zone ranges:
Jul 11 00:14:35.003126 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 11 00:14:35.003137 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jul 11 00:14:35.003148 kernel: Normal empty
Jul 11 00:14:35.003158 kernel: Movable zone start for each node
Jul 11 00:14:35.003169 kernel: Early memory node ranges
Jul 11 00:14:35.003180 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 11 00:14:35.003191 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 11 00:14:35.003201 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 11 00:14:35.003216 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jul 11 00:14:35.003227 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jul 11 00:14:35.003238 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jul 11 00:14:35.003252 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jul 11 00:14:35.003263 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 00:14:35.003274 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 11 00:14:35.003285 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 11 00:14:35.003295 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 00:14:35.003306 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jul 11 00:14:35.003321 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 11 00:14:35.003333 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jul 11 00:14:35.003343 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 11 00:14:35.003354 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 11 00:14:35.003365 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 11 00:14:35.003376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 11 00:14:35.003387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 11 00:14:35.003398 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 11 00:14:35.003409 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 11 00:14:35.003423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 11 00:14:35.003434 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 11 00:14:35.003444 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 11 00:14:35.003455 kernel: TSC deadline timer available
Jul 11 00:14:35.003466 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 11 00:14:35.003477 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 11 00:14:35.003488 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 11 00:14:35.003498 kernel: kvm-guest: setup PV sched yield
Jul 11 00:14:35.003509 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jul 11 00:14:35.003524 kernel: Booting paravirtualized kernel on KVM
Jul 11 00:14:35.003535 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 11 00:14:35.003546 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 11 00:14:35.003557 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 11 00:14:35.003568 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 11 00:14:35.003578 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 11 00:14:35.003589 kernel: kvm-guest: PV spinlocks enabled
Jul 11 00:14:35.003599 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 11 00:14:35.003612 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:14:35.003631 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:14:35.003642 kernel: random: crng init done
Jul 11 00:14:35.003653 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:14:35.003664 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:14:35.003693 kernel: Fallback order for Node 0: 0
Jul 11 00:14:35.003704 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jul 11 00:14:35.003715 kernel: Policy zone: DMA32
Jul 11 00:14:35.003726 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:14:35.003737 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22744K rodata, 42872K init, 2320K bss, 166140K reserved, 0K cma-reserved)
Jul 11 00:14:35.003763 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:14:35.003774 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 11 00:14:35.003785 kernel: ftrace: allocated 149 pages with 4 groups
Jul 11 00:14:35.003796 kernel: Dynamic Preempt: voluntary
Jul 11 00:14:35.003819 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:14:35.003836 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:14:35.003847 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:14:35.003859 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:14:35.003870 kernel: Rude variant of Tasks RCU enabled.
Jul 11 00:14:35.003881 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:14:35.003893 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:14:35.003908 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:14:35.003919 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 11 00:14:35.003935 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:14:35.003946 kernel: Console: colour dummy device 80x25
Jul 11 00:14:35.003958 kernel: printk: console [ttyS0] enabled
Jul 11 00:14:35.003973 kernel: ACPI: Core revision 20230628
Jul 11 00:14:35.003985 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 11 00:14:35.003996 kernel: APIC: Switch to symmetric I/O mode setup
Jul 11 00:14:35.004007 kernel: x2apic enabled
Jul 11 00:14:35.004018 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 11 00:14:35.004030 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 11 00:14:35.004041 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 11 00:14:35.004052 kernel: kvm-guest: setup PV IPIs
Jul 11 00:14:35.004064 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 11 00:14:35.004079 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 11 00:14:35.004090 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 11 00:14:35.004101 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 11 00:14:35.004113 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 11 00:14:35.004124 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 11 00:14:35.004135 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 11 00:14:35.004147 kernel: Spectre V2 : Mitigation: Retpolines
Jul 11 00:14:35.004158 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 11 00:14:35.004169 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 11 00:14:35.004184 kernel: RETBleed: Mitigation: untrained return thunk
Jul 11 00:14:35.004196 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 11 00:14:35.004207 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 11 00:14:35.004218 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 11 00:14:35.004235 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 11 00:14:35.004246 kernel: x86/bugs: return thunk changed
Jul 11 00:14:35.004257 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 11 00:14:35.004269 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 11 00:14:35.004284 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 11 00:14:35.004295 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 11 00:14:35.004306 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 11 00:14:35.004318 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 11 00:14:35.004329 kernel: Freeing SMP alternatives memory: 32K
Jul 11 00:14:35.004341 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:14:35.004352 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 11 00:14:35.004363 kernel: landlock: Up and running.
Jul 11 00:14:35.004374 kernel: SELinux: Initializing.
Jul 11 00:14:35.004389 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:14:35.004401 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:14:35.004412 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 11 00:14:35.004424 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:14:35.004435 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:14:35.004447 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:14:35.004458 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 11 00:14:35.004469 kernel: ... version: 0
Jul 11 00:14:35.004481 kernel: ... bit width: 48
Jul 11 00:14:35.004496 kernel: ... generic registers: 6
Jul 11 00:14:35.004510 kernel: ... value mask: 0000ffffffffffff
Jul 11 00:14:35.004522 kernel: ... max period: 00007fffffffffff
Jul 11 00:14:35.004535 kernel: ... fixed-purpose events: 0
Jul 11 00:14:35.004546 kernel: ... event mask: 000000000000003f
Jul 11 00:14:35.004557 kernel: signal: max sigframe size: 1776
Jul 11 00:14:35.004568 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:14:35.004580 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:14:35.004591 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:14:35.004605 kernel: smpboot: x86: Booting SMP configuration:
Jul 11 00:14:35.004617 kernel: .... node #0, CPUs: #1 #2 #3
Jul 11 00:14:35.004628 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:14:35.004639 kernel: smpboot: Max logical packages: 1
Jul 11 00:14:35.004651 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 11 00:14:35.004662 kernel: devtmpfs: initialized
Jul 11 00:14:35.004673 kernel: x86/mm: Memory block size: 128MB
Jul 11 00:14:35.004721 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 11 00:14:35.004732 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 11 00:14:35.004744 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jul 11 00:14:35.004771 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 11 00:14:35.004782 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 11 00:14:35.004794 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:14:35.004805 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:14:35.004817 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:14:35.004828 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:14:35.004839 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:14:35.004851 kernel: audit: type=2000 audit(1752192873.682:1): state=initialized audit_enabled=0 res=1
Jul 11 00:14:35.004866 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:14:35.004877 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 11 00:14:35.004888 kernel: cpuidle: using governor menu
Jul 11 00:14:35.004899 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:14:35.004910 kernel: dca service started, version 1.12.1
Jul 11 00:14:35.004921 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 11 00:14:35.004932 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 11 00:14:35.004943 kernel: PCI: Using configuration type 1 for base access
Jul 11 00:14:35.004955 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 11 00:14:35.004970 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:14:35.004982 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:14:35.004993 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:14:35.005004 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:14:35.005015 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:14:35.005027 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:14:35.005038 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:14:35.005049 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:14:35.005060 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 11 00:14:35.005076 kernel: ACPI: Interpreter enabled
Jul 11 00:14:35.005087 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 11 00:14:35.005099 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 11 00:14:35.005110 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 11 00:14:35.005122 kernel: PCI: Using E820 reservations for host bridge windows
Jul 11 00:14:35.005133 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 11 00:14:35.005144 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:14:35.005468 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:14:35.005662 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 11 00:14:35.005874 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 11 00:14:35.005890 kernel: PCI host bridge to bus 0000:00
Jul 11 00:14:35.006080 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 11 00:14:35.006241 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 11 00:14:35.006400 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 11 00:14:35.006558 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 11 00:14:35.006775 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 11 00:14:35.006940 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jul 11 00:14:35.007103 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:14:35.007318 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 11 00:14:35.007525 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 11 00:14:35.007728 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 11 00:14:35.007930 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jul 11 00:14:35.008105 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 11 00:14:35.008346 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jul 11 00:14:35.008524 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 11 00:14:35.008757 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 11 00:14:35.008941 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jul 11 00:14:35.009118 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jul 11 00:14:35.009300 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jul 11 00:14:35.009510 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 11 00:14:35.009731 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jul 11 00:14:35.009928 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 11 00:14:35.010104 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jul 11 00:14:35.010307 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 11 00:14:35.010485 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jul 11 00:14:35.010798 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 11 00:14:35.010983 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jul 11 00:14:35.011158 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 11 00:14:35.011357 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 11 00:14:35.011533 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 11 00:14:35.011776 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 11 00:14:35.011955 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jul 11 00:14:35.012136 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jul 11 00:14:35.012334 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 11 00:14:35.012591 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jul 11 00:14:35.012610 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 11 00:14:35.012622 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 11 00:14:35.012633 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 11 00:14:35.012645 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 11 00:14:35.012663 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 11 00:14:35.012777 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 11 00:14:35.012791 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 11 00:14:35.012803 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 11 00:14:35.012814 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 11 00:14:35.012825 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 11 00:14:35.012837 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 11 00:14:35.012849 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 11 00:14:35.012860 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 11 00:14:35.012878 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 11 00:14:35.012889 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 11 00:14:35.012901 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 11 00:14:35.012912 kernel: iommu: Default domain type: Translated
Jul 11 00:14:35.012924 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 11 00:14:35.012936 kernel: efivars: Registered efivars operations
Jul 11 00:14:35.012947 kernel: PCI: Using ACPI for IRQ routing
Jul 11 00:14:35.012959 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 11 00:14:35.012970 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 11 00:14:35.012981 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jul 11 00:14:35.012997 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jul 11 00:14:35.013008 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jul 11 00:14:35.013185 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 11 00:14:35.013357 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 11 00:14:35.013529 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 11 00:14:35.013545 kernel: vgaarb: loaded
Jul 11 00:14:35.013557 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 11 00:14:35.013568 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 11 00:14:35.013586 kernel: clocksource: Switched to clocksource kvm-clock
Jul 11 00:14:35.013598 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:14:35.013610 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:14:35.013621 kernel: pnp: PnP ACPI init
Jul 11 00:14:35.013854 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 11 00:14:35.013873 kernel: pnp: PnP ACPI: found 6 devices
Jul 11 00:14:35.013885 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 11 00:14:35.013896 kernel: NET: Registered PF_INET protocol family
Jul 11 00:14:35.013914 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:14:35.013925 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:14:35.013937 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:14:35.013949 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:14:35.013961 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:14:35.013973 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:14:35.013984 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:14:35.013996 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:14:35.014008 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:14:35.014024 kernel: NET: Registered PF_XDP protocol family
Jul 11 00:14:35.014197 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 11 00:14:35.014370 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 11 00:14:35.014532 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 11 00:14:35.014709 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 11 00:14:35.014878 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 11 00:14:35.015036 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 11 00:14:35.015193 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 11 00:14:35.015357 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jul 11 00:14:35.015373 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:14:35.015384 kernel: Initialise system trusted keyrings
Jul 11 00:14:35.015396 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:14:35.015408 kernel: Key type asymmetric registered
Jul 11 00:14:35.015419 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:14:35.015431 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 11 00:14:35.015442 kernel: io scheduler mq-deadline registered
Jul 11 00:14:35.015454 kernel: io scheduler kyber registered
Jul 11 00:14:35.015471 kernel: io scheduler bfq registered
Jul 11 00:14:35.015483 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 11 00:14:35.015496 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 11 00:14:35.015508 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 11 00:14:35.015519 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 11 00:14:35.015531 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:14:35.015543 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 11 00:14:35.015555 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 11 00:14:35.015566 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 11 00:14:35.015581 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 11 00:14:35.015857 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 11 00:14:35.015878 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 11 00:14:35.016147 kernel: rtc_cmos 00:04: registered as rtc0
Jul 11 00:14:35.016314 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T00:14:34 UTC (1752192874)
Jul 11 00:14:35.016478 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 11 00:14:35.016495 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 11 00:14:35.016507 kernel: efifb: probing for efifb
Jul 11 00:14:35.016525 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jul 11 00:14:35.016537 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jul 11 00:14:35.016548 kernel: efifb: scrolling: redraw
Jul 11 00:14:35.016560 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jul 11 00:14:35.016572 kernel: Console: switching to colour frame buffer device 100x37
Jul 11 00:14:35.016586 kernel: fb0: EFI VGA frame buffer device
Jul 11 00:14:35.016623 kernel: pstore: Using crash dump compression: deflate
Jul 11 00:14:35.016638 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 11 00:14:35.016650 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:14:35.016666 kernel: Segment Routing with IPv6
Jul 11 00:14:35.016697 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:14:35.016709 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:14:35.016721 kernel: Key type dns_resolver registered
Jul 11 00:14:35.016733 kernel: IPI shorthand broadcast: enabled
Jul 11 00:14:35.016745 kernel: sched_clock: Marking stable (1117002949, 115999399)->(1254019440, -21017092)
Jul 11 00:14:35.016769 kernel: registered taskstats version 1
Jul 11 00:14:35.016782 kernel: Loading compiled-in X.509 certificates
Jul 11 00:14:35.016794 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 5956f0842928c96096c398e9db55919cd236a39f'
Jul 11 00:14:35.016810 kernel: Key type .fscrypt registered
Jul 11 00:14:35.016822 kernel: Key type fscrypt-provisioning registered
Jul 11 00:14:35.016835 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:14:35.016847 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:14:35.016859 kernel: ima: No architecture policies found
Jul 11 00:14:35.016871 kernel: clk: Disabling unused clocks
Jul 11 00:14:35.016883 kernel: Freeing unused kernel image (initmem) memory: 42872K
Jul 11 00:14:35.016895 kernel: Write protecting the kernel read-only data: 36864k
Jul 11 00:14:35.016911 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Jul 11 00:14:35.016923 kernel: Run /init as init process
Jul 11 00:14:35.016935 kernel: with arguments:
Jul 11 00:14:35.016946 kernel: /init
Jul 11 00:14:35.016959 kernel: with environment:
Jul 11 00:14:35.016970 kernel: HOME=/
Jul 11 00:14:35.016982 kernel: TERM=linux
Jul 11 00:14:35.016994 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:14:35.017014 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:14:35.017039 systemd[1]: Detected virtualization kvm.
Jul 11 00:14:35.017070 systemd[1]: Detected architecture x86-64.
Jul 11 00:14:35.017100 systemd[1]: Running in initrd.
Jul 11 00:14:35.017135 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:14:35.017165 systemd[1]: Hostname set to <localhost>.
Jul 11 00:14:35.017208 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:14:35.017238 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:14:35.017272 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:14:35.017305 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:14:35.017341 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:14:35.017371 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:14:35.017401 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:14:35.017437 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:14:35.017457 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:14:35.017491 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:14:35.017522 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:14:35.017552 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:14:35.017586 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:14:35.017617 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:14:35.017660 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:14:35.017773 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:14:35.017804 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:14:35.017835 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:14:35.017869 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:14:35.017899 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:14:35.017934 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:14:35.017965 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:14:35.018001 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:14:35.018045 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:14:35.018075 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 00:14:35.018093 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:14:35.018106 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 00:14:35.018119 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:14:35.018132 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:14:35.018145 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:14:35.018158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:14:35.018174 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 00:14:35.018188 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:14:35.018200 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:14:35.018214 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:14:35.018263 systemd-journald[192]: Collecting audit messages is disabled.
Jul 11 00:14:35.018299 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:14:35.018313 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:14:35.018326 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:14:35.018343 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:14:35.018357 systemd-journald[192]: Journal started
Jul 11 00:14:35.018384 systemd-journald[192]: Runtime Journal (/run/log/journal/38d3b60b59544f1482d97b44fc43a937) is 6.0M, max 48.3M, 42.2M free.
Jul 11 00:14:35.018450 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:14:35.020878 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:14:34.992501 systemd-modules-load[193]: Inserted module 'overlay'
Jul 11 00:14:35.025580 kernel: Bridge firewalling registered
Jul 11 00:14:35.024427 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jul 11 00:14:35.026629 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:14:35.027834 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:14:35.032115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:14:35.050983 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 00:14:35.054266 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:14:35.055823 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:14:35.068559 dracut-cmdline[220]: dracut-dracut-053
Jul 11 00:14:35.071567 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:14:35.072223 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:14:35.081375 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:14:35.089912 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:14:35.130364 systemd-resolved[251]: Positive Trust Anchors:
Jul 11 00:14:35.130393 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:14:35.130437 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:14:35.133986 systemd-resolved[251]: Defaulting to hostname 'linux'.
Jul 11 00:14:35.135659 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:14:35.141667 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:14:35.171708 kernel: SCSI subsystem initialized
Jul 11 00:14:35.181701 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:14:35.192715 kernel: iscsi: registered transport (tcp)
Jul 11 00:14:35.218128 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:14:35.218165 kernel: QLogic iSCSI HBA Driver
Jul 11 00:14:35.283443 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:14:35.299933 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 00:14:35.366017 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:14:35.366116 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:14:35.367201 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 11 00:14:35.415720 kernel: raid6: avx2x4 gen() 28476 MB/s
Jul 11 00:14:35.432723 kernel: raid6: avx2x2 gen() 30946 MB/s
Jul 11 00:14:35.449767 kernel: raid6: avx2x1 gen() 25636 MB/s
Jul 11 00:14:35.449834 kernel: raid6: using algorithm avx2x2 gen() 30946 MB/s
Jul 11 00:14:35.467801 kernel: raid6: .... xor() 19850 MB/s, rmw enabled
Jul 11 00:14:35.467892 kernel: raid6: using avx2x2 recovery algorithm
Jul 11 00:14:35.488744 kernel: xor: automatically using best checksumming function avx
Jul 11 00:14:35.649739 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 00:14:35.664432 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:14:35.676849 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:14:35.689803 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jul 11 00:14:35.694726 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:14:35.737807 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 00:14:35.751795 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Jul 11 00:14:35.787961 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:14:35.800912 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:14:35.870654 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:14:35.879884 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 00:14:35.897337 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:14:35.900483 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:14:35.902125 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:14:35.905781 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:14:35.913220 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 00:14:35.919715 kernel: cryptd: max_cpu_qlen set to 1000
Jul 11 00:14:35.926739 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 11 00:14:35.927142 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 00:14:35.935296 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 00:14:35.935367 kernel: GPT:9289727 != 19775487
Jul 11 00:14:35.935381 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 00:14:35.935394 kernel: GPT:9289727 != 19775487
Jul 11 00:14:35.936217 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 00:14:35.936249 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:14:35.937698 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 11 00:14:35.939153 kernel: AES CTR mode by8 optimization enabled
Jul 11 00:14:35.941862 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:14:35.957708 kernel: libata version 3.00 loaded.
Jul 11 00:14:35.960014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:14:35.960225 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:14:35.965983 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:14:35.971535 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475)
Jul 11 00:14:35.971560 kernel: BTRFS: device fsid 54fb9359-b495-4b0c-b313-b0e2955e4a38 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (476)
Jul 11 00:14:35.966245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:14:35.966425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:14:35.972972 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:14:35.978764 kernel: ahci 0000:00:1f.2: version 3.0
Jul 11 00:14:35.978964 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 11 00:14:35.978977 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 11 00:14:35.980708 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 11 00:14:35.983826 kernel: scsi host0: ahci
Jul 11 00:14:35.984026 kernel: scsi host1: ahci
Jul 11 00:14:35.984184 kernel: scsi host2: ahci
Jul 11 00:14:35.984900 kernel: scsi host3: ahci
Jul 11 00:14:35.985792 kernel: scsi host4: ahci
Jul 11 00:14:35.986109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:14:35.993466 kernel: scsi host5: ahci
Jul 11 00:14:35.993729 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jul 11 00:14:35.993751 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jul 11 00:14:35.993762 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jul 11 00:14:35.993772 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jul 11 00:14:35.993783 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jul 11 00:14:35.993793 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jul 11 00:14:36.008766 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 00:14:36.010377 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:14:36.019159 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 00:14:36.027537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:14:36.090890 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 00:14:36.094059 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 00:14:36.120897 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 00:14:36.122338 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:14:36.149798 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:14:36.302984 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 11 00:14:36.303038 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 11 00:14:36.303987 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 11 00:14:36.304087 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 11 00:14:36.305702 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 11 00:14:36.305744 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 11 00:14:36.306722 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 11 00:14:36.307705 kernel: ata3.00: applying bridge limits
Jul 11 00:14:36.307734 kernel: ata3.00: configured for UDMA/100
Jul 11 00:14:36.308690 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 11 00:14:36.358786 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 11 00:14:36.359017 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 11 00:14:36.370812 disk-uuid[559]: Primary Header is updated.
Jul 11 00:14:36.370812 disk-uuid[559]: Secondary Entries is updated.
Jul 11 00:14:36.370812 disk-uuid[559]: Secondary Header is updated.
Jul 11 00:14:36.421700 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 11 00:14:36.452700 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:14:36.519702 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:14:37.528739 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:14:37.529239 disk-uuid[580]: The operation has completed successfully.
Jul 11 00:14:37.560204 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 00:14:37.560342 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 00:14:37.582933 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 00:14:37.589444 sh[595]: Success
Jul 11 00:14:37.604713 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 11 00:14:37.643814 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 00:14:37.659507 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 00:14:37.663025 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 00:14:37.676816 kernel: BTRFS info (device dm-0): first mount of filesystem 54fb9359-b495-4b0c-b313-b0e2955e4a38
Jul 11 00:14:37.677060 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:14:37.677074 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 11 00:14:37.678765 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 11 00:14:37.678786 kernel: BTRFS info (device dm-0): using free space tree
Jul 11 00:14:37.685931 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 00:14:37.688656 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 00:14:37.700144 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 00:14:37.703530 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 00:14:37.714787 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:14:37.714831 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:14:37.714848 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:14:37.717710 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:14:37.730631 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 11 00:14:37.732029 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:14:37.853211 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:14:37.866908 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:14:37.896492 systemd-networkd[773]: lo: Link UP
Jul 11 00:14:37.896507 systemd-networkd[773]: lo: Gained carrier
Jul 11 00:14:37.898616 systemd-networkd[773]: Enumeration completed
Jul 11 00:14:37.899280 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:14:37.899729 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:14:37.899734 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:14:37.910263 systemd[1]: Reached target network.target - Network.
Jul 11 00:14:37.911212 systemd-networkd[773]: eth0: Link UP
Jul 11 00:14:37.911216 systemd-networkd[773]: eth0: Gained carrier
Jul 11 00:14:37.911233 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:14:37.933751 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:14:37.999889 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 00:14:38.013848 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 00:14:38.086895 ignition[778]: Ignition 2.19.0
Jul 11 00:14:38.086918 ignition[778]: Stage: fetch-offline
Jul 11 00:14:38.087024 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:14:38.087062 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:14:38.087251 ignition[778]: parsed url from cmdline: ""
Jul 11 00:14:38.087255 ignition[778]: no config URL provided
Jul 11 00:14:38.087261 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 00:14:38.087272 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Jul 11 00:14:38.087303 ignition[778]: op(1): [started] loading QEMU firmware config module
Jul 11 00:14:38.087309 ignition[778]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 00:14:38.099743 ignition[778]: op(1): [finished] loading QEMU firmware config module
Jul 11 00:14:38.143209 ignition[778]: parsing config with SHA512: eefb54e4918b4ddabde604ffec84fd87b0145eae29efc820ecf55d417a82cf1907dff7a1bc58ea1b3b36ad845a87b05e401f8cdbcb17eb7b4b1808a67900cfe5
Jul 11 00:14:38.150562 unknown[778]: fetched base config from "system"
Jul 11 00:14:38.150580 unknown[778]: fetched user config from "qemu"
Jul 11 00:14:38.153217 ignition[778]: fetch-offline: fetch-offline passed
Jul 11 00:14:38.153359 ignition[778]: Ignition finished successfully
Jul 11 00:14:38.158701 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:14:38.159511 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 00:14:38.171128 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 00:14:38.191644 ignition[787]: Ignition 2.19.0
Jul 11 00:14:38.191657 ignition[787]: Stage: kargs
Jul 11 00:14:38.191910 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:14:38.191934 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:14:38.197604 ignition[787]: kargs: kargs passed
Jul 11 00:14:38.199150 ignition[787]: Ignition finished successfully
Jul 11 00:14:38.204299 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 00:14:38.215860 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 00:14:38.236505 ignition[795]: Ignition 2.19.0
Jul 11 00:14:38.236526 ignition[795]: Stage: disks
Jul 11 00:14:38.236837 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:14:38.236872 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:14:38.242009 ignition[795]: disks: disks passed
Jul 11 00:14:38.242731 ignition[795]: Ignition finished successfully
Jul 11 00:14:38.246816 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 00:14:38.249269 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 00:14:38.249909 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:14:38.250400 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:14:38.255182 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:14:38.255637 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:14:38.269028 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 00:14:38.322174 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 11 00:14:38.672406 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 00:14:38.679959 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 00:14:38.837766 kernel: EXT4-fs (vda9): mounted filesystem 66ba5133-8c5a-461b-b2c1-a823c72af79b r/w with ordered data mode. Quota mode: none.
Jul 11 00:14:38.838831 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 00:14:38.841124 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:14:38.857801 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:14:38.860599 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 00:14:38.862863 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 00:14:38.862915 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 00:14:38.862946 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:14:38.871713 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 00:14:38.876305 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Jul 11 00:14:38.876327 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:14:38.876339 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:14:38.876351 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:14:38.878142 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 00:14:38.882706 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:14:38.884032 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:14:38.928664 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 00:14:38.935483 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jul 11 00:14:38.940191 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 00:14:38.946009 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 00:14:39.037506 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 00:14:39.050851 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 00:14:39.054857 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 00:14:39.057146 systemd-networkd[773]: eth0: Gained IPv6LL
Jul 11 00:14:39.059467 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 00:14:39.060799 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:14:39.087586 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 00:14:39.091153 ignition[927]: INFO : Ignition 2.19.0 Jul 11 00:14:39.091153 ignition[927]: INFO : Stage: mount Jul 11 00:14:39.093192 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:14:39.093192 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:14:39.093192 ignition[927]: INFO : mount: mount passed Jul 11 00:14:39.093192 ignition[927]: INFO : Ignition finished successfully Jul 11 00:14:39.094910 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 11 00:14:39.106783 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 11 00:14:39.855007 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 00:14:39.863712 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941) Jul 11 00:14:39.863751 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:14:39.863768 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:14:39.865364 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:14:39.868697 kernel: BTRFS info (device vda6): auto enabling async discard Jul 11 00:14:39.871115 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 11 00:14:39.899923 ignition[958]: INFO : Ignition 2.19.0 Jul 11 00:14:39.899923 ignition[958]: INFO : Stage: files Jul 11 00:14:39.901883 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:14:39.901883 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:14:39.901883 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jul 11 00:14:39.905531 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 00:14:39.905531 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 00:14:39.908415 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 00:14:39.908415 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 00:14:39.908415 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 00:14:39.908415 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 11 00:14:39.908415 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 11 00:14:39.906526 unknown[958]: wrote ssh authorized keys file for user: core Jul 11 00:14:39.917507 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 11 00:14:39.917507 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 11 00:14:39.950946 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 11 00:14:40.285845 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 11 00:14:40.285845 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 11 00:14:40.290040 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 11 00:14:40.854951 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jul 11 00:14:41.205964 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 11 00:14:41.205964 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jul 11 00:14:41.209980 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 00:14:41.209980 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:14:41.209980 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:14:41.209980 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:14:41.209980 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:14:41.209980 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:14:41.209980 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:14:41.209980 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:14:41.209980 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:14:41.209980 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 11 00:14:41.209980 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 11 00:14:41.209980 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 11 00:14:41.209980 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 11 00:14:41.671512 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jul 11 00:14:44.575134 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 11 00:14:44.575134 ignition[958]: INFO : files: op(d): [started] processing unit "containerd.service" Jul 11 00:14:44.579018 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 11 00:14:44.579018 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 11 00:14:44.579018 ignition[958]: INFO 
: files: op(d): [finished] processing unit "containerd.service" Jul 11 00:14:44.579018 ignition[958]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jul 11 00:14:44.579018 ignition[958]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:14:44.579018 ignition[958]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:14:44.579018 ignition[958]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jul 11 00:14:44.579018 ignition[958]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jul 11 00:14:44.579018 ignition[958]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:14:44.579018 ignition[958]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:14:44.579018 ignition[958]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jul 11 00:14:44.579018 ignition[958]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:14:44.608115 ignition[958]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:14:44.616422 ignition[958]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:14:44.618174 ignition[958]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:14:44.618174 ignition[958]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jul 11 00:14:44.620950 ignition[958]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 00:14:44.622376 ignition[958]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:14:44.624126 ignition[958]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:14:44.625811 ignition[958]: INFO : files: files passed Jul 11 00:14:44.625811 ignition[958]: INFO : Ignition finished successfully Jul 11 00:14:44.628352 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 11 00:14:44.640941 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 11 00:14:44.642310 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 11 00:14:44.646646 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 00:14:44.646826 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
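The files stage above (ensureUsers for "core", its ssh keys, and the createFiles writes) maps onto the passwd and storage sections of the Ignition config. A minimal sketch with placeholder values; the key material and the empty data URL are illustrative only:

  {
    "ignition": { "version": "3.4.0" },
    "passwd": {
      "users": [
        { "name": "core",
          "sshAuthorizedKeys": [ "ssh-ed25519 AAAA... placeholder-key" ] }
      ]
    },
    "storage": {
      "files": [
        { "path": "/etc/flatcar-cgroupv1",
          "mode": 420,
          "contents": { "source": "data:," } }
      ]
    }
  }

mode is decimal (420 = 0644), and contents.source is an RFC 2397 data URL, here empty.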
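op(d)/op(e) above write a systemd drop-in rather than replacing the shipped containerd.service. A sketch of the mechanism; the actual payload of 10-use-cgroupfs.conf comes from the Ignition config, so the contents below are hypothetical:

  # /etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf
  # hypothetical contents, to illustrate the drop-in mechanism only
  [Service]
  ExecStart=
  ExecStart=/usr/bin/containerd --config /usr/share/containerd/config-cgroupfs.toml

The empty ExecStart= clears the inherited command list before the override is appended, which is the standard drop-in idiom for swapping a daemon's command line.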
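op(13) through op(15) then apply systemd presets: "removing enablement symlink(s)" is the disable half of preset handling. A sketch of the equivalent preset file and manual commands (the preset file path is an assumption; the commands are stock systemd):

  # e.g. /etc/systemd/system-preset/20-ignition.preset (path assumed)
  enable prepare-helm.service
  disable coreos-metadata.service

  # equivalent by hand:
  systemctl preset prepare-helm.service coreos-metadata.service
  # "disable" removes the symlinks under /etc/systemd/system/*.wants/,
  # which is exactly the "removing enablement symlink(s)" step logged above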
Jul 11 00:14:44.653240 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Jul 11 00:14:44.656194 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:14:44.656194 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:14:44.659711 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:14:44.662792 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:14:44.663275 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 11 00:14:44.673840 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 11 00:14:44.701155 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 00:14:44.701292 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 11 00:14:44.703598 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 11 00:14:44.704080 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 11 00:14:44.704427 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 11 00:14:44.705487 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 11 00:14:44.728943 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:14:44.746973 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 11 00:14:44.757860 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:14:44.760148 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:14:44.762400 systemd[1]: Stopped target timers.target - Timer Units. Jul 11 00:14:44.764384 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 11 00:14:44.765402 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:14:44.767924 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 11 00:14:44.769929 systemd[1]: Stopped target basic.target - Basic System. Jul 11 00:14:44.771891 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 11 00:14:44.774059 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:14:44.776282 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 11 00:14:44.778482 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 11 00:14:44.780488 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:14:44.782889 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 11 00:14:44.784898 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 11 00:14:44.786882 systemd[1]: Stopped target swap.target - Swaps. Jul 11 00:14:44.788451 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 11 00:14:44.789450 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:14:44.791698 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:14:44.793950 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:14:44.796397 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 11 00:14:44.797432 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:14:44.800097 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:14:44.801130 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 11 00:14:44.803437 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:14:44.804558 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:14:44.806988 systemd[1]: Stopped target paths.target - Path Units. Jul 11 00:14:44.808753 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:14:44.812739 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:14:44.815536 systemd[1]: Stopped target slices.target - Slice Units. Jul 11 00:14:44.817606 systemd[1]: Stopped target sockets.target - Socket Units. Jul 11 00:14:44.819496 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:14:44.820468 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:14:44.822524 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:14:44.823498 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:14:44.825586 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:14:44.826769 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:14:44.829259 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:14:44.830240 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 11 00:14:44.847057 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 11 00:14:44.849061 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:14:44.850135 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:14:44.853588 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 11 00:14:44.855342 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:14:44.856521 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:14:44.859056 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:14:44.860245 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:14:44.866193 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:14:44.866315 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 11 00:14:44.885389 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:14:44.948106 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:14:44.948239 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 11 00:14:44.952333 ignition[1012]: INFO : Ignition 2.19.0 Jul 11 00:14:44.952333 ignition[1012]: INFO : Stage: umount Jul 11 00:14:44.952333 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:14:44.952333 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:14:44.952333 ignition[1012]: INFO : umount: umount passed Jul 11 00:14:44.952333 ignition[1012]: INFO : Ignition finished successfully Jul 11 00:14:44.958709 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:14:44.959663 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 11 00:14:44.962370 systemd[1]: Stopped target network.target - Network. Jul 11 00:14:44.964175 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:14:44.964274 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 11 00:14:44.967324 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:14:44.967388 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 11 00:14:44.969427 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:14:44.970382 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 11 00:14:44.972453 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 11 00:14:44.973479 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 11 00:14:44.976623 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:14:44.976700 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 11 00:14:44.979944 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 11 00:14:44.982249 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 11 00:14:44.993765 systemd-networkd[773]: eth0: DHCPv6 lease lost Jul 11 00:14:44.994706 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:14:44.994963 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 11 00:14:44.996667 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 11 00:14:44.996782 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:14:44.999928 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:14:45.000122 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 11 00:14:45.001090 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:14:45.001150 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:14:45.012791 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 11 00:14:45.014746 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:14:45.014819 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:14:45.015246 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:14:45.015300 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:14:45.015574 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:14:45.015622 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 11 00:14:45.016155 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:14:45.027827 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:14:45.028930 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 11 00:14:45.033435 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:14:45.034559 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:14:45.037434 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:14:45.038574 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 11 00:14:45.040748 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:14:45.040798 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 11 00:14:45.043886 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:14:45.044876 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:14:45.047046 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:14:45.047113 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 11 00:14:45.050159 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:14:45.050221 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:14:45.068862 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 11 00:14:45.090832 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 00:14:45.090921 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:14:45.093392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:14:45.094770 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:14:45.098294 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:14:45.099508 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 11 00:14:45.102544 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 11 00:14:45.105588 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 11 00:14:45.117056 systemd[1]: Switching root. Jul 11 00:14:45.151414 systemd-journald[192]: Journal stopped Jul 11 00:14:47.844780 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jul 11 00:14:47.844884 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:14:47.844909 kernel: SELinux: policy capability open_perms=1 Jul 11 00:14:47.844926 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:14:47.844942 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:14:47.844958 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:14:47.844974 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:14:47.844989 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:14:47.845024 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:14:47.845040 kernel: audit: type=1403 audit(1752192886.678:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:14:47.845063 systemd[1]: Successfully loaded SELinux policy in 47.507ms. Jul 11 00:14:47.845094 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.014ms. Jul 11 00:14:47.845113 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:14:47.845130 systemd[1]: Detected virtualization kvm. Jul 11 00:14:47.845147 systemd[1]: Detected architecture x86-64. Jul 11 00:14:47.845163 systemd[1]: Detected first boot. Jul 11 00:14:47.845179 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:14:47.845203 zram_generator::config[1075]: No configuration found. Jul 11 00:14:47.845225 systemd[1]: Populated /etc with preset unit settings. Jul 11 00:14:47.845241 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:14:47.845258 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
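"Detected virtualization kvm" and "Initializing machine ID from VM UUID" describe systemd's first-boot identity setup: the machine ID is derived from the hypervisor-supplied DMI product UUID and persisted later by systemd-machine-id-commit.service. A quick sketch of inspecting the same facts with stock tools:

  systemd-detect-virt                  # prints "kvm" on this guest
  cat /sys/class/dmi/id/product_uuid   # VM UUID the ID is derived from (root only)
  cat /etc/machine-id                  # the resulting 128-bit machine ID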
Jul 11 00:14:47.845276 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 11 00:14:47.845293 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 11 00:14:47.845309 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 11 00:14:47.845325 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 11 00:14:47.845351 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 11 00:14:47.845368 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 11 00:14:47.845398 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 11 00:14:47.845415 systemd[1]: Created slice user.slice - User and Session Slice. Jul 11 00:14:47.845431 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:14:47.845447 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:14:47.845464 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 11 00:14:47.845488 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 11 00:14:47.845510 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 11 00:14:47.845535 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:14:47.845576 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 11 00:14:47.845605 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:14:47.845635 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 11 00:14:47.845662 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:14:47.845732 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:14:47.845764 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:14:47.845793 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:14:47.845820 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 11 00:14:47.845839 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 11 00:14:47.845858 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 11 00:14:47.845874 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 11 00:14:47.845894 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:14:47.845911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:14:47.845927 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:14:47.845944 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 11 00:14:47.845960 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 11 00:14:47.845977 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 11 00:14:47.846008 systemd[1]: Mounting media.mount - External Media Directory... Jul 11 00:14:47.846025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:14:47.846058 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jul 11 00:14:47.846079 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 11 00:14:47.846102 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 11 00:14:47.846119 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 11 00:14:47.846136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:14:47.846152 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:14:47.846179 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 11 00:14:47.846201 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:14:47.846218 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:14:47.846236 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:14:47.846252 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 11 00:14:47.846268 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:14:47.846286 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:14:47.846302 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 11 00:14:47.846325 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 11 00:14:47.846339 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:14:47.846353 kernel: fuse: init (API version 7.39) Jul 11 00:14:47.846368 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:14:47.846382 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 00:14:47.846406 kernel: loop: module loaded Jul 11 00:14:47.846421 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 11 00:14:47.846435 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:14:47.846451 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:14:47.846475 kernel: ACPI: bus type drm_connector registered Jul 11 00:14:47.846514 systemd-journald[1152]: Collecting audit messages is disabled. Jul 11 00:14:47.846540 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 11 00:14:47.846555 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 11 00:14:47.846570 systemd-journald[1152]: Journal started Jul 11 00:14:47.846609 systemd-journald[1152]: Runtime Journal (/run/log/journal/38d3b60b59544f1482d97b44fc43a937) is 6.0M, max 48.3M, 42.2M free. Jul 11 00:14:47.848742 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:14:47.850919 systemd[1]: Mounted media.mount - External Media Directory. Jul 11 00:14:47.852057 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 11 00:14:47.853252 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 11 00:14:47.854535 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 11 00:14:47.855971 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
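The several modprobe@<name>.service jobs above are instances of systemd's modprobe@.service template. Roughly, the upstream template looks like the sketch below, so modprobe@fuse.service substitutes "fuse" for %i and loads that module, matching the "fuse: init (API version 7.39)" kernel line:

  # upstream systemd template (approximate):
  [Service]
  Type=oneshot
  ExecStart=-/sbin/modprobe -abq %i

  # equivalent one-off:
  systemctl start modprobe@fuse.service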
Jul 11 00:14:47.857668 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:14:47.857914 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 11 00:14:47.879937 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:14:47.880158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:14:47.881779 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:14:47.882000 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:14:47.883436 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:14:47.883660 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:14:47.885358 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:14:47.885617 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 11 00:14:47.887080 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:14:47.887527 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:14:47.889186 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:14:47.890885 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:14:47.892838 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 11 00:14:47.905669 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 00:14:47.941853 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 11 00:14:47.944699 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 11 00:14:47.945868 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:14:47.950852 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 11 00:14:47.954945 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 11 00:14:47.956665 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:14:47.959831 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 11 00:14:47.961124 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:14:47.962890 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:14:47.968915 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 00:14:47.973765 systemd-journald[1152]: Time spent on flushing to /var/log/journal/38d3b60b59544f1482d97b44fc43a937 is 18.100ms for 981 entries. Jul 11 00:14:47.973765 systemd-journald[1152]: System Journal (/var/log/journal/38d3b60b59544f1482d97b44fc43a937) is 8.0M, max 195.6M, 187.6M free. Jul 11 00:14:48.150834 systemd-journald[1152]: Received client request to flush runtime journal. Jul 11 00:14:47.974936 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 11 00:14:47.976287 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 11 00:14:47.995494 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 11 00:14:48.024702 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:14:48.044905 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 11 00:14:48.054579 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 11 00:14:48.056490 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jul 11 00:14:48.056508 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jul 11 00:14:48.063659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:14:48.130130 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 11 00:14:48.135242 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 11 00:14:48.150294 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 11 00:14:48.159033 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 11 00:14:48.161137 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 11 00:14:48.189659 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 11 00:14:48.202580 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:14:48.229073 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Jul 11 00:14:48.229103 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Jul 11 00:14:48.237445 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:14:48.807137 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 11 00:14:48.821940 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:14:48.852685 systemd-udevd[1239]: Using default interface naming scheme 'v255'. Jul 11 00:14:48.872587 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:14:48.889266 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:14:48.905851 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 11 00:14:48.965806 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 11 00:14:48.975440 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jul 11 00:14:48.978863 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1250) Jul 11 00:14:49.020712 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 11 00:14:49.032787 kernel: ACPI: button: Power Button [PWRF] Jul 11 00:14:49.041466 systemd-networkd[1244]: lo: Link UP Jul 11 00:14:49.041959 systemd-networkd[1244]: lo: Gained carrier Jul 11 00:14:49.044745 systemd-networkd[1244]: Enumeration completed Jul 11 00:14:49.046005 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:14:49.046405 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:14:49.046410 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 11 00:14:49.047756 systemd-networkd[1244]: eth0: Link UP Jul 11 00:14:49.047767 systemd-networkd[1244]: eth0: Gained carrier Jul 11 00:14:49.047782 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:14:49.058928 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 11 00:14:49.062880 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:14:49.070744 systemd-networkd[1244]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:14:49.094596 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 11 00:14:49.125701 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 11 00:14:49.125999 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 11 00:14:49.127074 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 11 00:14:49.127297 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 11 00:14:49.179181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:14:49.187565 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 00:14:49.202068 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:14:49.202479 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:14:49.203703 kernel: mousedev: PS/2 mouse device common for all mice Jul 11 00:14:49.217908 kernel: kvm_amd: TSC scaling supported Jul 11 00:14:49.217975 kernel: kvm_amd: Nested Virtualization enabled Jul 11 00:14:49.218019 kernel: kvm_amd: Nested Paging enabled Jul 11 00:14:49.218898 kernel: kvm_amd: LBR virtualization supported Jul 11 00:14:49.218972 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 11 00:14:49.219872 kernel: kvm_amd: Virtual GIF supported Jul 11 00:14:49.231004 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:14:49.241761 kernel: EDAC MC: Ver: 3.0.0 Jul 11 00:14:49.277171 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 11 00:14:49.287848 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 11 00:14:49.302826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:14:49.305333 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:14:49.342731 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 11 00:14:49.344466 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:14:49.356129 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 11 00:14:49.363008 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:14:49.403429 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 11 00:14:49.405257 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 11 00:14:49.406819 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
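Both the initrd and the real root match eth0 against Flatcar's catch-all /usr/lib/systemd/network/zz-default.network; the "potentially unpredictable interface name" warning only notes that the match is name-based. The effective shape of that unit is approximately the following (a simplified sketch; the shipped file carries additional options):

  [Match]
  Name=*

  [Network]
  DHCP=yes

which is what yields the "DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1" lease recorded above.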
Jul 11 00:14:49.406854 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:14:49.408017 systemd[1]: Reached target machines.target - Containers. Jul 11 00:14:49.410401 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 11 00:14:49.427924 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 11 00:14:49.430784 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 11 00:14:49.431973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:14:49.433412 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 11 00:14:49.436834 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 11 00:14:49.441348 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 11 00:14:49.444619 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 11 00:14:49.460699 kernel: loop0: detected capacity change from 0 to 142488 Jul 11 00:14:49.465031 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 11 00:14:49.477451 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:14:49.479539 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 11 00:14:49.492143 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:14:49.529740 kernel: loop1: detected capacity change from 0 to 140768 Jul 11 00:14:49.575700 kernel: loop2: detected capacity change from 0 to 221472 Jul 11 00:14:49.613699 kernel: loop3: detected capacity change from 0 to 142488 Jul 11 00:14:49.630716 kernel: loop4: detected capacity change from 0 to 140768 Jul 11 00:14:49.643695 kernel: loop5: detected capacity change from 0 to 221472 Jul 11 00:14:49.652449 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 11 00:14:49.653420 (sd-merge)[1313]: Merged extensions into '/usr'. Jul 11 00:14:49.658648 systemd[1]: Reloading requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... Jul 11 00:14:49.658787 systemd[1]: Reloading... Jul 11 00:14:49.717750 zram_generator::config[1341]: No configuration found. Jul 11 00:14:49.859894 ldconfig[1297]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:14:49.914407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:14:49.985926 systemd[1]: Reloading finished in 326 ms. Jul 11 00:14:50.007811 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 11 00:14:50.009473 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 11 00:14:50.030904 systemd[1]: Starting ensure-sysext.service... Jul 11 00:14:50.033563 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:14:50.038579 systemd[1]: Reloading requested from client PID 1385 ('systemctl') (unit ensure-sysext.service)... Jul 11 00:14:50.038596 systemd[1]: Reloading... 
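The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, the kubernetes one being the /etc/extensions/kubernetes.raw link written during the Ignition files stage. A sketch of inspecting the same machinery on a running host with the stock tool:

  systemd-sysext status    # which hierarchies (/usr, /opt) have extensions merged
  ls -l /etc/extensions    # kubernetes.raw -> /opt/extensions/kubernetes/...
  systemd-sysext refresh   # re-merge after adding or removing an image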
Jul 11 00:14:50.069159 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:14:50.069550 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 11 00:14:50.072450 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:14:50.088059 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Jul 11 00:14:50.088162 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Jul 11 00:14:50.091855 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:14:50.091868 systemd-tmpfiles[1386]: Skipping /boot Jul 11 00:14:50.094981 zram_generator::config[1412]: No configuration found. Jul 11 00:14:50.105825 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:14:50.105840 systemd-tmpfiles[1386]: Skipping /boot Jul 11 00:14:50.231160 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:14:50.303945 systemd[1]: Reloading finished in 264 ms. Jul 11 00:14:50.322275 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:14:50.339367 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:14:50.342400 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 11 00:14:50.345295 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 11 00:14:50.350930 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:14:50.355987 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 11 00:14:50.362728 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:14:50.362970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:14:50.366005 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:14:50.368912 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:14:50.377007 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:14:50.378284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:14:50.378456 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:14:50.379615 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:14:50.379916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:14:50.382096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:14:50.382531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:14:50.387694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:14:50.388288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
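The "Duplicate line for path ..." warnings above are benign: two tmpfiles.d fragments declare the same path, and systemd-tmpfiles keeps the first one it parses. For reference, a tmpfiles.d line follows the Type Path Mode User Group Age Argument layout; a sketch of a journal-directory entry:

  # tmpfiles.d syntax: Type Path Mode User Group Age Argument
  d /var/log/journal 2755 root systemd-journal - -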
Jul 11 00:14:50.393943 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:14:50.401354 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:14:50.403011 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:14:50.403155 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:14:50.405473 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 11 00:14:50.412093 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:14:50.412460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:14:50.414552 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:14:50.414796 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:14:50.416874 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:14:50.417209 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:14:50.443664 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:14:50.444110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:14:50.449557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:14:50.460073 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:14:50.466017 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:14:50.470628 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:14:50.472171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:14:50.476744 augenrules[1500]: No rules Jul 11 00:14:50.478219 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 11 00:14:50.511200 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:14:50.511912 systemd-networkd[1244]: eth0: Gained IPv6LL Jul 11 00:14:50.513472 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:14:50.515580 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:14:50.517779 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 11 00:14:50.520357 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 11 00:14:50.522086 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:14:50.522317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:14:50.523903 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:14:50.524116 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:14:50.525612 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:14:50.525843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:14:50.527700 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 11 00:14:50.527990 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:14:50.535098 systemd[1]: Finished ensure-sysext.service. Jul 11 00:14:50.536770 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 11 00:14:50.551250 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:14:50.551364 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:14:50.554390 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 11 00:14:50.555632 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:14:50.626257 systemd-resolved[1464]: Positive Trust Anchors: Jul 11 00:14:50.626279 systemd-resolved[1464]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:14:50.626320 systemd-resolved[1464]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:14:50.630250 systemd-resolved[1464]: Defaulting to hostname 'linux'. Jul 11 00:14:50.632668 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:14:50.634000 systemd[1]: Reached target network.target - Network. Jul 11 00:14:50.635123 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:14:50.636349 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:14:50.644920 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 11 00:14:50.646327 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:14:50.647483 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 11 00:14:50.648763 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 11 00:14:50.650001 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 11 00:14:50.651183 systemd-timesyncd[1528]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 00:14:50.651231 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 00:14:50.651235 systemd-timesyncd[1528]: Initial clock synchronization to Fri 2025-07-11 00:14:50.795671 UTC. Jul 11 00:14:50.651269 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:14:50.652211 systemd[1]: Reached target time-set.target - System Time Set. Jul 11 00:14:50.653429 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 11 00:14:50.654590 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 11 00:14:50.655790 systemd[1]: Reached target timers.target - Timer Units. 
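The "Positive Trust Anchors" entry is the DNSSEC root-zone DS record that systemd-resolved builds in by default. It can be overridden or extended with drop-in files; a sketch using the same record:

  # /etc/dnssec-trust-anchors.d/root.positive
  . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d

The negative anchors (the home.arpa, *.in-addr.arpa, local, ... list above) mark domains for which DNSSEC validation is skipped.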
Jul 11 00:14:50.657627 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 11 00:14:50.660648 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 00:14:50.663087 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 11 00:14:50.670962 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 11 00:14:50.672090 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:14:50.673133 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:14:50.674449 systemd[1]: System is tainted: cgroupsv1 Jul 11 00:14:50.674498 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:14:50.674530 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:14:50.676154 systemd[1]: Starting containerd.service - containerd container runtime... Jul 11 00:14:50.678409 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:14:50.680999 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 00:14:50.685776 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 00:14:50.688514 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 00:14:50.689942 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 11 00:14:50.696996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:14:50.698113 jq[1536]: false Jul 11 00:14:50.704992 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 00:14:50.708288 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:14:50.712790 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 11 00:14:50.716692 extend-filesystems[1539]: Found loop3 Jul 11 00:14:50.716692 extend-filesystems[1539]: Found loop4 Jul 11 00:14:50.716692 extend-filesystems[1539]: Found loop5 Jul 11 00:14:50.716692 extend-filesystems[1539]: Found sr0 Jul 11 00:14:50.716692 extend-filesystems[1539]: Found vda Jul 11 00:14:50.716692 extend-filesystems[1539]: Found vda1 Jul 11 00:14:50.716692 extend-filesystems[1539]: Found vda2 Jul 11 00:14:50.716692 extend-filesystems[1539]: Found vda3 Jul 11 00:14:50.716692 extend-filesystems[1539]: Found usr Jul 11 00:14:50.716692 extend-filesystems[1539]: Found vda4 Jul 11 00:14:50.716692 extend-filesystems[1539]: Found vda6 Jul 11 00:14:50.716692 extend-filesystems[1539]: Found vda7 Jul 11 00:14:50.716692 extend-filesystems[1539]: Found vda9 Jul 11 00:14:50.716692 extend-filesystems[1539]: Checking size of /dev/vda9 Jul 11 00:14:50.719193 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 00:14:50.730378 dbus-daemon[1535]: [system] SELinux support is enabled Jul 11 00:14:50.728152 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 11 00:14:50.738855 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 00:14:50.740323 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:14:50.743013 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 11 00:14:50.748645 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 00:14:50.750634 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 11 00:14:50.759184 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:14:50.759276 jq[1564]: true Jul 11 00:14:50.759633 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 00:14:50.761392 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:14:50.761738 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 00:14:50.764630 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:14:50.765014 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 11 00:14:50.827267 update_engine[1560]: I20250711 00:14:50.769315 1560 main.cc:92] Flatcar Update Engine starting Jul 11 00:14:50.827267 update_engine[1560]: I20250711 00:14:50.770640 1560 update_check_scheduler.cc:74] Next update check in 8m3s Jul 11 00:14:50.829921 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:14:50.838033 extend-filesystems[1539]: Resized partition /dev/vda9 Jul 11 00:14:50.924224 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:14:50.924659 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 00:14:50.929856 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:14:50.929887 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 00:14:50.932702 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1258) Jul 11 00:14:50.942691 extend-filesystems[1578]: resize2fs 1.47.1 (20-May-2024) Jul 11 00:14:50.946723 systemd[1]: Started update-engine.service - Update Engine. Jul 11 00:14:50.952962 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:14:50.956242 jq[1579]: true Jul 11 00:14:50.959902 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 00:14:50.968225 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:14:50.968615 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:14:50.990642 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:14:50.991395 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:14:51.004909 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 00:14:51.006910 tar[1572]: linux-amd64/helm Jul 11 00:14:51.040522 systemd-logind[1558]: Watching system buttons on /dev/input/event1 (Power Button) Jul 11 00:14:51.041069 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 11 00:14:51.041537 systemd-logind[1558]: New seat seat0. 
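Above, extend-filesystems.service begins growing the root filesystem on /dev/vda9 online (resize2fs 1.47.1, 553472 -> 1864699 blocks; the resize completes just below). A rough manual equivalent of that flow, sketched here rather than taken from what the service literally executes, assuming the device names from the log:

    # Grow partition 9 of /dev/vda into free space (growpart is from cloud-utils),
    # then resize the mounted ext4 filesystem online to fill it.
    growpart /dev/vda 9
    resize2fs /dev/vda9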
Jul 11 00:14:51.225360 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:14:51.225909 systemd[1]: Started systemd-logind.service - User Login Management. Jul 11 00:14:51.260322 extend-filesystems[1578]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:14:51.260322 extend-filesystems[1578]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:14:51.260322 extend-filesystems[1578]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:14:51.266122 extend-filesystems[1539]: Resized filesystem in /dev/vda9 Jul 11 00:14:51.262635 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:14:51.263077 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 00:14:51.283991 bash[1617]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:14:51.281444 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 00:14:51.284297 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:14:51.285659 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 00:14:51.307557 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:14:51.341368 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 00:14:51.698312 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 00:14:51.735085 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:14:51.735502 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 00:14:51.778737 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 00:14:51.784574 containerd[1591]: time="2025-07-11T00:14:51.783806170Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 11 00:14:51.847215 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 00:14:51.850510 containerd[1591]: time="2025-07-11T00:14:51.850155534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:14:51.852904 containerd[1591]: time="2025-07-11T00:14:51.852866626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:14:51.852994 containerd[1591]: time="2025-07-11T00:14:51.852969225Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:14:51.853084 containerd[1591]: time="2025-07-11T00:14:51.853068110Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:14:51.853465 containerd[1591]: time="2025-07-11T00:14:51.853445195Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 11 00:14:51.853539 containerd[1591]: time="2025-07-11T00:14:51.853526052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 11 00:14:51.853707 containerd[1591]: time="2025-07-11T00:14:51.853672074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:14:51.853764 containerd[1591]: time="2025-07-11T00:14:51.853751543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:14:51.854449 containerd[1591]: time="2025-07-11T00:14:51.854393728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:14:51.854589 containerd[1591]: time="2025-07-11T00:14:51.854557941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:14:51.854778 containerd[1591]: time="2025-07-11T00:14:51.854725430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:14:51.854873 containerd[1591]: time="2025-07-11T00:14:51.854858433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:14:51.855048 containerd[1591]: time="2025-07-11T00:14:51.855031666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:14:51.855403 containerd[1591]: time="2025-07-11T00:14:51.855381897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:14:51.855659 containerd[1591]: time="2025-07-11T00:14:51.855614448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:14:51.855778 containerd[1591]: time="2025-07-11T00:14:51.855760002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:14:51.855970 containerd[1591]: time="2025-07-11T00:14:51.855952263Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:14:51.856080 containerd[1591]: time="2025-07-11T00:14:51.856065973Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:14:51.886491 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 00:14:51.899243 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 11 00:14:51.900948 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 00:14:52.004050 tar[1572]: linux-amd64/LICENSE Jul 11 00:14:52.004172 tar[1572]: linux-amd64/README.md Jul 11 00:14:52.020374 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:14:52.100963 containerd[1591]: time="2025-07-11T00:14:52.100833716Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:14:52.101099 containerd[1591]: time="2025-07-11T00:14:52.100995696Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:14:52.101099 containerd[1591]: time="2025-07-11T00:14:52.101025829Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jul 11 00:14:52.101099 containerd[1591]: time="2025-07-11T00:14:52.101046005Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 11 00:14:52.101099 containerd[1591]: time="2025-07-11T00:14:52.101074020Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 11 00:14:52.101348 containerd[1591]: time="2025-07-11T00:14:52.101297822Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:14:52.102023 containerd[1591]: time="2025-07-11T00:14:52.101988480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 11 00:14:52.102246 containerd[1591]: time="2025-07-11T00:14:52.102208017Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 11 00:14:52.102246 containerd[1591]: time="2025-07-11T00:14:52.102235289Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 11 00:14:52.102348 containerd[1591]: time="2025-07-11T00:14:52.102255515Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 11 00:14:52.102348 containerd[1591]: time="2025-07-11T00:14:52.102279387Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:14:52.102348 containerd[1591]: time="2025-07-11T00:14:52.102306140Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:14:52.102348 containerd[1591]: time="2025-07-11T00:14:52.102330989Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 11 00:14:52.102461 containerd[1591]: time="2025-07-11T00:14:52.102355818Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:14:52.102461 containerd[1591]: time="2025-07-11T00:14:52.102381837Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:14:52.102461 containerd[1591]: time="2025-07-11T00:14:52.102400638Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:14:52.102461 containerd[1591]: time="2025-07-11T00:14:52.102425406Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:14:52.102461 containerd[1591]: time="2025-07-11T00:14:52.102447221Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:14:52.102601 containerd[1591]: time="2025-07-11T00:14:52.102478392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.102601 containerd[1591]: time="2025-07-11T00:14:52.102504982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.102601 containerd[1591]: time="2025-07-11T00:14:52.102521941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.102601 containerd[1591]: time="2025-07-11T00:14:52.102539033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jul 11 00:14:52.102601 containerd[1591]: time="2025-07-11T00:14:52.102560574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.102601 containerd[1591]: time="2025-07-11T00:14:52.102578357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.102800 containerd[1591]: time="2025-07-11T00:14:52.102603685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.102800 containerd[1591]: time="2025-07-11T00:14:52.102636759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.102800 containerd[1591]: time="2025-07-11T00:14:52.102657220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.102800 containerd[1591]: time="2025-07-11T00:14:52.102679371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.102800 containerd[1591]: time="2025-07-11T00:14:52.102722370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.102800 containerd[1591]: time="2025-07-11T00:14:52.102748564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.102800 containerd[1591]: time="2025-07-11T00:14:52.102770989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.102800 containerd[1591]: time="2025-07-11T00:14:52.102791634Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 11 00:14:52.103036 containerd[1591]: time="2025-07-11T00:14:52.102825156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.103036 containerd[1591]: time="2025-07-11T00:14:52.102842950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.103036 containerd[1591]: time="2025-07-11T00:14:52.102871138Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:14:52.103036 containerd[1591]: time="2025-07-11T00:14:52.102980723Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:14:52.103036 containerd[1591]: time="2025-07-11T00:14:52.103011803Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 11 00:14:52.103036 containerd[1591]: time="2025-07-11T00:14:52.103028171Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:14:52.103190 containerd[1591]: time="2025-07-11T00:14:52.103044704Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 11 00:14:52.103190 containerd[1591]: time="2025-07-11T00:14:52.103058813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.103190 containerd[1591]: time="2025-07-11T00:14:52.103084018Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jul 11 00:14:52.103190 containerd[1591]: time="2025-07-11T00:14:52.103101272Z" level=info msg="NRI interface is disabled by configuration." Jul 11 00:14:52.103190 containerd[1591]: time="2025-07-11T00:14:52.103115544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 11 00:14:52.103623 containerd[1591]: time="2025-07-11T00:14:52.103520313Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:14:52.103623 containerd[1591]: time="2025-07-11T00:14:52.103617326Z" level=info msg="Connect containerd service" Jul 11 00:14:52.103866 containerd[1591]: time="2025-07-11T00:14:52.103671493Z" level=info msg="using legacy CRI server" Jul 11 00:14:52.103866 containerd[1591]: time="2025-07-11T00:14:52.103682588Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:14:52.103923 containerd[1591]: time="2025-07-11T00:14:52.103896100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:14:52.104702 
containerd[1591]: time="2025-07-11T00:14:52.104649820Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:14:52.105282 containerd[1591]: time="2025-07-11T00:14:52.105147938Z" level=info msg="Start subscribing containerd event" Jul 11 00:14:52.105282 containerd[1591]: time="2025-07-11T00:14:52.105252281Z" level=info msg="Start recovering state" Jul 11 00:14:52.105658 containerd[1591]: time="2025-07-11T00:14:52.105378754Z" level=info msg="Start event monitor" Jul 11 00:14:52.105658 containerd[1591]: time="2025-07-11T00:14:52.105417142Z" level=info msg="Start snapshots syncer" Jul 11 00:14:52.105658 containerd[1591]: time="2025-07-11T00:14:52.105435435Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:14:52.105658 containerd[1591]: time="2025-07-11T00:14:52.105445035Z" level=info msg="Start streaming server" Jul 11 00:14:52.106008 containerd[1591]: time="2025-07-11T00:14:52.105983444Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:14:52.106349 containerd[1591]: time="2025-07-11T00:14:52.106329994Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:14:52.106605 containerd[1591]: time="2025-07-11T00:14:52.106578146Z" level=info msg="containerd successfully booted in 0.324971s" Jul 11 00:14:52.107573 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:14:53.105405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:14:53.107127 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 00:14:53.110402 systemd[1]: Startup finished in 13.219s (kernel) + 6.473s (userspace) = 19.692s. Jul 11 00:14:53.111520 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:14:53.879642 kubelet[1670]: E0711 00:14:53.879521 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:14:53.885679 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:14:53.886046 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:14:59.596998 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 00:14:59.609068 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:48554.service - OpenSSH per-connection server daemon (10.0.0.1:48554). Jul 11 00:14:59.649303 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 48554 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:14:59.652098 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:59.661552 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:14:59.671985 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:14:59.674127 systemd-logind[1558]: New session 1 of user core. Jul 11 00:14:59.686330 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
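The containerd error near the top of this stretch ("no network config found in /etc/cni/net.d") is expected on first boot: the CRI plugin's CNI syncer has nothing to load until a network config is installed. Purely as an illustration of what it looks for, a minimal bridge-plugin conflist; the file name, network name, and subnet below are assumed placeholders, not values from this system:

    # Hypothetical example: drop a bridge CNI config into the directory the CRI
    # plugin watches (NetworkPluginConfDir=/etc/cni/net.d in the config dump above).
    cat >/etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.22.0.0/16",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        }
      ]
    }
    EOF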
Jul 11 00:14:59.693127 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:14:59.700242 (systemd)[1689]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:14:59.837736 systemd[1689]: Queued start job for default target default.target. Jul 11 00:14:59.838183 systemd[1689]: Created slice app.slice - User Application Slice. Jul 11 00:14:59.838208 systemd[1689]: Reached target paths.target - Paths. Jul 11 00:14:59.838221 systemd[1689]: Reached target timers.target - Timers. Jul 11 00:14:59.856859 systemd[1689]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:14:59.865847 systemd[1689]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:14:59.865936 systemd[1689]: Reached target sockets.target - Sockets. Jul 11 00:14:59.865952 systemd[1689]: Reached target basic.target - Basic System. Jul 11 00:14:59.866008 systemd[1689]: Reached target default.target - Main User Target. Jul 11 00:14:59.866046 systemd[1689]: Startup finished in 157ms. Jul 11 00:14:59.866733 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:14:59.868807 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:14:59.931022 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:48560.service - OpenSSH per-connection server daemon (10.0.0.1:48560). Jul 11 00:14:59.960374 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 48560 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:14:59.962169 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:59.967766 systemd-logind[1558]: New session 2 of user core. Jul 11 00:14:59.977998 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 00:15:00.034760 sshd[1701]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:00.042956 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:48568.service - OpenSSH per-connection server daemon (10.0.0.1:48568). Jul 11 00:15:00.043566 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:48560.service: Deactivated successfully. Jul 11 00:15:00.047377 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:15:00.048189 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:15:00.049974 systemd-logind[1558]: Removed session 2. Jul 11 00:15:00.074580 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 48568 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:15:00.076452 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:00.083048 systemd-logind[1558]: New session 3 of user core. Jul 11 00:15:00.093306 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:15:00.148498 sshd[1706]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:00.160218 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:48572.service - OpenSSH per-connection server daemon (10.0.0.1:48572). Jul 11 00:15:00.160976 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:48568.service: Deactivated successfully. Jul 11 00:15:00.165749 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:15:00.166782 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:15:00.171250 systemd-logind[1558]: Removed session 3. 
Jul 11 00:15:00.194024 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 48572 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:15:00.197082 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:00.203633 systemd-logind[1558]: New session 4 of user core. Jul 11 00:15:00.211217 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:15:00.270570 sshd[1714]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:00.280978 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:48580.service - OpenSSH per-connection server daemon (10.0.0.1:48580). Jul 11 00:15:00.281502 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:48572.service: Deactivated successfully. Jul 11 00:15:00.285119 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:15:00.286097 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:15:00.288609 systemd-logind[1558]: Removed session 4. Jul 11 00:15:00.322581 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 48580 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:15:00.324504 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:00.329293 systemd-logind[1558]: New session 5 of user core. Jul 11 00:15:00.339991 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:15:00.400201 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:15:00.400595 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:15:00.417985 sudo[1729]: pam_unix(sudo:session): session closed for user root Jul 11 00:15:00.420035 sshd[1722]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:00.435024 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:48584.service - OpenSSH per-connection server daemon (10.0.0.1:48584). Jul 11 00:15:00.435675 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:48580.service: Deactivated successfully. Jul 11 00:15:00.438043 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:15:00.438864 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:15:00.440310 systemd-logind[1558]: Removed session 5. Jul 11 00:15:00.465376 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 48584 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:15:00.467237 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:00.472032 systemd-logind[1558]: New session 6 of user core. Jul 11 00:15:00.481997 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 00:15:00.539765 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:15:00.540259 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:15:00.544495 sudo[1739]: pam_unix(sudo:session): session closed for user root Jul 11 00:15:00.551553 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:15:00.551963 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:15:00.572032 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 11 00:15:00.574235 auditctl[1742]: No rules Jul 11 00:15:00.575714 systemd[1]: audit-rules.service: Deactivated successfully. 
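The sudo session above removes the default audit rule files, and the subsequent audit-rules restart finds nothing to load ("auditctl: No rules"; augenrules reports the same just below). For reference, a sketch using the standard auditd userspace tools, not commands taken from this log:

    # List the audit rules currently loaded in the kernel (empty here).
    auditctl -l
    # Recompile /etc/audit/rules.d/*.rules into a single rule set and load it.
    augenrules --load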
Jul 11 00:15:00.576131 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 11 00:15:00.578451 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:15:00.615178 augenrules[1761]: No rules Jul 11 00:15:00.617616 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:15:00.619772 sudo[1738]: pam_unix(sudo:session): session closed for user root Jul 11 00:15:00.622200 sshd[1732]: pam_unix(sshd:session): session closed for user core Jul 11 00:15:00.638166 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:48586.service - OpenSSH per-connection server daemon (10.0.0.1:48586). Jul 11 00:15:00.638806 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:48584.service: Deactivated successfully. Jul 11 00:15:00.641791 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:15:00.642900 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:15:00.644293 systemd-logind[1558]: Removed session 6. Jul 11 00:15:00.671520 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 48586 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:15:00.673647 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:15:00.678259 systemd-logind[1558]: New session 7 of user core. Jul 11 00:15:00.688142 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 00:15:00.744554 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:15:00.744934 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:15:02.971961 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 00:15:02.972292 (dockerd)[1792]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:15:03.633300 dockerd[1792]: time="2025-07-11T00:15:03.633193266Z" level=info msg="Starting up" Jul 11 00:15:03.984384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:15:04.001859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:15:04.266309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:15:04.271558 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:15:04.420019 kubelet[1827]: E0711 00:15:04.419932 1827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:15:04.426966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:15:04.427274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:15:06.694509 dockerd[1792]: time="2025-07-11T00:15:06.694448609Z" level=info msg="Loading containers: start." Jul 11 00:15:06.982709 kernel: Initializing XFRM netlink socket Jul 11 00:15:07.064634 systemd-networkd[1244]: docker0: Link UP Jul 11 00:15:07.254846 dockerd[1792]: time="2025-07-11T00:15:07.254778172Z" level=info msg="Loading containers: done." 
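The kubelet keeps exiting because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written during `kubeadm init`/`kubeadm join`. Purely to illustrate what the kubelet is looking for, a minimal hand-written KubeletConfiguration sketch; every value is an assumption, not something recovered from this host:

    # Hypothetical minimal /var/lib/kubelet/config.yaml (kubeadm normally
    # generates this file; these fields are illustrative defaults only).
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs            # must match the container runtime's driver
    staticPodPath: /etc/kubernetes/manifests
    EOF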
Jul 11 00:15:07.275720 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2579308917-merged.mount: Deactivated successfully. Jul 11 00:15:07.357335 dockerd[1792]: time="2025-07-11T00:15:07.357162994Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:15:07.357489 dockerd[1792]: time="2025-07-11T00:15:07.357339074Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 11 00:15:07.357553 dockerd[1792]: time="2025-07-11T00:15:07.357524512Z" level=info msg="Daemon has completed initialization" Jul 11 00:15:07.978880 dockerd[1792]: time="2025-07-11T00:15:07.978782552Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:15:07.979073 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:15:09.017723 containerd[1591]: time="2025-07-11T00:15:09.017617666Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 11 00:15:10.555499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114258510.mount: Deactivated successfully. Jul 11 00:15:14.484454 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 00:15:14.544141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:15:14.742335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:15:14.748513 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:15:14.854860 kubelet[2028]: E0711 00:15:14.854799 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:15:14.859573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:15:14.859907 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 11 00:15:15.965658 containerd[1591]: time="2025-07-11T00:15:15.965585299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:16.075798 containerd[1591]: time="2025-07-11T00:15:16.075722077Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 11 00:15:16.114513 containerd[1591]: time="2025-07-11T00:15:16.114417865Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:16.219040 containerd[1591]: time="2025-07-11T00:15:16.218333988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:16.219703 containerd[1591]: time="2025-07-11T00:15:16.219630764Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 7.201903552s" Jul 11 00:15:16.219780 containerd[1591]: time="2025-07-11T00:15:16.219716631Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 11 00:15:16.220865 containerd[1591]: time="2025-07-11T00:15:16.220827066Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 11 00:15:21.988972 containerd[1591]: time="2025-07-11T00:15:21.988887751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:22.076054 containerd[1591]: time="2025-07-11T00:15:22.075973578Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 11 00:15:22.121547 containerd[1591]: time="2025-07-11T00:15:22.121487271Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:22.190795 containerd[1591]: time="2025-07-11T00:15:22.190728715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:22.191777 containerd[1591]: time="2025-07-11T00:15:22.191732150Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 5.970872665s" Jul 11 00:15:22.191777 containerd[1591]: time="2025-07-11T00:15:22.191768198Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 11 
00:15:22.192316 containerd[1591]: time="2025-07-11T00:15:22.192283805Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 11 00:15:24.984354 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 11 00:15:24.995939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:15:25.169786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:15:25.174839 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:15:25.254636 kubelet[2054]: E0711 00:15:25.254450 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:15:25.258881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:15:25.259278 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:15:27.562080 containerd[1591]: time="2025-07-11T00:15:27.561988208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:27.593778 containerd[1591]: time="2025-07-11T00:15:27.593706107Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 11 00:15:27.642783 containerd[1591]: time="2025-07-11T00:15:27.642695784Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:27.736922 containerd[1591]: time="2025-07-11T00:15:27.736809135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:27.738715 containerd[1591]: time="2025-07-11T00:15:27.738640543Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 5.546324488s" Jul 11 00:15:27.739447 containerd[1591]: time="2025-07-11T00:15:27.738708646Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 11 00:15:27.740328 containerd[1591]: time="2025-07-11T00:15:27.740305461Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 11 00:15:34.842228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2014260616.mount: Deactivated successfully. Jul 11 00:15:35.472339 containerd[1591]: time="2025-07-11T00:15:35.472273213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:35.484203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
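The PullImage/ImageCreate entries in this stretch record containerd's CRI image service fetching the control-plane images. As an aside, the same pulls can be driven by hand over the CRI socket with crictl; a sketch assuming the ContainerdEndpoint shown in the config dump earlier:

    # Pull one of the images seen in the log through the CRI API.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.31.10
    # List the images the CRI image service now knows about.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images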
Jul 11 00:15:35.490002 containerd[1591]: time="2025-07-11T00:15:35.489897305Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 11 00:15:35.494933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:15:35.505189 containerd[1591]: time="2025-07-11T00:15:35.505108075Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:35.510742 containerd[1591]: time="2025-07-11T00:15:35.510599995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:35.511099 containerd[1591]: time="2025-07-11T00:15:35.511065978Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 7.770728912s" Jul 11 00:15:35.511200 containerd[1591]: time="2025-07-11T00:15:35.511102531Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 11 00:15:35.512024 containerd[1591]: time="2025-07-11T00:15:35.511958984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 00:15:35.669880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:15:35.709640 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:15:35.764721 kubelet[2088]: E0711 00:15:35.764396 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:15:35.769379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:15:35.769767 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:15:36.436874 update_engine[1560]: I20250711 00:15:36.436769 1560 update_attempter.cc:509] Updating boot flags... Jul 11 00:15:36.489777 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2103) Jul 11 00:15:36.527729 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2103) Jul 11 00:15:36.613724 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2103) Jul 11 00:15:39.459987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1013264377.mount: Deactivated successfully. Jul 11 00:15:45.984377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 11 00:15:46.004044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:15:46.319251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
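Above, update_engine logs "Updating boot flags...", part of Flatcar's A/B update handling, and locksmithd was started earlier with strategy="reboot". Assuming the stock Flatcar client tools are present, their state can be queried as sketched here:

    # Query the update engine that logged "Updating boot flags" above.
    update_engine_client -status
    # Show locksmithd's reboot-coordination state (strategy "reboot" per the log).
    locksmithctl status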
Jul 11 00:15:46.344447 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:15:46.395088 kubelet[2132]: E0711 00:15:46.395011 2132 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:15:46.399889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:15:46.400271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:15:56.484407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 11 00:15:56.576057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:15:56.825904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:15:56.829257 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:15:56.920644 kubelet[2198]: E0711 00:15:56.920389 2198 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:15:56.925554 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:15:56.925924 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
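The recurring "Scheduled restart job, restart counter is at N" lines come from the Restart= policy on the kubelet unit; the roughly ten-second gap between each failure and the next start attempt is consistent with RestartSec=10. The actual unit file is not shown in this log, but a drop-in producing this behavior would look like the following sketch:

    # Hypothetical drop-in; the real unit on this host may differ.
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat >/etc/systemd/system/kubelet.service.d/10-restart.conf <<'EOF'
    [Service]
    Restart=always
    RestartSec=10
    EOF
    systemctl daemon-reload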
Jul 11 00:15:59.489761 containerd[1591]: time="2025-07-11T00:15:59.489648136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:59.634354 containerd[1591]: time="2025-07-11T00:15:59.634187519Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 11 00:15:59.807253 containerd[1591]: time="2025-07-11T00:15:59.807056033Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:59.994369 containerd[1591]: time="2025-07-11T00:15:59.994279110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:15:59.995851 containerd[1591]: time="2025-07-11T00:15:59.995811931Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 24.483823739s" Jul 11 00:15:59.995851 containerd[1591]: time="2025-07-11T00:15:59.995852068Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 11 00:15:59.996564 containerd[1591]: time="2025-07-11T00:15:59.996493828Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:16:04.667663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806476787.mount: Deactivated successfully. 
Jul 11 00:16:04.991709 containerd[1591]: time="2025-07-11T00:16:04.991479058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:05.014023 containerd[1591]: time="2025-07-11T00:16:05.013542862Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 11 00:16:05.054736 containerd[1591]: time="2025-07-11T00:16:05.054612263Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:05.060829 containerd[1591]: time="2025-07-11T00:16:05.060720476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:05.062618 containerd[1591]: time="2025-07-11T00:16:05.062129246Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 5.065577265s" Jul 11 00:16:05.062618 containerd[1591]: time="2025-07-11T00:16:05.062186125Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 00:16:05.064414 containerd[1591]: time="2025-07-11T00:16:05.064371019Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 11 00:16:06.164601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1232484165.mount: Deactivated successfully. Jul 11 00:16:06.984388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jul 11 00:16:07.051166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:16:07.445108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:16:07.452922 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:16:07.939795 kubelet[2234]: E0711 00:16:07.939658 2234 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:16:07.946620 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:16:07.947094 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 11 00:16:13.257320 containerd[1591]: time="2025-07-11T00:16:13.257172838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:13.260719 containerd[1591]: time="2025-07-11T00:16:13.260587555Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 11 00:16:13.264469 containerd[1591]: time="2025-07-11T00:16:13.264105680Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:13.272813 containerd[1591]: time="2025-07-11T00:16:13.272656886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:13.276350 containerd[1591]: time="2025-07-11T00:16:13.276264684Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 8.211846414s" Jul 11 00:16:13.276350 containerd[1591]: time="2025-07-11T00:16:13.276319389Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 11 00:16:15.752557 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:16:15.762345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:16:15.798505 systemd[1]: Reloading requested from client PID 2318 ('systemctl') (unit session-7.scope)... Jul 11 00:16:15.798538 systemd[1]: Reloading... Jul 11 00:16:16.204711 zram_generator::config[2357]: No configuration found. Jul 11 00:16:16.633294 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:16:16.741533 systemd[1]: Reloading finished in 942 ms. Jul 11 00:16:16.811593 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 00:16:16.811771 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 00:16:16.812265 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:16:16.815855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:16:17.013591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:16:17.020247 (kubelet)[2418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:16:17.078260 kubelet[2418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:16:17.078260 kubelet[2418]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
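The kubelet warns above that --container-runtime-endpoint and (just below) --volume-plugin-dir are deprecated flags that belong in the config file instead. A sketch of the equivalent KubeletConfiguration fields, using the upstream v1beta1 field names; the values are placeholders matching paths seen elsewhere in this log:

    # Fields replacing the deprecated flags, appended to /var/lib/kubelet/config.yaml
    # (sketch only; --pod-infra-container-image has no config-file equivalent, and
    # the log notes the sandbox image now comes from the CRI).
    cat >>/var/lib/kubelet/config.yaml <<'EOF'
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /var/lib/kubelet/volumeplugins
    EOF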
Jul 11 00:16:17.078260 kubelet[2418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:16:17.078794 kubelet[2418]: I0711 00:16:17.078302 2418 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:16:17.556521 kubelet[2418]: I0711 00:16:17.556453 2418 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:16:17.556521 kubelet[2418]: I0711 00:16:17.556500 2418 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:16:17.556842 kubelet[2418]: I0711 00:16:17.556807 2418 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:16:17.754420 kubelet[2418]: I0711 00:16:17.749453 2418 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:16:17.754580 kubelet[2418]: E0711 00:16:17.754560 2418 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:17.776547 kubelet[2418]: E0711 00:16:17.776490 2418 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:16:17.776547 kubelet[2418]: I0711 00:16:17.776535 2418 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:16:17.805620 kubelet[2418]: I0711 00:16:17.805545 2418 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:16:17.806080 kubelet[2418]: I0711 00:16:17.806042 2418 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:16:17.806395 kubelet[2418]: I0711 00:16:17.806232 2418 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:16:17.806748 kubelet[2418]: I0711 00:16:17.806379 2418 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 11 00:16:17.806748 kubelet[2418]: I0711 00:16:17.806701 2418 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:16:17.806748 kubelet[2418]: I0711 00:16:17.806718 2418 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:16:17.807267 kubelet[2418]: I0711 00:16:17.806920 2418 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:16:17.811133 kubelet[2418]: I0711 00:16:17.811083 2418 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:16:17.811133 kubelet[2418]: I0711 00:16:17.811123 2418 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:16:17.811240 kubelet[2418]: I0711 00:16:17.811178 2418 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:16:17.811240 kubelet[2418]: I0711 00:16:17.811217 2418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:16:17.817109 kubelet[2418]: I0711 00:16:17.816984 2418 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:16:17.817816 kubelet[2418]: I0711 00:16:17.817594 2418 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:16:17.820718 kubelet[2418]: W0711 00:16:17.818899 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.53:6443: connect: connection refused Jul 11 00:16:17.820718 kubelet[2418]: E0711 00:16:17.818981 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:17.820718 kubelet[2418]: W0711 00:16:17.819043 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 11 00:16:17.820718 kubelet[2418]: E0711 00:16:17.819090 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:17.820718 kubelet[2418]: W0711 00:16:17.819259 2418 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:16:17.822306 kubelet[2418]: I0711 00:16:17.822272 2418 server.go:1274] "Started kubelet" Jul 11 00:16:17.822411 kubelet[2418]: I0711 00:16:17.822360 2418 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:16:17.823078 kubelet[2418]: I0711 00:16:17.823008 2418 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:16:17.824450 kubelet[2418]: I0711 00:16:17.823506 2418 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:16:17.824450 kubelet[2418]: I0711 00:16:17.823908 2418 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:16:17.832072 kubelet[2418]: I0711 00:16:17.832008 2418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:16:17.833203 kubelet[2418]: I0711 00:16:17.833139 2418 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:16:17.834185 kubelet[2418]: I0711 00:16:17.833801 2418 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:16:17.834398 kubelet[2418]: E0711 00:16:17.834354 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:17.834484 kubelet[2418]: E0711 00:16:17.832608 2418 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a3ad507485d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:16:17.822238813 +0000 UTC m=+0.797077995,LastTimestamp:2025-07-11 00:16:17.822238813 +0000 UTC m=+0.797077995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:16:17.835668 kubelet[2418]: I0711 00:16:17.835641 2418 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:16:17.835877 kubelet[2418]: I0711 00:16:17.835860 2418 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:16:17.836515 kubelet[2418]: W0711 00:16:17.836342 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 11 00:16:17.836515 kubelet[2418]: E0711 00:16:17.836411 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:17.836515 kubelet[2418]: E0711 00:16:17.836484 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="200ms" Jul 11 00:16:17.838212 kubelet[2418]: I0711 00:16:17.838070 2418 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:16:17.839666 kubelet[2418]: E0711 00:16:17.839615 2418 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:16:17.840436 kubelet[2418]: I0711 00:16:17.840410 2418 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:16:17.840436 kubelet[2418]: I0711 00:16:17.840433 2418 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:16:17.869892 kubelet[2418]: I0711 00:16:17.869330 2418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:16:17.873944 kubelet[2418]: I0711 00:16:17.873847 2418 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 00:16:17.874158 kubelet[2418]: I0711 00:16:17.873984 2418 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:16:17.874432 kubelet[2418]: I0711 00:16:17.874402 2418 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:16:17.874568 kubelet[2418]: E0711 00:16:17.874497 2418 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:16:17.875018 kubelet[2418]: W0711 00:16:17.874980 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 11 00:16:17.875178 kubelet[2418]: E0711 00:16:17.875132 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:17.876061 kubelet[2418]: I0711 00:16:17.876027 2418 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:16:17.876061 kubelet[2418]: I0711 00:16:17.876048 2418 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:16:17.876295 kubelet[2418]: I0711 00:16:17.876079 2418 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:16:17.935308 kubelet[2418]: E0711 00:16:17.935208 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:17.974833 kubelet[2418]: E0711 00:16:17.974754 2418 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:16:18.036139 kubelet[2418]: E0711 00:16:18.036068 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:18.037780 kubelet[2418]: E0711 00:16:18.037710 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="400ms" Jul 11 00:16:18.136468 kubelet[2418]: E0711 00:16:18.136231 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:18.175787 kubelet[2418]: E0711 00:16:18.175705 2418 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:16:18.237251 kubelet[2418]: E0711 00:16:18.237185 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:18.337698 kubelet[2418]: E0711 00:16:18.337594 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:18.438558 kubelet[2418]: E0711 00:16:18.438365 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:18.438865 kubelet[2418]: E0711 00:16:18.438820 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: 
connection refused" interval="800ms" Jul 11 00:16:18.539474 kubelet[2418]: E0711 00:16:18.539330 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:18.576699 kubelet[2418]: E0711 00:16:18.576617 2418 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:16:18.640428 kubelet[2418]: E0711 00:16:18.640280 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:18.703539 kubelet[2418]: W0711 00:16:18.703221 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 11 00:16:18.703539 kubelet[2418]: E0711 00:16:18.703356 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:18.724481 kubelet[2418]: W0711 00:16:18.724379 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 11 00:16:18.724481 kubelet[2418]: E0711 00:16:18.724454 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:18.741214 kubelet[2418]: E0711 00:16:18.741092 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:18.796206 kubelet[2418]: I0711 00:16:18.796055 2418 policy_none.go:49] "None policy: Start" Jul 11 00:16:18.797474 kubelet[2418]: I0711 00:16:18.797434 2418 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:16:18.797474 kubelet[2418]: I0711 00:16:18.797483 2418 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:16:18.811323 kubelet[2418]: I0711 00:16:18.811258 2418 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:16:18.811707 kubelet[2418]: I0711 00:16:18.811565 2418 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:16:18.811707 kubelet[2418]: I0711 00:16:18.811595 2418 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:16:18.813385 kubelet[2418]: I0711 00:16:18.813280 2418 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:16:18.814566 kubelet[2418]: E0711 00:16:18.814535 2418 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:16:18.913802 kubelet[2418]: I0711 00:16:18.913757 2418 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:16:18.914237 kubelet[2418]: E0711 00:16:18.914207 2418 kubelet_node_status.go:95] "Unable to register node with API 
server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jul 11 00:16:19.023547 kubelet[2418]: W0711 00:16:19.023442 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 11 00:16:19.023547 kubelet[2418]: E0711 00:16:19.023523 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:19.116241 kubelet[2418]: I0711 00:16:19.116114 2418 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:16:19.116668 kubelet[2418]: E0711 00:16:19.116618 2418 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jul 11 00:16:19.240275 kubelet[2418]: E0711 00:16:19.240192 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="1.6s" Jul 11 00:16:19.270098 kubelet[2418]: W0711 00:16:19.270050 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 11 00:16:19.270156 kubelet[2418]: E0711 00:16:19.270103 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:19.446291 kubelet[2418]: I0711 00:16:19.446080 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d04380d2edd491eb0876030ae1df311-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8d04380d2edd491eb0876030ae1df311\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:16:19.446291 kubelet[2418]: I0711 00:16:19.446133 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:19.446291 kubelet[2418]: I0711 00:16:19.446167 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:16:19.446291 kubelet[2418]: I0711 00:16:19.446186 2418 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d04380d2edd491eb0876030ae1df311-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8d04380d2edd491eb0876030ae1df311\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:16:19.446291 kubelet[2418]: I0711 00:16:19.446211 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:19.446566 kubelet[2418]: I0711 00:16:19.446247 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:19.446566 kubelet[2418]: I0711 00:16:19.446283 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:19.446566 kubelet[2418]: I0711 00:16:19.446311 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:19.446566 kubelet[2418]: I0711 00:16:19.446338 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d04380d2edd491eb0876030ae1df311-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8d04380d2edd491eb0876030ae1df311\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:16:19.518631 kubelet[2418]: I0711 00:16:19.518587 2418 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:16:19.519093 kubelet[2418]: E0711 00:16:19.519048 2418 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jul 11 00:16:19.683768 kubelet[2418]: E0711 00:16:19.683670 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:19.684644 containerd[1591]: time="2025-07-11T00:16:19.684604782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8d04380d2edd491eb0876030ae1df311,Namespace:kube-system,Attempt:0,}" Jul 11 00:16:19.685059 kubelet[2418]: E0711 00:16:19.684635 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:19.685119 containerd[1591]: time="2025-07-11T00:16:19.685087106Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 11 00:16:19.690393 kubelet[2418]: E0711 00:16:19.690374 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:19.690666 containerd[1591]: time="2025-07-11T00:16:19.690647319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 11 00:16:19.865974 kubelet[2418]: E0711 00:16:19.865926 2418 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:20.320823 kubelet[2418]: I0711 00:16:20.320713 2418 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:16:20.321502 kubelet[2418]: E0711 00:16:20.321133 2418 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jul 11 00:16:20.841443 kubelet[2418]: E0711 00:16:20.841343 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="3.2s" Jul 11 00:16:20.875161 kubelet[2418]: W0711 00:16:20.875107 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 11 00:16:20.875161 kubelet[2418]: E0711 00:16:20.875158 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:21.269708 kubelet[2418]: W0711 00:16:21.267475 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 11 00:16:21.269708 kubelet[2418]: E0711 00:16:21.267551 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:21.562291 kubelet[2418]: W0711 00:16:21.562099 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 11 00:16:21.562291 kubelet[2418]: E0711 00:16:21.562194 2418 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:21.858897 kubelet[2418]: W0711 00:16:21.858693 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 11 00:16:21.858897 kubelet[2418]: E0711 00:16:21.858788 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:16:21.923578 kubelet[2418]: I0711 00:16:21.923536 2418 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:16:21.924070 kubelet[2418]: E0711 00:16:21.924016 2418 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jul 11 00:16:22.024725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3588024059.mount: Deactivated successfully. Jul 11 00:16:22.041146 containerd[1591]: time="2025-07-11T00:16:22.041052478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:16:22.043551 containerd[1591]: time="2025-07-11T00:16:22.043364220Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:16:22.048707 containerd[1591]: time="2025-07-11T00:16:22.048599084Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 11 00:16:22.050919 containerd[1591]: time="2025-07-11T00:16:22.050824591Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:16:22.053387 containerd[1591]: time="2025-07-11T00:16:22.053283275Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:16:22.054982 containerd[1591]: time="2025-07-11T00:16:22.054926889Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:16:22.057133 containerd[1591]: time="2025-07-11T00:16:22.057056572Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:16:22.060352 containerd[1591]: time="2025-07-11T00:16:22.060277273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:16:22.062260 
containerd[1591]: time="2025-07-11T00:16:22.062204160Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.377492343s" Jul 11 00:16:22.063172 containerd[1591]: time="2025-07-11T00:16:22.063120853Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.377983252s" Jul 11 00:16:22.070291 containerd[1591]: time="2025-07-11T00:16:22.070228530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.379518322s" Jul 11 00:16:22.440031 containerd[1591]: time="2025-07-11T00:16:22.439730116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:22.440263 containerd[1591]: time="2025-07-11T00:16:22.440070617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:22.440410 containerd[1591]: time="2025-07-11T00:16:22.440217097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:22.441257 containerd[1591]: time="2025-07-11T00:16:22.441139482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:22.443796 containerd[1591]: time="2025-07-11T00:16:22.443424022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:22.443796 containerd[1591]: time="2025-07-11T00:16:22.443517401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:22.443796 containerd[1591]: time="2025-07-11T00:16:22.443536276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:22.444612 containerd[1591]: time="2025-07-11T00:16:22.444303414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:22.444612 containerd[1591]: time="2025-07-11T00:16:22.444128810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:22.444612 containerd[1591]: time="2025-07-11T00:16:22.444194706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:22.444612 containerd[1591]: time="2025-07-11T00:16:22.444212710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:22.444612 containerd[1591]: time="2025-07-11T00:16:22.444352979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:22.656773 containerd[1591]: time="2025-07-11T00:16:22.656626200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"762b92a366c2f27960f30875522764fb932afdb69744d1f6b2ba7acce7485404\"" Jul 11 00:16:22.658528 kubelet[2418]: E0711 00:16:22.658463 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:22.669503 containerd[1591]: time="2025-07-11T00:16:22.669447637Z" level=info msg="CreateContainer within sandbox \"762b92a366c2f27960f30875522764fb932afdb69744d1f6b2ba7acce7485404\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:16:22.673714 containerd[1591]: time="2025-07-11T00:16:22.673684302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8d04380d2edd491eb0876030ae1df311,Namespace:kube-system,Attempt:0,} returns sandbox id \"436b31a6f0b30d9dd43525d95fc7c240ff696105b86350ede0d98ea54ae01a9c\"" Jul 11 00:16:22.674379 kubelet[2418]: E0711 00:16:22.674349 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:22.676501 containerd[1591]: time="2025-07-11T00:16:22.676448510Z" level=info msg="CreateContainer within sandbox \"436b31a6f0b30d9dd43525d95fc7c240ff696105b86350ede0d98ea54ae01a9c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:16:22.687434 containerd[1591]: time="2025-07-11T00:16:22.687346868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"c77ea4b28c9d1ae0fea07c16bc190816f4afd3d8f9e1611251c99d0074d9874d\"" Jul 11 00:16:22.688364 kubelet[2418]: E0711 00:16:22.688336 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:22.690620 containerd[1591]: time="2025-07-11T00:16:22.690380161Z" level=info msg="CreateContainer within sandbox \"c77ea4b28c9d1ae0fea07c16bc190816f4afd3d8f9e1611251c99d0074d9874d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:16:22.704495 containerd[1591]: time="2025-07-11T00:16:22.704389080Z" level=info msg="CreateContainer within sandbox \"762b92a366c2f27960f30875522764fb932afdb69744d1f6b2ba7acce7485404\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"924e7b941d6624e595e4bc737809793f53de6ba9d8a305603365398493bf0024\"" Jul 11 00:16:22.705735 containerd[1591]: time="2025-07-11T00:16:22.705651485Z" level=info msg="StartContainer for \"924e7b941d6624e595e4bc737809793f53de6ba9d8a305603365398493bf0024\"" Jul 11 00:16:22.780212 containerd[1591]: time="2025-07-11T00:16:22.779324348Z" level=info msg="CreateContainer within sandbox \"436b31a6f0b30d9dd43525d95fc7c240ff696105b86350ede0d98ea54ae01a9c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"5026c89c842efa2a7df952ae5e74dbe87cdc47ceb0e77fefedf986ed74b0f1ac\"" Jul 11 00:16:22.780858 containerd[1591]: time="2025-07-11T00:16:22.780823076Z" level=info msg="StartContainer for \"5026c89c842efa2a7df952ae5e74dbe87cdc47ceb0e77fefedf986ed74b0f1ac\"" Jul 11 00:16:22.790746 containerd[1591]: time="2025-07-11T00:16:22.790638201Z" level=info msg="CreateContainer within sandbox \"c77ea4b28c9d1ae0fea07c16bc190816f4afd3d8f9e1611251c99d0074d9874d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5f458ed857a8e28332e63b0ad80423d360606194d9c0a1f3d8188e397b5fb0a4\"" Jul 11 00:16:22.791816 containerd[1591]: time="2025-07-11T00:16:22.791752794Z" level=info msg="StartContainer for \"5f458ed857a8e28332e63b0ad80423d360606194d9c0a1f3d8188e397b5fb0a4\"" Jul 11 00:16:22.831163 containerd[1591]: time="2025-07-11T00:16:22.831104191Z" level=info msg="StartContainer for \"924e7b941d6624e595e4bc737809793f53de6ba9d8a305603365398493bf0024\" returns successfully" Jul 11 00:16:22.901420 kubelet[2418]: E0711 00:16:22.901376 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:22.931664 containerd[1591]: time="2025-07-11T00:16:22.931507109Z" level=info msg="StartContainer for \"5026c89c842efa2a7df952ae5e74dbe87cdc47ceb0e77fefedf986ed74b0f1ac\" returns successfully" Jul 11 00:16:22.946745 containerd[1591]: time="2025-07-11T00:16:22.946423554Z" level=info msg="StartContainer for \"5f458ed857a8e28332e63b0ad80423d360606194d9c0a1f3d8188e397b5fb0a4\" returns successfully" Jul 11 00:16:23.911999 kubelet[2418]: E0711 00:16:23.911813 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:23.913479 kubelet[2418]: E0711 00:16:23.913413 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:24.478314 kubelet[2418]: E0711 00:16:24.478054 2418 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:16:24.830277 kubelet[2418]: E0711 00:16:24.830219 2418 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 11 00:16:24.914876 kubelet[2418]: E0711 00:16:24.914819 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:24.915381 kubelet[2418]: E0711 00:16:24.915207 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:25.127022 kubelet[2418]: I0711 00:16:25.126825 2418 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:16:25.140709 kubelet[2418]: I0711 00:16:25.140597 2418 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:16:25.140709 kubelet[2418]: E0711 00:16:25.140698 2418 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 00:16:25.153070 kubelet[2418]: E0711 00:16:25.153009 2418 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:25.254207 kubelet[2418]: E0711 00:16:25.254142 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:25.356523 kubelet[2418]: E0711 00:16:25.356423 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:25.458966 kubelet[2418]: E0711 00:16:25.457349 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:25.558391 kubelet[2418]: E0711 00:16:25.557802 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:25.658014 kubelet[2418]: E0711 00:16:25.657940 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:25.759930 kubelet[2418]: E0711 00:16:25.759448 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:25.860114 kubelet[2418]: E0711 00:16:25.860043 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:25.917905 kubelet[2418]: E0711 00:16:25.917853 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:25.960891 kubelet[2418]: E0711 00:16:25.960840 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:26.064015 kubelet[2418]: E0711 00:16:26.063587 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:26.164247 kubelet[2418]: E0711 00:16:26.164196 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:26.264702 kubelet[2418]: E0711 00:16:26.264610 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:26.365304 kubelet[2418]: E0711 00:16:26.365107 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:26.466374 kubelet[2418]: E0711 00:16:26.466312 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:26.567325 kubelet[2418]: E0711 00:16:26.567104 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:26.668015 kubelet[2418]: E0711 00:16:26.667800 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:26.768318 kubelet[2418]: E0711 00:16:26.768218 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:26.869232 kubelet[2418]: E0711 00:16:26.869099 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:26.970273 kubelet[2418]: E0711 00:16:26.970013 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:27.590832 systemd[1]: Reloading requested from client PID 2693 ('systemctl') (unit 
session-7.scope)... Jul 11 00:16:27.590852 systemd[1]: Reloading... Jul 11 00:16:27.703734 zram_generator::config[2732]: No configuration found. Jul 11 00:16:27.818774 kubelet[2418]: I0711 00:16:27.818720 2418 apiserver.go:52] "Watching apiserver" Jul 11 00:16:27.836205 kubelet[2418]: I0711 00:16:27.836125 2418 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:16:27.886043 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:16:28.008571 systemd[1]: Reloading finished in 417 ms. Jul 11 00:16:28.054198 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:16:28.069417 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:16:28.070165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:16:28.083214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:16:28.321794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:16:28.331018 (kubelet)[2787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:16:28.410669 kubelet[2787]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:16:28.410669 kubelet[2787]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 00:16:28.410669 kubelet[2787]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:16:28.411557 kubelet[2787]: I0711 00:16:28.410752 2787 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:16:28.423871 kubelet[2787]: I0711 00:16:28.423824 2787 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:16:28.423871 kubelet[2787]: I0711 00:16:28.423861 2787 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:16:28.424194 kubelet[2787]: I0711 00:16:28.424175 2787 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:16:28.426100 kubelet[2787]: I0711 00:16:28.425985 2787 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 00:16:28.429053 kubelet[2787]: I0711 00:16:28.429001 2787 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:16:28.434263 kubelet[2787]: E0711 00:16:28.434192 2787 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:16:28.434263 kubelet[2787]: I0711 00:16:28.434247 2787 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
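Note: before the apiserver came up, the earlier "Failed to ensure lease exists, will retry" entries show the retry interval doubling — 200ms, 400ms, 800ms, 1.6s, 3.2s — a capped exponential backoff. A minimal sketch of that schedule; the 7s cap is an assumption for illustration, not taken from the log:

```go
package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the retry interval per attempt up to a limit, matching
// the 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s progression in the journal.
func nextInterval(base, limit time.Duration, attempt int) time.Duration {
	d := base << attempt  // base * 2^attempt
	if d > limit || d < 0 { // d < 0 guards against shift overflow
		return limit
	}
	return d
}

func main() {
	for attempt := 0; attempt < 6; attempt++ {
		fmt.Println(nextInterval(200*time.Millisecond, 7*time.Second, attempt))
	}
}
```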
Jul 11 00:16:28.442797 kubelet[2787]: I0711 00:16:28.442497 2787 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 11 00:16:28.445533 kubelet[2787]: I0711 00:16:28.445490 2787 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:16:28.445757 kubelet[2787]: I0711 00:16:28.445664 2787 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:16:28.446186 kubelet[2787]: I0711 00:16:28.445909 2787 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 11 00:16:28.446316 kubelet[2787]: I0711 00:16:28.446196 2787 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:16:28.446316 kubelet[2787]: I0711 00:16:28.446207 2787 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:16:28.446316 kubelet[2787]: I0711 00:16:28.446249 2787 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:16:28.446483 kubelet[2787]: I0711 00:16:28.446373 2787 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:16:28.446483 kubelet[2787]: I0711 00:16:28.446389 2787 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:16:28.446483 kubelet[2787]: I0711 00:16:28.446428 2787 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:16:28.446483 kubelet[2787]: I0711 00:16:28.446440 2787 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:16:28.451861 kubelet[2787]: I0711 00:16:28.451830 2787 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:16:28.452321 kubelet[2787]: I0711 00:16:28.452296 2787 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:16:28.452886 kubelet[2787]: I0711 00:16:28.452860 2787 server.go:1274] "Started kubelet" Jul 11 00:16:28.453690 
kubelet[2787]: I0711 00:16:28.453608 2787 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:16:28.455733 kubelet[2787]: I0711 00:16:28.453764 2787 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:16:28.455733 kubelet[2787]: I0711 00:16:28.454109 2787 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:16:28.455733 kubelet[2787]: I0711 00:16:28.455644 2787 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:16:28.460215 kubelet[2787]: I0711 00:16:28.460181 2787 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:16:28.462315 kubelet[2787]: I0711 00:16:28.462179 2787 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:16:28.467879 kubelet[2787]: I0711 00:16:28.467799 2787 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:16:28.468774 kubelet[2787]: E0711 00:16:28.468444 2787 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:16:28.468982 kubelet[2787]: I0711 00:16:28.468957 2787 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:16:28.469297 kubelet[2787]: I0711 00:16:28.469278 2787 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:16:28.476752 kubelet[2787]: I0711 00:16:28.476649 2787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:16:28.478564 kubelet[2787]: I0711 00:16:28.478529 2787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:16:28.478662 kubelet[2787]: I0711 00:16:28.478575 2787 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:16:28.478662 kubelet[2787]: I0711 00:16:28.478600 2787 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:16:28.478800 kubelet[2787]: E0711 00:16:28.478665 2787 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:16:28.479174 kubelet[2787]: I0711 00:16:28.479132 2787 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:16:28.482123 kubelet[2787]: I0711 00:16:28.481890 2787 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:16:28.483572 kubelet[2787]: E0711 00:16:28.483539 2787 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:16:28.489989 kubelet[2787]: I0711 00:16:28.489898 2787 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:16:28.556853 sudo[2822]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 11 00:16:28.558137 sudo[2822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 11 00:16:28.579100 kubelet[2787]: E0711 00:16:28.578887 2787 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:16:28.584734 kubelet[2787]: I0711 00:16:28.582906 2787 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:16:28.584734 kubelet[2787]: I0711 00:16:28.582938 2787 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:16:28.584734 kubelet[2787]: I0711 00:16:28.582964 2787 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:16:28.584734 kubelet[2787]: I0711 00:16:28.583205 2787 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:16:28.584734 kubelet[2787]: I0711 00:16:28.583221 2787 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:16:28.584734 kubelet[2787]: I0711 00:16:28.583246 2787 policy_none.go:49] "None policy: Start" Jul 11 00:16:28.584734 kubelet[2787]: I0711 00:16:28.584405 2787 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:16:28.584734 kubelet[2787]: I0711 00:16:28.584447 2787 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:16:28.585127 kubelet[2787]: I0711 00:16:28.584833 2787 state_mem.go:75] "Updated machine memory state" Jul 11 00:16:28.587073 kubelet[2787]: I0711 00:16:28.587040 2787 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:16:28.587695 kubelet[2787]: I0711 00:16:28.587491 2787 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:16:28.587695 kubelet[2787]: I0711 00:16:28.587511 2787 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:16:28.589744 kubelet[2787]: I0711 00:16:28.589636 2787 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:16:28.701235 kubelet[2787]: I0711 00:16:28.701196 2787 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:16:28.980454 kubelet[2787]: I0711 00:16:28.980304 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:16:28.980454 kubelet[2787]: I0711 00:16:28.980394 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d04380d2edd491eb0876030ae1df311-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8d04380d2edd491eb0876030ae1df311\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:16:28.980454 kubelet[2787]: I0711 00:16:28.980426 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:28.980454 kubelet[2787]: I0711 00:16:28.980445 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:28.980700 kubelet[2787]: I0711 00:16:28.980479 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:28.980700 kubelet[2787]: I0711 00:16:28.980509 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d04380d2edd491eb0876030ae1df311-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8d04380d2edd491eb0876030ae1df311\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:16:28.980700 kubelet[2787]: I0711 00:16:28.980532 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d04380d2edd491eb0876030ae1df311-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8d04380d2edd491eb0876030ae1df311\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:16:28.980700 kubelet[2787]: I0711 00:16:28.980557 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:28.980700 kubelet[2787]: I0711 00:16:28.980580 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:16:29.186723 kubelet[2787]: I0711 00:16:29.186516 2787 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 11 00:16:29.186723 kubelet[2787]: I0711 00:16:29.186651 2787 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:16:29.188300 kubelet[2787]: E0711 00:16:29.188243 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:29.190935 kubelet[2787]: E0711 00:16:29.190842 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:29.192729 kubelet[2787]: E0711 00:16:29.191325 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:29.256380 sudo[2822]: 
pam_unix(sudo:session): session closed for user root Jul 11 00:16:29.446994 kubelet[2787]: I0711 00:16:29.446896 2787 apiserver.go:52] "Watching apiserver" Jul 11 00:16:29.469984 kubelet[2787]: I0711 00:16:29.469921 2787 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:16:29.513531 kubelet[2787]: E0711 00:16:29.513367 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:29.513667 kubelet[2787]: E0711 00:16:29.513652 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:29.513997 kubelet[2787]: E0711 00:16:29.513959 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:30.515464 kubelet[2787]: E0711 00:16:30.515354 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:30.516215 kubelet[2787]: E0711 00:16:30.516169 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:30.587564 kubelet[2787]: I0711 00:16:30.587303 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.587267084 podStartE2EDuration="2.587267084s" podCreationTimestamp="2025-07-11 00:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:16:30.380603985 +0000 UTC m=+2.042492444" watchObservedRunningTime="2025-07-11 00:16:30.587267084 +0000 UTC m=+2.249155533" Jul 11 00:16:30.603817 kubelet[2787]: I0711 00:16:30.603670 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.603641258 podStartE2EDuration="2.603641258s" podCreationTimestamp="2025-07-11 00:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:16:30.586980668 +0000 UTC m=+2.248869147" watchObservedRunningTime="2025-07-11 00:16:30.603641258 +0000 UTC m=+2.265529717" Jul 11 00:16:30.604062 kubelet[2787]: I0711 00:16:30.603863 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.603856337 podStartE2EDuration="2.603856337s" podCreationTimestamp="2025-07-11 00:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:16:30.603421982 +0000 UTC m=+2.265310451" watchObservedRunningTime="2025-07-11 00:16:30.603856337 +0000 UTC m=+2.265744796" Jul 11 00:16:31.518162 kubelet[2787]: E0711 00:16:31.518121 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:31.957297 kubelet[2787]: I0711 00:16:31.957154 2787 kuberuntime_manager.go:1635] "Updating runtime config through cri 
with podcidr" CIDR="192.168.0.0/24" Jul 11 00:16:31.959332 containerd[1591]: time="2025-07-11T00:16:31.959267339Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:16:31.959923 kubelet[2787]: I0711 00:16:31.959560 2787 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:16:32.917071 kubelet[2787]: E0711 00:16:32.916990 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:33.522584 kubelet[2787]: E0711 00:16:33.521832 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:33.988259 sudo[1774]: pam_unix(sudo:session): session closed for user root Jul 11 00:16:33.993394 sshd[1767]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:34.000705 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:48586.service: Deactivated successfully. Jul 11 00:16:34.012975 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:16:34.014612 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:16:34.016930 kubelet[2787]: I0711 00:16:34.016870 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-xtables-lock\") pod \"cilium-j7d4k\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.017378 kubelet[2787]: I0711 00:16:34.016935 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f0988df-97d2-48c0-9e63-ac9993404378-cilium-config-path\") pod \"cilium-operator-5d85765b45-w784t\" (UID: \"6f0988df-97d2-48c0-9e63-ac9993404378\") " pod="kube-system/cilium-operator-5d85765b45-w784t" Jul 11 00:16:34.017378 kubelet[2787]: I0711 00:16:34.016964 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/832730e3-9be9-4e1c-8ef2-5f25259be402-lib-modules\") pod \"kube-proxy-5m9ld\" (UID: \"832730e3-9be9-4e1c-8ef2-5f25259be402\") " pod="kube-system/kube-proxy-5m9ld" Jul 11 00:16:34.017378 kubelet[2787]: I0711 00:16:34.016987 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcrrb\" (UniqueName: \"kubernetes.io/projected/832730e3-9be9-4e1c-8ef2-5f25259be402-kube-api-access-pcrrb\") pod \"kube-proxy-5m9ld\" (UID: \"832730e3-9be9-4e1c-8ef2-5f25259be402\") " pod="kube-system/kube-proxy-5m9ld" Jul 11 00:16:34.017378 kubelet[2787]: I0711 00:16:34.017011 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khr4k\" (UniqueName: \"kubernetes.io/projected/ed030c49-5a5e-44e0-bb68-d63d453cf142-kube-api-access-khr4k\") pod \"cilium-j7d4k\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.017378 kubelet[2787]: I0711 00:16:34.017033 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-cni-path\") pod \"cilium-j7d4k\" (UID: 
\"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.017574 kubelet[2787]: I0711 00:16:34.017054 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed030c49-5a5e-44e0-bb68-d63d453cf142-hubble-tls\") pod \"cilium-j7d4k\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.017574 kubelet[2787]: I0711 00:16:34.017077 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/832730e3-9be9-4e1c-8ef2-5f25259be402-xtables-lock\") pod \"kube-proxy-5m9ld\" (UID: \"832730e3-9be9-4e1c-8ef2-5f25259be402\") " pod="kube-system/kube-proxy-5m9ld" Jul 11 00:16:34.017574 kubelet[2787]: I0711 00:16:34.017099 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-cilium-run\") pod \"cilium-j7d4k\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.017574 kubelet[2787]: I0711 00:16:34.017121 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-hostproc\") pod \"cilium-j7d4k\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.017574 kubelet[2787]: I0711 00:16:34.017142 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/832730e3-9be9-4e1c-8ef2-5f25259be402-kube-proxy\") pod \"kube-proxy-5m9ld\" (UID: \"832730e3-9be9-4e1c-8ef2-5f25259be402\") " pod="kube-system/kube-proxy-5m9ld" Jul 11 00:16:34.017574 kubelet[2787]: I0711 00:16:34.017168 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed030c49-5a5e-44e0-bb68-d63d453cf142-clustermesh-secrets\") pod \"cilium-j7d4k\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.017795 kubelet[2787]: I0711 00:16:34.017201 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed030c49-5a5e-44e0-bb68-d63d453cf142-cilium-config-path\") pod \"cilium-j7d4k\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.017795 kubelet[2787]: I0711 00:16:34.017234 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt2xg\" (UniqueName: \"kubernetes.io/projected/6f0988df-97d2-48c0-9e63-ac9993404378-kube-api-access-rt2xg\") pod \"cilium-operator-5d85765b45-w784t\" (UID: \"6f0988df-97d2-48c0-9e63-ac9993404378\") " pod="kube-system/cilium-operator-5d85765b45-w784t" Jul 11 00:16:34.017795 kubelet[2787]: I0711 00:16:34.017260 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-lib-modules\") pod \"cilium-j7d4k\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.017795 kubelet[2787]: I0711 00:16:34.017281 2787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-bpf-maps\") pod \"cilium-j7d4k\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.017795 kubelet[2787]: I0711 00:16:34.017341 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-cilium-cgroup\") pod \"cilium-j7d4k\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.017951 kubelet[2787]: I0711 00:16:34.017374 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-etc-cni-netd\") pod \"cilium-j7d4k\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.017951 kubelet[2787]: I0711 00:16:34.017397 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-host-proc-sys-net\") pod \"cilium-j7d4k\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.017951 kubelet[2787]: I0711 00:16:34.017419 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-host-proc-sys-kernel\") pod \"cilium-j7d4k\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " pod="kube-system/cilium-j7d4k" Jul 11 00:16:34.018707 systemd-logind[1558]: Removed session 7. 
Jul 11 00:16:34.250190 kubelet[2787]: E0711 00:16:34.249973 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:34.251006 containerd[1591]: time="2025-07-11T00:16:34.250930552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5m9ld,Uid:832730e3-9be9-4e1c-8ef2-5f25259be402,Namespace:kube-system,Attempt:0,}" Jul 11 00:16:34.256816 kubelet[2787]: E0711 00:16:34.256758 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:34.257529 containerd[1591]: time="2025-07-11T00:16:34.257472693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7d4k,Uid:ed030c49-5a5e-44e0-bb68-d63d453cf142,Namespace:kube-system,Attempt:0,}" Jul 11 00:16:34.278463 kubelet[2787]: E0711 00:16:34.278391 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:34.279242 containerd[1591]: time="2025-07-11T00:16:34.279195165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-w784t,Uid:6f0988df-97d2-48c0-9e63-ac9993404378,Namespace:kube-system,Attempt:0,}" Jul 11 00:16:34.833669 kubelet[2787]: E0711 00:16:34.833593 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:35.525800 kubelet[2787]: E0711 00:16:35.525724 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:37.927732 containerd[1591]: time="2025-07-11T00:16:37.927519329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:37.927732 containerd[1591]: time="2025-07-11T00:16:37.927652323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:37.927732 containerd[1591]: time="2025-07-11T00:16:37.927668513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:37.928540 containerd[1591]: time="2025-07-11T00:16:37.927862823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:37.982113 containerd[1591]: time="2025-07-11T00:16:37.982046112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5m9ld,Uid:832730e3-9be9-4e1c-8ef2-5f25259be402,Namespace:kube-system,Attempt:0,} returns sandbox id \"26c661c17e22b434a5ec191d83ad56cb3e4df1cb2720090df339e9abf3d0bc83\"" Jul 11 00:16:37.983065 kubelet[2787]: E0711 00:16:37.983029 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:37.985226 containerd[1591]: time="2025-07-11T00:16:37.985193884Z" level=info msg="CreateContainer within sandbox \"26c661c17e22b434a5ec191d83ad56cb3e4df1cb2720090df339e9abf3d0bc83\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:16:38.018046 containerd[1591]: time="2025-07-11T00:16:38.017756419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:38.018046 containerd[1591]: time="2025-07-11T00:16:38.017824969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:38.018046 containerd[1591]: time="2025-07-11T00:16:38.017835640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:38.018046 containerd[1591]: time="2025-07-11T00:16:38.017939889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:38.070391 containerd[1591]: time="2025-07-11T00:16:38.070324810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7d4k,Uid:ed030c49-5a5e-44e0-bb68-d63d453cf142,Namespace:kube-system,Attempt:0,} returns sandbox id \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\"" Jul 11 00:16:38.072291 kubelet[2787]: E0711 00:16:38.072232 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:38.073852 containerd[1591]: time="2025-07-11T00:16:38.073812038Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 11 00:16:38.076596 containerd[1591]: time="2025-07-11T00:16:38.076435872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:16:38.076596 containerd[1591]: time="2025-07-11T00:16:38.076530052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:16:38.076596 containerd[1591]: time="2025-07-11T00:16:38.076548276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:38.076870 containerd[1591]: time="2025-07-11T00:16:38.076713030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:16:38.161982 containerd[1591]: time="2025-07-11T00:16:38.161917668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-w784t,Uid:6f0988df-97d2-48c0-9e63-ac9993404378,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3eab1aa66e8408a002041b6c8f6cc82e52d40613d625296f6e4fbe448c2083e\"" Jul 11 00:16:38.164354 kubelet[2787]: E0711 00:16:38.163858 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:38.366037 containerd[1591]: time="2025-07-11T00:16:38.365846122Z" level=info msg="CreateContainer within sandbox \"26c661c17e22b434a5ec191d83ad56cb3e4df1cb2720090df339e9abf3d0bc83\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"59b070247274e6952cee2db899e5c660232ec8944edaddf21bb997e4f076b87b\"" Jul 11 00:16:38.367915 containerd[1591]: time="2025-07-11T00:16:38.367885123Z" level=info msg="StartContainer for \"59b070247274e6952cee2db899e5c660232ec8944edaddf21bb997e4f076b87b\"" Jul 11 00:16:38.475836 containerd[1591]: time="2025-07-11T00:16:38.475773182Z" level=info msg="StartContainer for \"59b070247274e6952cee2db899e5c660232ec8944edaddf21bb997e4f076b87b\" returns successfully" Jul 11 00:16:38.542991 kubelet[2787]: E0711 00:16:38.542655 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:39.677495 kubelet[2787]: E0711 00:16:39.677427 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:39.782055 kubelet[2787]: I0711 00:16:39.781930 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5m9ld" podStartSLOduration=6.7818984239999995 podStartE2EDuration="6.781898424s" podCreationTimestamp="2025-07-11 00:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:16:38.581702732 +0000 UTC m=+10.243591201" watchObservedRunningTime="2025-07-11 00:16:39.781898424 +0000 UTC m=+11.443786894" Jul 11 00:16:43.673233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3292102875.mount: Deactivated successfully. 
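The pod_startup_latency_tracker entry above for kube-proxy-5m9ld reads oddly at first: podStartSLOduration and podStartE2EDuration are identical, and both pulling timestamps are the Go zero time (0001-01-01 00:00:00 +0000 UTC). That is the expected shape when no image pull happened: the SLO figure is the end-to-end startup time minus time spent pulling images, so with nothing pulled the two coincide. The cilium-operator entry later in the log shows the other case, and its numbers confirm the relationship exactly:

    # kube-proxy-5m9ld: no pull recorded (zero-time pulling timestamps)
    #   podStartSLOduration = podStartE2EDuration = 6.78s
    # cilium-operator-5d85765b45-w784t (later entry): real pull window
    #   firstStartedPulling 00:16:38.165 -> lastFinishedPulling 00:16:53.167  (15.002s)
    #   podStartE2EDuration 22.305s - pull 15.002s = 7.302s = podStartSLOduration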
Jul 11 00:16:46.675174 containerd[1591]: time="2025-07-11T00:16:46.675079431Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:46.677233 containerd[1591]: time="2025-07-11T00:16:46.676609665Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 11 00:16:46.679853 containerd[1591]: time="2025-07-11T00:16:46.678773286Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:46.680687 containerd[1591]: time="2025-07-11T00:16:46.680614883Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.60674752s" Jul 11 00:16:46.680784 containerd[1591]: time="2025-07-11T00:16:46.680694534Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 11 00:16:46.681800 containerd[1591]: time="2025-07-11T00:16:46.681766786Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 11 00:16:46.685160 containerd[1591]: time="2025-07-11T00:16:46.685112718Z" level=info msg="CreateContainer within sandbox \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:16:46.709168 containerd[1591]: time="2025-07-11T00:16:46.709059942Z" level=info msg="CreateContainer within sandbox \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d\"" Jul 11 00:16:46.709950 containerd[1591]: time="2025-07-11T00:16:46.709896625Z" level=info msg="StartContainer for \"fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d\"" Jul 11 00:16:46.783798 containerd[1591]: time="2025-07-11T00:16:46.783739578Z" level=info msg="StartContainer for \"fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d\" returns successfully" Jul 11 00:16:47.288297 containerd[1591]: time="2025-07-11T00:16:47.288185319Z" level=info msg="shim disconnected" id=fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d namespace=k8s.io Jul 11 00:16:47.288297 containerd[1591]: time="2025-07-11T00:16:47.288284288Z" level=warning msg="cleaning up after shim disconnected" id=fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d namespace=k8s.io Jul 11 00:16:47.288297 containerd[1591]: time="2025-07-11T00:16:47.288299977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:16:47.310421 containerd[1591]: time="2025-07-11T00:16:47.309615411Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:16:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate 
successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 11 00:16:47.570064 kubelet[2787]: E0711 00:16:47.569806 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:47.572811 containerd[1591]: time="2025-07-11T00:16:47.572755000Z" level=info msg="CreateContainer within sandbox \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:16:47.615899 containerd[1591]: time="2025-07-11T00:16:47.615769747Z" level=info msg="CreateContainer within sandbox \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e\"" Jul 11 00:16:47.616659 containerd[1591]: time="2025-07-11T00:16:47.616596563Z" level=info msg="StartContainer for \"3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e\"" Jul 11 00:16:47.702541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d-rootfs.mount: Deactivated successfully. Jul 11 00:16:47.862820 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:16:47.863408 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:16:47.863514 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:16:47.870109 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:16:47.894030 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:16:47.978909 containerd[1591]: time="2025-07-11T00:16:47.978844126Z" level=info msg="StartContainer for \"3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e\" returns successfully" Jul 11 00:16:47.998785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e-rootfs.mount: Deactivated successfully. 
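The sequence above, an apply-sysctl-overwrites init container followed by systemd-sysctl being stopped, restarted, and finished again, fits how the Cilium agent prepares a host: the init container drops kernel-parameter overrides into the host's /etc/sysctl.d/ and the Apply Kernel Variables unit re-runs so they take effect. (The runc "failed to remove container ... exit status 255" cleanup warning just before it appears to be the usual benign race when a short-lived init container's shim exits before containerd's cleanup runs.) A sketch of the kind of override file involved, with the file name and values assumed as typical Cilium rp_filter overrides rather than taken from the log:

    # /etc/sysctl.d/99-zzz-override_cilium.conf  (assumed name and content)
    # Relax reverse-path filtering so Cilium-managed interfaces pass asymmetric traffic.
    net.ipv4.conf.lxc*.rp_filter = 0
    net.ipv4.conf.cilium_*.rp_filter = 0
    net.ipv4.conf.all.rp_filter = 0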
Jul 11 00:16:48.574235 containerd[1591]: time="2025-07-11T00:16:48.574151336Z" level=info msg="shim disconnected" id=3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e namespace=k8s.io Jul 11 00:16:48.574235 containerd[1591]: time="2025-07-11T00:16:48.574216089Z" level=warning msg="cleaning up after shim disconnected" id=3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e namespace=k8s.io Jul 11 00:16:48.574235 containerd[1591]: time="2025-07-11T00:16:48.574227141Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:16:48.576464 kubelet[2787]: E0711 00:16:48.576404 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:49.598603 kubelet[2787]: E0711 00:16:49.598560 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:49.600477 containerd[1591]: time="2025-07-11T00:16:49.600393096Z" level=info msg="CreateContainer within sandbox \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 00:16:51.204513 containerd[1591]: time="2025-07-11T00:16:51.204276368Z" level=info msg="CreateContainer within sandbox \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30\"" Jul 11 00:16:51.211922 containerd[1591]: time="2025-07-11T00:16:51.206467852Z" level=info msg="StartContainer for \"ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30\"" Jul 11 00:16:51.305737 containerd[1591]: time="2025-07-11T00:16:51.305402574Z" level=info msg="StartContainer for \"ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30\" returns successfully" Jul 11 00:16:51.336396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30-rootfs.mount: Deactivated successfully. 
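mount-bpf-fs, created and torn down above, is the init step that guarantees the BPF filesystem is mounted on the host before the agent loads its maps (the bpf-maps hostPath volume registered earlier points at the same location). Its effect is equivalent to the following mount, shown as a sketch rather than a command from the log:

    # Mount the BPF pseudo-filesystem so pinned maps survive agent restarts.
    mount -t bpf bpf /sys/fs/bpf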
Jul 11 00:16:51.348132 containerd[1591]: time="2025-07-11T00:16:51.348050206Z" level=info msg="shim disconnected" id=ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30 namespace=k8s.io Jul 11 00:16:51.348132 containerd[1591]: time="2025-07-11T00:16:51.348120340Z" level=warning msg="cleaning up after shim disconnected" id=ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30 namespace=k8s.io Jul 11 00:16:51.348132 containerd[1591]: time="2025-07-11T00:16:51.348134176Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:16:51.614088 kubelet[2787]: E0711 00:16:51.612146 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:51.619189 containerd[1591]: time="2025-07-11T00:16:51.619094818Z" level=info msg="CreateContainer within sandbox \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 00:16:52.032721 containerd[1591]: time="2025-07-11T00:16:52.029719812Z" level=info msg="CreateContainer within sandbox \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288\"" Jul 11 00:16:52.033436 containerd[1591]: time="2025-07-11T00:16:52.033387538Z" level=info msg="StartContainer for \"cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288\"" Jul 11 00:16:52.139122 containerd[1591]: time="2025-07-11T00:16:52.139045724Z" level=info msg="StartContainer for \"cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288\" returns successfully" Jul 11 00:16:52.196533 containerd[1591]: time="2025-07-11T00:16:52.196426661Z" level=info msg="shim disconnected" id=cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288 namespace=k8s.io Jul 11 00:16:52.196533 containerd[1591]: time="2025-07-11T00:16:52.196520691Z" level=warning msg="cleaning up after shim disconnected" id=cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288 namespace=k8s.io Jul 11 00:16:52.196533 containerd[1591]: time="2025-07-11T00:16:52.196532753Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:16:52.215636 containerd[1591]: time="2025-07-11T00:16:52.213627766Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:16:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 11 00:16:52.617731 kubelet[2787]: E0711 00:16:52.617657 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:52.620587 containerd[1591]: time="2025-07-11T00:16:52.620536096Z" level=info msg="CreateContainer within sandbox \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 00:16:52.788037 containerd[1591]: time="2025-07-11T00:16:52.787938099Z" level=info msg="CreateContainer within sandbox \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\"" Jul 11 00:16:52.791141 containerd[1591]: 
time="2025-07-11T00:16:52.791078812Z" level=info msg="StartContainer for \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\"" Jul 11 00:16:52.806203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288-rootfs.mount: Deactivated successfully. Jul 11 00:16:52.830561 containerd[1591]: time="2025-07-11T00:16:52.829247896Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:53.029998 containerd[1591]: time="2025-07-11T00:16:53.029896007Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 11 00:16:53.151228 containerd[1591]: time="2025-07-11T00:16:53.151110381Z" level=info msg="StartContainer for \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\" returns successfully" Jul 11 00:16:53.155600 containerd[1591]: time="2025-07-11T00:16:53.155373342Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:16:53.165912 containerd[1591]: time="2025-07-11T00:16:53.165850870Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.484042774s" Jul 11 00:16:53.165912 containerd[1591]: time="2025-07-11T00:16:53.165911616Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 11 00:16:53.170005 containerd[1591]: time="2025-07-11T00:16:53.169829469Z" level=info msg="CreateContainer within sandbox \"e3eab1aa66e8408a002041b6c8f6cc82e52d40613d625296f6e4fbe448c2083e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 11 00:16:53.219946 containerd[1591]: time="2025-07-11T00:16:53.219704582Z" level=info msg="CreateContainer within sandbox \"e3eab1aa66e8408a002041b6c8f6cc82e52d40613d625296f6e4fbe448c2083e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dc328391c38b7d28aaf2f8ba3670a64e3538a521ed657bc7f0415d909316424c\"" Jul 11 00:16:53.222130 containerd[1591]: time="2025-07-11T00:16:53.222081320Z" level=info msg="StartContainer for \"dc328391c38b7d28aaf2f8ba3670a64e3538a521ed657bc7f0415d909316424c\"" Jul 11 00:16:53.408343 kubelet[2787]: I0711 00:16:53.408164 2787 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 11 00:16:53.419735 containerd[1591]: time="2025-07-11T00:16:53.417160104Z" level=info msg="StartContainer for \"dc328391c38b7d28aaf2f8ba3670a64e3538a521ed657bc7f0415d909316424c\" returns successfully" Jul 11 00:16:53.622846 kubelet[2787]: E0711 00:16:53.622782 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:53.627350 
kubelet[2787]: E0711 00:16:53.627315 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:53.668245 kubelet[2787]: I0711 00:16:53.668037 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnxvn\" (UniqueName: \"kubernetes.io/projected/fbde4ebb-afa7-4f5c-a1f1-c731983d1fcb-kube-api-access-hnxvn\") pod \"coredns-7c65d6cfc9-v7rkq\" (UID: \"fbde4ebb-afa7-4f5c-a1f1-c731983d1fcb\") " pod="kube-system/coredns-7c65d6cfc9-v7rkq" Jul 11 00:16:53.668245 kubelet[2787]: I0711 00:16:53.668113 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbde4ebb-afa7-4f5c-a1f1-c731983d1fcb-config-volume\") pod \"coredns-7c65d6cfc9-v7rkq\" (UID: \"fbde4ebb-afa7-4f5c-a1f1-c731983d1fcb\") " pod="kube-system/coredns-7c65d6cfc9-v7rkq" Jul 11 00:16:53.668245 kubelet[2787]: I0711 00:16:53.668142 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f03ee22d-f054-4ff8-a34e-48c8821b897d-config-volume\") pod \"coredns-7c65d6cfc9-nm87t\" (UID: \"f03ee22d-f054-4ff8-a34e-48c8821b897d\") " pod="kube-system/coredns-7c65d6cfc9-nm87t" Jul 11 00:16:53.668245 kubelet[2787]: I0711 00:16:53.668165 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7wg6\" (UniqueName: \"kubernetes.io/projected/f03ee22d-f054-4ff8-a34e-48c8821b897d-kube-api-access-j7wg6\") pod \"coredns-7c65d6cfc9-nm87t\" (UID: \"f03ee22d-f054-4ff8-a34e-48c8821b897d\") " pod="kube-system/coredns-7c65d6cfc9-nm87t" Jul 11 00:16:54.630541 kubelet[2787]: E0711 00:16:54.630436 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:54.630541 kubelet[2787]: E0711 00:16:54.630539 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:55.304981 kubelet[2787]: I0711 00:16:55.304899 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-w784t" podStartSLOduration=7.302484367 podStartE2EDuration="22.304878691s" podCreationTimestamp="2025-07-11 00:16:33 +0000 UTC" firstStartedPulling="2025-07-11 00:16:38.16500227 +0000 UTC m=+9.826890729" lastFinishedPulling="2025-07-11 00:16:53.167396594 +0000 UTC m=+24.829285053" observedRunningTime="2025-07-11 00:16:55.30321962 +0000 UTC m=+26.965108080" watchObservedRunningTime="2025-07-11 00:16:55.304878691 +0000 UTC m=+26.966767170" Jul 11 00:16:55.346238 kubelet[2787]: E0711 00:16:55.346163 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:55.347635 kubelet[2787]: E0711 00:16:55.347543 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:55.352427 containerd[1591]: time="2025-07-11T00:16:55.352016055Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-v7rkq,Uid:fbde4ebb-afa7-4f5c-a1f1-c731983d1fcb,Namespace:kube-system,Attempt:0,}" Jul 11 00:16:55.353108 containerd[1591]: time="2025-07-11T00:16:55.352817222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nm87t,Uid:f03ee22d-f054-4ff8-a34e-48c8821b897d,Namespace:kube-system,Attempt:0,}" Jul 11 00:16:55.633485 kubelet[2787]: E0711 00:16:55.633248 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:16:57.214189 kubelet[2787]: I0711 00:16:57.214106 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j7d4k" podStartSLOduration=15.605673827 podStartE2EDuration="24.214081438s" podCreationTimestamp="2025-07-11 00:16:33 +0000 UTC" firstStartedPulling="2025-07-11 00:16:38.073186778 +0000 UTC m=+9.735075237" lastFinishedPulling="2025-07-11 00:16:46.681594379 +0000 UTC m=+18.343482848" observedRunningTime="2025-07-11 00:16:57.213059692 +0000 UTC m=+28.874948151" watchObservedRunningTime="2025-07-11 00:16:57.214081438 +0000 UTC m=+28.875969897" Jul 11 00:16:58.436571 systemd-networkd[1244]: cilium_host: Link UP Jul 11 00:16:58.437301 systemd-networkd[1244]: cilium_net: Link UP Jul 11 00:16:58.437574 systemd-networkd[1244]: cilium_net: Gained carrier Jul 11 00:16:58.437872 systemd-networkd[1244]: cilium_host: Gained carrier Jul 11 00:16:58.591240 systemd-networkd[1244]: cilium_vxlan: Link UP Jul 11 00:16:58.591253 systemd-networkd[1244]: cilium_vxlan: Gained carrier Jul 11 00:16:58.840289 systemd-networkd[1244]: cilium_host: Gained IPv6LL Jul 11 00:16:58.860726 kernel: NET: Registered PF_ALG protocol family Jul 11 00:16:59.472849 systemd-networkd[1244]: cilium_net: Gained IPv6LL Jul 11 00:16:59.679426 systemd-networkd[1244]: lxc_health: Link UP Jul 11 00:16:59.684485 systemd-networkd[1244]: lxc_health: Gained carrier Jul 11 00:17:00.254904 systemd-networkd[1244]: lxc0b4ebe14b93d: Link UP Jul 11 00:17:00.260051 kubelet[2787]: E0711 00:17:00.259153 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:00.262786 kernel: eth0: renamed from tmpab3e1 Jul 11 00:17:00.265429 systemd-networkd[1244]: lxc0b4ebe14b93d: Gained carrier Jul 11 00:17:00.284560 systemd-networkd[1244]: lxcf1fe91a311a1: Link UP Jul 11 00:17:00.294864 kernel: eth0: renamed from tmpe4669 Jul 11 00:17:00.300927 systemd-networkd[1244]: lxcf1fe91a311a1: Gained carrier Jul 11 00:17:00.304238 systemd-networkd[1244]: cilium_vxlan: Gained IPv6LL Jul 11 00:17:00.644661 kubelet[2787]: E0711 00:17:00.644477 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:01.456612 systemd-networkd[1244]: lxc_health: Gained IPv6LL Jul 11 00:17:01.840077 systemd-networkd[1244]: lxcf1fe91a311a1: Gained IPv6LL Jul 11 00:17:02.162976 systemd-networkd[1244]: lxc0b4ebe14b93d: Gained IPv6LL Jul 11 00:17:04.552422 containerd[1591]: time="2025-07-11T00:17:04.552308284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:17:04.552422 containerd[1591]: time="2025-07-11T00:17:04.552386833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:17:04.552422 containerd[1591]: time="2025-07-11T00:17:04.552400970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:04.553125 containerd[1591]: time="2025-07-11T00:17:04.552511100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:04.582849 systemd-resolved[1464]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:17:04.583540 containerd[1591]: time="2025-07-11T00:17:04.582795215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:17:04.583540 containerd[1591]: time="2025-07-11T00:17:04.582912267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:17:04.583540 containerd[1591]: time="2025-07-11T00:17:04.582940932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:04.583540 containerd[1591]: time="2025-07-11T00:17:04.583112579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:04.615859 systemd-resolved[1464]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:17:04.622787 containerd[1591]: time="2025-07-11T00:17:04.622714929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-v7rkq,Uid:fbde4ebb-afa7-4f5c-a1f1-c731983d1fcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e46690ecb7d91d1c8a8796859d8e69df01e0f98bf9cb6b83851dbd31bdb2d00d\"" Jul 11 00:17:04.623633 kubelet[2787]: E0711 00:17:04.623605 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:04.626201 containerd[1591]: time="2025-07-11T00:17:04.626162487Z" level=info msg="CreateContainer within sandbox \"e46690ecb7d91d1c8a8796859d8e69df01e0f98bf9cb6b83851dbd31bdb2d00d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:17:04.646096 containerd[1591]: time="2025-07-11T00:17:04.646047357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nm87t,Uid:f03ee22d-f054-4ff8-a34e-48c8821b897d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab3e1650acddfec8a4543dfe5a1f351928913b0a54b90f75b4bfddaeef51ae8e\"" Jul 11 00:17:04.647343 kubelet[2787]: E0711 00:17:04.647240 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:04.651321 containerd[1591]: time="2025-07-11T00:17:04.650957723Z" level=info msg="CreateContainer within sandbox \"ab3e1650acddfec8a4543dfe5a1f351928913b0a54b90f75b4bfddaeef51ae8e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:17:05.556594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123090349.mount: Deactivated successfully. 
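The dns.go:153 "Nameserver limits exceeded" error that recurs throughout this log, and fires again here while the CoreDNS pods themselves are being set up, is the kubelet's resolv.conf check: it propagates at most three nameservers to pods, matching the classic glibc resolver limit, and logs this error every time it builds pod DNS config while the node's /etc/resolv.conf lists more. The "applied nameserver line" in the error pins down the three survivors; whatever was dropped is not recoverable from the log:

    # Node /etc/resolv.conf as implied by the log; at least one further
    # nameserver line existed, but its address is not shown anywhere.
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    # nameserver <omitted entry>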
Jul 11 00:17:06.759807 containerd[1591]: time="2025-07-11T00:17:06.759723728Z" level=info msg="CreateContainer within sandbox \"e46690ecb7d91d1c8a8796859d8e69df01e0f98bf9cb6b83851dbd31bdb2d00d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3dc5dd79be9162635c828eed98aa57992207ac334b3163d75e3302fb1b382995\"" Jul 11 00:17:06.760663 containerd[1591]: time="2025-07-11T00:17:06.760594688Z" level=info msg="StartContainer for \"3dc5dd79be9162635c828eed98aa57992207ac334b3163d75e3302fb1b382995\"" Jul 11 00:17:06.957874 containerd[1591]: time="2025-07-11T00:17:06.957808201Z" level=info msg="CreateContainer within sandbox \"ab3e1650acddfec8a4543dfe5a1f351928913b0a54b90f75b4bfddaeef51ae8e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8cfe9d7d35ec15870389db38dad2a0598a5db5f1a5385562b7003eb0e343a003\"" Jul 11 00:17:06.960808 containerd[1591]: time="2025-07-11T00:17:06.959856294Z" level=info msg="StartContainer for \"8cfe9d7d35ec15870389db38dad2a0598a5db5f1a5385562b7003eb0e343a003\"" Jul 11 00:17:07.294538 containerd[1591]: time="2025-07-11T00:17:07.294076379Z" level=info msg="StartContainer for \"3dc5dd79be9162635c828eed98aa57992207ac334b3163d75e3302fb1b382995\" returns successfully" Jul 11 00:17:07.294538 containerd[1591]: time="2025-07-11T00:17:07.294076469Z" level=info msg="StartContainer for \"8cfe9d7d35ec15870389db38dad2a0598a5db5f1a5385562b7003eb0e343a003\" returns successfully" Jul 11 00:17:07.685772 kubelet[2787]: E0711 00:17:07.685600 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:07.687103 kubelet[2787]: E0711 00:17:07.687078 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:07.816660 kubelet[2787]: I0711 00:17:07.816549 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-v7rkq" podStartSLOduration=34.816504796 podStartE2EDuration="34.816504796s" podCreationTimestamp="2025-07-11 00:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:07.816022275 +0000 UTC m=+39.477910745" watchObservedRunningTime="2025-07-11 00:17:07.816504796 +0000 UTC m=+39.478393285" Jul 11 00:17:07.841026 kubelet[2787]: I0711 00:17:07.840942 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nm87t" podStartSLOduration=34.840917545 podStartE2EDuration="34.840917545s" podCreationTimestamp="2025-07-11 00:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:07.840165803 +0000 UTC m=+39.502054262" watchObservedRunningTime="2025-07-11 00:17:07.840917545 +0000 UTC m=+39.502806004" Jul 11 00:17:08.690419 kubelet[2787]: E0711 00:17:08.689618 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:08.694000 kubelet[2787]: E0711 00:17:08.693962 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:09.691370 
kubelet[2787]: E0711 00:17:09.691331 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:09.692066 kubelet[2787]: E0711 00:17:09.692048 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:10.692522 kubelet[2787]: E0711 00:17:10.692453 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:15.556085 systemd[1]: Started sshd@7-10.0.0.53:22-10.0.0.1:33072.service - OpenSSH per-connection server daemon (10.0.0.1:33072). Jul 11 00:17:15.609843 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 33072 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:17:15.612348 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:15.619958 systemd-logind[1558]: New session 8 of user core. Jul 11 00:17:15.626997 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:17:16.303001 sshd[4176]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:16.308035 systemd[1]: sshd@7-10.0.0.53:22-10.0.0.1:33072.service: Deactivated successfully. Jul 11 00:17:16.311449 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:17:16.311490 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:17:16.312728 systemd-logind[1558]: Removed session 8. Jul 11 00:17:21.317189 systemd[1]: Started sshd@8-10.0.0.53:22-10.0.0.1:46704.service - OpenSSH per-connection server daemon (10.0.0.1:46704). Jul 11 00:17:21.354661 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 46704 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:17:21.357762 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:21.364997 systemd-logind[1558]: New session 9 of user core. Jul 11 00:17:21.375388 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:17:21.544991 sshd[4192]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:21.550209 systemd[1]: sshd@8-10.0.0.53:22-10.0.0.1:46704.service: Deactivated successfully. Jul 11 00:17:21.553150 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:17:21.553264 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:17:21.554405 systemd-logind[1558]: Removed session 9. Jul 11 00:17:26.556936 systemd[1]: Started sshd@9-10.0.0.53:22-10.0.0.1:43358.service - OpenSSH per-connection server daemon (10.0.0.1:43358). Jul 11 00:17:26.589712 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 43358 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:17:26.591690 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:26.596140 systemd-logind[1558]: New session 10 of user core. Jul 11 00:17:26.604085 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:17:26.803119 sshd[4208]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:26.810452 systemd[1]: sshd@9-10.0.0.53:22-10.0.0.1:43358.service: Deactivated successfully. Jul 11 00:17:26.814739 systemd[1]: session-10.scope: Deactivated successfully. 
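From 00:17:15 onward the journal settles into one repeating SSH pattern: a per-connection socket-activated unit (sshd@N-10.0.0.53:22-10.0.0.1:PORT.service) accepts the same public key for user core, pam_unix and systemd-logind open session N with its session-N.scope, and everything is deactivated again within about a second. The connections land only a few seconds apart, which reads more like scripted automation against the node than interactive use. On a live system the same state can be inspected with standard tooling, sketched here:

    # Show per-connection sshd units and logind sessions matching the entries above.
    systemctl list-units 'sshd@*.service' 'session-*.scope'
    loginctl list-sessions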
Jul 11 00:17:26.815566 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:17:26.817271 systemd-logind[1558]: Removed session 10. Jul 11 00:17:31.814167 systemd[1]: Started sshd@10-10.0.0.53:22-10.0.0.1:43374.service - OpenSSH per-connection server daemon (10.0.0.1:43374). Jul 11 00:17:31.845611 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 43374 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:17:31.848181 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:31.857102 systemd-logind[1558]: New session 11 of user core. Jul 11 00:17:31.867227 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:17:32.011696 sshd[4228]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:32.016655 systemd[1]: sshd@10-10.0.0.53:22-10.0.0.1:43374.service: Deactivated successfully. Jul 11 00:17:32.020983 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:17:32.021310 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:17:32.022953 systemd-logind[1558]: Removed session 11. Jul 11 00:17:37.024065 systemd[1]: Started sshd@11-10.0.0.53:22-10.0.0.1:45816.service - OpenSSH per-connection server daemon (10.0.0.1:45816). Jul 11 00:17:37.056814 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 45816 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:17:37.058954 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:37.064541 systemd-logind[1558]: New session 12 of user core. Jul 11 00:17:37.074289 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:17:37.199553 sshd[4245]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:37.205189 systemd[1]: sshd@11-10.0.0.53:22-10.0.0.1:45816.service: Deactivated successfully. Jul 11 00:17:37.207858 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:17:37.208057 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:17:37.209663 systemd-logind[1558]: Removed session 12. Jul 11 00:17:42.217211 systemd[1]: Started sshd@12-10.0.0.53:22-10.0.0.1:45828.service - OpenSSH per-connection server daemon (10.0.0.1:45828). Jul 11 00:17:42.266391 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 45828 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:17:42.269438 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:42.277456 systemd-logind[1558]: New session 13 of user core. Jul 11 00:17:42.285139 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 00:17:42.445263 sshd[4263]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:42.456635 systemd[1]: sshd@12-10.0.0.53:22-10.0.0.1:45828.service: Deactivated successfully. Jul 11 00:17:42.461618 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:17:42.466813 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:17:42.468267 systemd-logind[1558]: Removed session 13. Jul 11 00:17:45.480387 kubelet[2787]: E0711 00:17:45.480292 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:47.457576 systemd[1]: Started sshd@13-10.0.0.53:22-10.0.0.1:45718.service - OpenSSH per-connection server daemon (10.0.0.1:45718). 
Jul 11 00:17:47.479485 kubelet[2787]: E0711 00:17:47.479414 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:47.667162 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 45718 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:17:47.669871 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:47.686728 systemd-logind[1558]: New session 14 of user core. Jul 11 00:17:47.699302 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 11 00:17:47.875128 sshd[4279]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:47.891668 systemd[1]: Started sshd@14-10.0.0.53:22-10.0.0.1:45732.service - OpenSSH per-connection server daemon (10.0.0.1:45732). Jul 11 00:17:47.893722 systemd[1]: sshd@13-10.0.0.53:22-10.0.0.1:45718.service: Deactivated successfully. Jul 11 00:17:47.897928 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:17:47.900793 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:17:47.902323 systemd-logind[1558]: Removed session 14. Jul 11 00:17:47.927000 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 45732 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:17:47.929550 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:47.935960 systemd-logind[1558]: New session 15 of user core. Jul 11 00:17:47.945342 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:17:48.451280 sshd[4292]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:48.461938 systemd[1]: Started sshd@15-10.0.0.53:22-10.0.0.1:45742.service - OpenSSH per-connection server daemon (10.0.0.1:45742). Jul 11 00:17:48.462510 systemd[1]: sshd@14-10.0.0.53:22-10.0.0.1:45732.service: Deactivated successfully. Jul 11 00:17:48.465002 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:17:48.468039 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:17:48.469549 systemd-logind[1558]: Removed session 15. Jul 11 00:17:48.501290 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 45742 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:17:48.504892 sshd[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:48.510555 systemd-logind[1558]: New session 16 of user core. Jul 11 00:17:48.520306 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:17:48.920367 sshd[4306]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:48.926946 systemd[1]: sshd@15-10.0.0.53:22-10.0.0.1:45742.service: Deactivated successfully. Jul 11 00:17:48.932324 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:17:48.932488 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:17:48.934157 systemd-logind[1558]: Removed session 16. Jul 11 00:17:53.933931 systemd[1]: Started sshd@16-10.0.0.53:22-10.0.0.1:45746.service - OpenSSH per-connection server daemon (10.0.0.1:45746). 
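Most of the next stretch of the log is the same cycle repeated for every SSH connection: a per-connection sshd@N-<local>:22-<peer>:<port>.service unit starts, sshd accepts the publickey, pam_unix opens the session, logind creates session-N.scope, and on disconnect the scope and the service both deactivate. A throwaway sketch for timing those sessions out of a journal dump, assuming one entry per line in exactly the short-timestamp format shown here (a hypothetical helper, not part of any shipped tool):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    var (
        opened  = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*New session (\d+) of user`)
        removed = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*Removed session (\d+)\.`)
    )

    // Journal short timestamp as seen in this log; it carries no year,
    // which is fine for computing durations within one dump.
    const stamp = "Jan 2 15:04:05.000000"

    func main() {
        start := map[string]time.Time{}
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := sc.Text()
            if m := opened.FindStringSubmatch(line); m != nil {
                if t, err := time.Parse(stamp, m[1]); err == nil {
                    start[m[2]] = t
                }
            } else if m := removed.FindStringSubmatch(line); m != nil {
                if t, err := time.Parse(stamp, m[1]); err == nil {
                    if s, ok := start[m[2]]; ok {
                        fmt.Printf("session %s lasted %s\n", m[2], t.Sub(s))
                    }
                }
            }
        }
    }

Run against this journal it would show, for instance, session 21 staying open for roughly two and a half minutes while most sessions around it last under a second.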
Jul 11 00:17:53.965176 sshd[4325]: Accepted publickey for core from 10.0.0.1 port 45746 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:17:53.967422 sshd[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:53.972326 systemd-logind[1558]: New session 17 of user core. Jul 11 00:17:53.983201 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:17:54.109612 sshd[4325]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:54.116479 systemd[1]: sshd@16-10.0.0.53:22-10.0.0.1:45746.service: Deactivated successfully. Jul 11 00:17:54.120830 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:17:54.121807 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:17:54.123304 systemd-logind[1558]: Removed session 17. Jul 11 00:17:59.127201 systemd[1]: Started sshd@17-10.0.0.53:22-10.0.0.1:46842.service - OpenSSH per-connection server daemon (10.0.0.1:46842). Jul 11 00:17:59.167766 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 46842 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:17:59.169969 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:17:59.176383 systemd-logind[1558]: New session 18 of user core. Jul 11 00:17:59.188566 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 11 00:17:59.319177 sshd[4340]: pam_unix(sshd:session): session closed for user core Jul 11 00:17:59.325228 systemd[1]: sshd@17-10.0.0.53:22-10.0.0.1:46842.service: Deactivated successfully. Jul 11 00:17:59.328305 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit. Jul 11 00:17:59.328322 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:17:59.330025 systemd-logind[1558]: Removed session 18. Jul 11 00:18:02.480852 kubelet[2787]: E0711 00:18:02.480694 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:04.333294 systemd[1]: Started sshd@18-10.0.0.53:22-10.0.0.1:46854.service - OpenSSH per-connection server daemon (10.0.0.1:46854). Jul 11 00:18:04.369814 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 46854 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:04.372343 sshd[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:04.379923 systemd-logind[1558]: New session 19 of user core. Jul 11 00:18:04.388742 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 11 00:18:04.552280 sshd[4355]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:04.576848 systemd[1]: Started sshd@19-10.0.0.53:22-10.0.0.1:46870.service - OpenSSH per-connection server daemon (10.0.0.1:46870). Jul 11 00:18:04.577722 systemd[1]: sshd@18-10.0.0.53:22-10.0.0.1:46854.service: Deactivated successfully. Jul 11 00:18:04.580971 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:18:04.583712 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit. Jul 11 00:18:04.586527 systemd-logind[1558]: Removed session 19. 
Jul 11 00:18:04.617324 sshd[4368]: Accepted publickey for core from 10.0.0.1 port 46870 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:04.619816 sshd[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:04.628840 systemd-logind[1558]: New session 20 of user core. Jul 11 00:18:04.639454 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 11 00:18:05.295493 sshd[4368]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:05.307181 systemd[1]: Started sshd@20-10.0.0.53:22-10.0.0.1:46874.service - OpenSSH per-connection server daemon (10.0.0.1:46874). Jul 11 00:18:05.308157 systemd[1]: sshd@19-10.0.0.53:22-10.0.0.1:46870.service: Deactivated successfully. Jul 11 00:18:05.312030 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:18:05.313587 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 00:18:05.315226 systemd-logind[1558]: Removed session 20. Jul 11 00:18:05.351791 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 46874 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:05.354390 sshd[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:05.360356 systemd-logind[1558]: New session 21 of user core. Jul 11 00:18:05.370239 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 11 00:18:06.482199 kubelet[2787]: E0711 00:18:06.481746 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:07.766555 sshd[4382]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:07.779526 systemd[1]: Started sshd@21-10.0.0.53:22-10.0.0.1:47656.service - OpenSSH per-connection server daemon (10.0.0.1:47656). Jul 11 00:18:07.782343 systemd[1]: sshd@20-10.0.0.53:22-10.0.0.1:46874.service: Deactivated successfully. Jul 11 00:18:07.788812 systemd[1]: session-21.scope: Deactivated successfully. Jul 11 00:18:07.791229 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit. Jul 11 00:18:07.794301 systemd-logind[1558]: Removed session 21. Jul 11 00:18:07.825637 sshd[4402]: Accepted publickey for core from 10.0.0.1 port 47656 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:07.827464 sshd[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:07.836107 systemd-logind[1558]: New session 22 of user core. Jul 11 00:18:07.841241 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 11 00:18:08.359798 sshd[4402]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:08.369094 systemd[1]: Started sshd@22-10.0.0.53:22-10.0.0.1:47658.service - OpenSSH per-connection server daemon (10.0.0.1:47658). Jul 11 00:18:08.375381 systemd[1]: sshd@21-10.0.0.53:22-10.0.0.1:47656.service: Deactivated successfully. Jul 11 00:18:08.389642 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 00:18:08.400974 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit. Jul 11 00:18:08.403586 systemd-logind[1558]: Removed session 22. 
Jul 11 00:18:08.421869 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 47658 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:08.425232 sshd[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:08.436952 systemd-logind[1558]: New session 23 of user core. Jul 11 00:18:08.451611 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 11 00:18:08.482080 kubelet[2787]: E0711 00:18:08.482003 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:08.615594 sshd[4415]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:08.621668 systemd[1]: sshd@22-10.0.0.53:22-10.0.0.1:47658.service: Deactivated successfully. Jul 11 00:18:08.625310 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit. Jul 11 00:18:08.625387 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 00:18:08.627238 systemd-logind[1558]: Removed session 23. Jul 11 00:18:13.631192 systemd[1]: Started sshd@23-10.0.0.53:22-10.0.0.1:47674.service - OpenSSH per-connection server daemon (10.0.0.1:47674). Jul 11 00:18:13.664034 sshd[4435]: Accepted publickey for core from 10.0.0.1 port 47674 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:13.666075 sshd[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:13.670883 systemd-logind[1558]: New session 24 of user core. Jul 11 00:18:13.687260 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 11 00:18:13.856794 sshd[4435]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:13.862848 systemd[1]: sshd@23-10.0.0.53:22-10.0.0.1:47674.service: Deactivated successfully. Jul 11 00:18:13.866567 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 00:18:13.867840 systemd-logind[1558]: Session 24 logged out. Waiting for processes to exit. Jul 11 00:18:13.869545 systemd-logind[1558]: Removed session 24. Jul 11 00:18:16.479819 kubelet[2787]: E0711 00:18:16.479771 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:18.868084 systemd[1]: Started sshd@24-10.0.0.53:22-10.0.0.1:49416.service - OpenSSH per-connection server daemon (10.0.0.1:49416). Jul 11 00:18:18.901448 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 49416 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:18.903468 sshd[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:18.908216 systemd-logind[1558]: New session 25 of user core. Jul 11 00:18:18.914975 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 11 00:18:19.264500 sshd[4451]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:19.270002 systemd[1]: sshd@24-10.0.0.53:22-10.0.0.1:49416.service: Deactivated successfully. Jul 11 00:18:19.272981 systemd[1]: session-25.scope: Deactivated successfully. Jul 11 00:18:19.273987 systemd-logind[1558]: Session 25 logged out. Waiting for processes to exit. Jul 11 00:18:19.275542 systemd-logind[1558]: Removed session 25. Jul 11 00:18:24.062142 systemd[1]: Started sshd@25-10.0.0.53:22-10.0.0.1:49426.service - OpenSSH per-connection server daemon (10.0.0.1:49426). 
Jul 11 00:18:24.092631 sshd[4466]: Accepted publickey for core from 10.0.0.1 port 49426 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:24.137773 sshd[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:24.144360 systemd-logind[1558]: New session 26 of user core. Jul 11 00:18:24.159383 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 11 00:18:24.273171 sshd[4466]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:24.278090 systemd[1]: sshd@25-10.0.0.53:22-10.0.0.1:49426.service: Deactivated successfully. Jul 11 00:18:24.283106 systemd-logind[1558]: Session 26 logged out. Waiting for processes to exit. Jul 11 00:18:24.284039 systemd[1]: session-26.scope: Deactivated successfully. Jul 11 00:18:24.285337 systemd-logind[1558]: Removed session 26. Jul 11 00:18:24.480660 kubelet[2787]: E0711 00:18:24.480498 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:29.288095 systemd[1]: Started sshd@26-10.0.0.53:22-10.0.0.1:38034.service - OpenSSH per-connection server daemon (10.0.0.1:38034). Jul 11 00:18:29.325124 sshd[4487]: Accepted publickey for core from 10.0.0.1 port 38034 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:29.327505 sshd[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:29.334843 systemd-logind[1558]: New session 27 of user core. Jul 11 00:18:29.343217 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 11 00:18:29.464991 sshd[4487]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:29.473352 systemd[1]: sshd@26-10.0.0.53:22-10.0.0.1:38034.service: Deactivated successfully. Jul 11 00:18:29.477964 systemd-logind[1558]: Session 27 logged out. Waiting for processes to exit. Jul 11 00:18:29.478973 systemd[1]: session-27.scope: Deactivated successfully. Jul 11 00:18:29.482769 systemd-logind[1558]: Removed session 27. Jul 11 00:18:32.479703 kubelet[2787]: E0711 00:18:32.479600 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:34.475995 systemd[1]: Started sshd@27-10.0.0.53:22-10.0.0.1:38050.service - OpenSSH per-connection server daemon (10.0.0.1:38050). Jul 11 00:18:34.509762 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 38050 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:34.512119 sshd[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:34.518497 systemd-logind[1558]: New session 28 of user core. Jul 11 00:18:34.528235 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 11 00:18:34.665003 sshd[4505]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:34.670614 systemd[1]: sshd@27-10.0.0.53:22-10.0.0.1:38050.service: Deactivated successfully. Jul 11 00:18:34.673520 systemd-logind[1558]: Session 28 logged out. Waiting for processes to exit. Jul 11 00:18:34.673608 systemd[1]: session-28.scope: Deactivated successfully. Jul 11 00:18:34.675281 systemd-logind[1558]: Removed session 28. Jul 11 00:18:39.680065 systemd[1]: Started sshd@28-10.0.0.53:22-10.0.0.1:50766.service - OpenSSH per-connection server daemon (10.0.0.1:50766). 
Jul 11 00:18:39.712710 sshd[4523]: Accepted publickey for core from 10.0.0.1 port 50766 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:39.714631 sshd[4523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:39.719250 systemd-logind[1558]: New session 29 of user core. Jul 11 00:18:39.729993 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 11 00:18:40.016514 sshd[4523]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:40.023996 systemd[1]: Started sshd@29-10.0.0.53:22-10.0.0.1:50782.service - OpenSSH per-connection server daemon (10.0.0.1:50782). Jul 11 00:18:40.024635 systemd[1]: sshd@28-10.0.0.53:22-10.0.0.1:50766.service: Deactivated successfully. Jul 11 00:18:40.026834 systemd[1]: session-29.scope: Deactivated successfully. Jul 11 00:18:40.028760 systemd-logind[1558]: Session 29 logged out. Waiting for processes to exit. Jul 11 00:18:40.030436 systemd-logind[1558]: Removed session 29. Jul 11 00:18:40.055577 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 50782 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:40.057592 sshd[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:40.063811 systemd-logind[1558]: New session 30 of user core. Jul 11 00:18:40.078339 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 11 00:18:42.585147 systemd[1]: run-containerd-runc-k8s.io-6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081-runc.FZsxJ6.mount: Deactivated successfully. Jul 11 00:18:42.600546 containerd[1591]: time="2025-07-11T00:18:42.600457921Z" level=info msg="StopContainer for \"dc328391c38b7d28aaf2f8ba3670a64e3538a521ed657bc7f0415d909316424c\" with timeout 30 (s)" Jul 11 00:18:42.601474 containerd[1591]: time="2025-07-11T00:18:42.601405177Z" level=info msg="Stop container \"dc328391c38b7d28aaf2f8ba3670a64e3538a521ed657bc7f0415d909316424c\" with signal terminated" Jul 11 00:18:42.642251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc328391c38b7d28aaf2f8ba3670a64e3538a521ed657bc7f0415d909316424c-rootfs.mount: Deactivated successfully. 
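Here the teardown of the Cilium pods begins. StopContainer "with timeout 30" means containerd delivers SIGTERM ("with signal terminated") and escalates to SIGKILL only if the container is still alive when the grace period lapses; the agent container just below gets only 2 seconds. A generic sketch of that terminate-then-kill pattern for an ordinary process, assuming the child can catch SIGTERM; this is not containerd's implementation:

    package main

    import (
        "os/exec"
        "syscall"
        "time"
    )

    // stop asks a process to exit with SIGTERM and escalates to
    // SIGKILL after the grace period, mirroring what the runtime
    // does for StopContainer "with timeout N".
    func stop(cmd *exec.Cmd, grace time.Duration) error {
        _ = cmd.Process.Signal(syscall.SIGTERM)

        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        select {
        case err := <-done:
            return err // exited within the grace period
        case <-time.After(grace):
            _ = cmd.Process.Kill() // SIGKILL, no more waiting
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        _ = stop(cmd, 2*time.Second)
    }

In this log both containers exit inside their grace periods, which is why only shim-disconnected and rootfs unmount messages follow, with no kill escalation.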
Jul 11 00:18:42.709003 containerd[1591]: time="2025-07-11T00:18:42.708905777Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:18:42.897483 containerd[1591]: time="2025-07-11T00:18:42.897330991Z" level=info msg="StopContainer for \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\" with timeout 2 (s)" Jul 11 00:18:42.897662 containerd[1591]: time="2025-07-11T00:18:42.897623238Z" level=info msg="Stop container \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\" with signal terminated" Jul 11 00:18:42.907860 systemd-networkd[1244]: lxc_health: Link DOWN Jul 11 00:18:42.907870 systemd-networkd[1244]: lxc_health: Lost carrier Jul 11 00:18:42.916935 containerd[1591]: time="2025-07-11T00:18:42.916850931Z" level=info msg="shim disconnected" id=dc328391c38b7d28aaf2f8ba3670a64e3538a521ed657bc7f0415d909316424c namespace=k8s.io Jul 11 00:18:42.916935 containerd[1591]: time="2025-07-11T00:18:42.916922737Z" level=warning msg="cleaning up after shim disconnected" id=dc328391c38b7d28aaf2f8ba3670a64e3538a521ed657bc7f0415d909316424c namespace=k8s.io Jul 11 00:18:42.916935 containerd[1591]: time="2025-07-11T00:18:42.916934420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:18:42.968846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081-rootfs.mount: Deactivated successfully. Jul 11 00:18:43.132560 containerd[1591]: time="2025-07-11T00:18:43.132462203Z" level=info msg="StopContainer for \"dc328391c38b7d28aaf2f8ba3670a64e3538a521ed657bc7f0415d909316424c\" returns successfully" Jul 11 00:18:43.138047 containerd[1591]: time="2025-07-11T00:18:43.137976026Z" level=info msg="StopPodSandbox for \"e3eab1aa66e8408a002041b6c8f6cc82e52d40613d625296f6e4fbe448c2083e\"" Jul 11 00:18:43.138207 containerd[1591]: time="2025-07-11T00:18:43.138069986Z" level=info msg="Container to stop \"dc328391c38b7d28aaf2f8ba3670a64e3538a521ed657bc7f0415d909316424c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:18:43.141326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3eab1aa66e8408a002041b6c8f6cc82e52d40613d625296f6e4fbe448c2083e-shm.mount: Deactivated successfully. 
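The "failed to reload cni configuration ... no network config found in /etc/cni/net.d" error above is the direct consequence of the REMOVE event it quotes: deleting 05-cilium.conf left the directory without any network config, so the CRI plugin deinitializes CNI, and kubelet keeps reporting "Container runtime network not ready ... cni plugin not initialized" (visible further down at 00:18:43.634) until a replacement config appears. A quick sketch of the check containerd is effectively making, assuming the default config directory; not containerd's actual loader:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // libcni considers .conf, .conflist and .json files.
        var configs []string
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                configs = append(configs, e.Name())
            }
        }
        if len(configs) == 0 {
            fmt.Printf("no network config found in %s: cni plugin not initialized\n", dir)
            return
        }
        fmt.Println("cni configs:", configs)
    }

The replacement Cilium pod prepared at the end of this section is what would normally write that config back.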
Jul 11 00:18:43.240704 containerd[1591]: time="2025-07-11T00:18:43.240589377Z" level=info msg="shim disconnected" id=6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081 namespace=k8s.io Jul 11 00:18:43.240704 containerd[1591]: time="2025-07-11T00:18:43.240703695Z" level=warning msg="cleaning up after shim disconnected" id=6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081 namespace=k8s.io Jul 11 00:18:43.240704 containerd[1591]: time="2025-07-11T00:18:43.240717582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:18:43.257539 containerd[1591]: time="2025-07-11T00:18:43.256980742Z" level=info msg="shim disconnected" id=e3eab1aa66e8408a002041b6c8f6cc82e52d40613d625296f6e4fbe448c2083e namespace=k8s.io Jul 11 00:18:43.257539 containerd[1591]: time="2025-07-11T00:18:43.257071415Z" level=warning msg="cleaning up after shim disconnected" id=e3eab1aa66e8408a002041b6c8f6cc82e52d40613d625296f6e4fbe448c2083e namespace=k8s.io Jul 11 00:18:43.257539 containerd[1591]: time="2025-07-11T00:18:43.257084329Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:18:43.288698 containerd[1591]: time="2025-07-11T00:18:43.286949095Z" level=info msg="TearDown network for sandbox \"e3eab1aa66e8408a002041b6c8f6cc82e52d40613d625296f6e4fbe448c2083e\" successfully" Jul 11 00:18:43.288698 containerd[1591]: time="2025-07-11T00:18:43.287928743Z" level=info msg="StopPodSandbox for \"e3eab1aa66e8408a002041b6c8f6cc82e52d40613d625296f6e4fbe448c2083e\" returns successfully" Jul 11 00:18:43.291300 containerd[1591]: time="2025-07-11T00:18:43.291235809Z" level=info msg="StopContainer for \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\" returns successfully" Jul 11 00:18:43.292002 containerd[1591]: time="2025-07-11T00:18:43.291968165Z" level=info msg="StopPodSandbox for \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\"" Jul 11 00:18:43.292076 containerd[1591]: time="2025-07-11T00:18:43.292007449Z" level=info msg="Container to stop \"3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:18:43.292076 containerd[1591]: time="2025-07-11T00:18:43.292023611Z" level=info msg="Container to stop \"fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:18:43.292076 containerd[1591]: time="2025-07-11T00:18:43.292035633Z" level=info msg="Container to stop \"ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:18:43.292076 containerd[1591]: time="2025-07-11T00:18:43.292048288Z" level=info msg="Container to stop \"cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:18:43.292076 containerd[1591]: time="2025-07-11T00:18:43.292060150Z" level=info msg="Container to stop \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:18:43.358109 containerd[1591]: time="2025-07-11T00:18:43.357986838Z" level=info msg="shim disconnected" id=34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4 namespace=k8s.io Jul 11 00:18:43.358109 containerd[1591]: time="2025-07-11T00:18:43.358089894Z" level=warning msg="cleaning up after shim disconnected" 
id=34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4 namespace=k8s.io Jul 11 00:18:43.358109 containerd[1591]: time="2025-07-11T00:18:43.358109041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:18:43.396089 containerd[1591]: time="2025-07-11T00:18:43.395875473Z" level=info msg="TearDown network for sandbox \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\" successfully" Jul 11 00:18:43.396089 containerd[1591]: time="2025-07-11T00:18:43.395934395Z" level=info msg="StopPodSandbox for \"34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4\" returns successfully" Jul 11 00:18:43.417704 kubelet[2787]: I0711 00:18:43.414491 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed030c49-5a5e-44e0-bb68-d63d453cf142-cilium-config-path\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.417704 kubelet[2787]: I0711 00:18:43.414644 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-etc-cni-netd\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.417704 kubelet[2787]: I0711 00:18:43.415077 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed030c49-5a5e-44e0-bb68-d63d453cf142-hubble-tls\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.417704 kubelet[2787]: I0711 00:18:43.415119 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed030c49-5a5e-44e0-bb68-d63d453cf142-clustermesh-secrets\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.417704 kubelet[2787]: I0711 00:18:43.415201 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-host-proc-sys-kernel\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.417704 kubelet[2787]: I0711 00:18:43.415275 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-host-proc-sys-net\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.422527 kubelet[2787]: I0711 00:18:43.415309 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f0988df-97d2-48c0-9e63-ac9993404378-cilium-config-path\") pod \"6f0988df-97d2-48c0-9e63-ac9993404378\" (UID: \"6f0988df-97d2-48c0-9e63-ac9993404378\") " Jul 11 00:18:43.422527 kubelet[2787]: I0711 00:18:43.415389 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khr4k\" (UniqueName: \"kubernetes.io/projected/ed030c49-5a5e-44e0-bb68-d63d453cf142-kube-api-access-khr4k\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.422527 kubelet[2787]: I0711 00:18:43.415415 2787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-cni-path\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.422527 kubelet[2787]: I0711 00:18:43.415497 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-lib-modules\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.422527 kubelet[2787]: I0711 00:18:43.415566 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-cilium-run\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.422527 kubelet[2787]: I0711 00:18:43.415604 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt2xg\" (UniqueName: \"kubernetes.io/projected/6f0988df-97d2-48c0-9e63-ac9993404378-kube-api-access-rt2xg\") pod \"6f0988df-97d2-48c0-9e63-ac9993404378\" (UID: \"6f0988df-97d2-48c0-9e63-ac9993404378\") " Jul 11 00:18:43.422783 kubelet[2787]: I0711 00:18:43.415718 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-bpf-maps\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.422783 kubelet[2787]: I0711 00:18:43.415746 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-hostproc\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.422783 kubelet[2787]: I0711 00:18:43.415766 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-xtables-lock\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.422783 kubelet[2787]: I0711 00:18:43.415795 2787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-cilium-cgroup\") pod \"ed030c49-5a5e-44e0-bb68-d63d453cf142\" (UID: \"ed030c49-5a5e-44e0-bb68-d63d453cf142\") " Jul 11 00:18:43.422783 kubelet[2787]: I0711 00:18:43.415909 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:18:43.422783 kubelet[2787]: I0711 00:18:43.418622 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:18:43.422984 kubelet[2787]: I0711 00:18:43.420620 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:18:43.422984 kubelet[2787]: I0711 00:18:43.420707 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:18:43.423215 kubelet[2787]: I0711 00:18:43.423167 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:18:43.423814 kubelet[2787]: I0711 00:18:43.423787 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-cni-path" (OuterVolumeSpecName: "cni-path") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:18:43.423925 kubelet[2787]: I0711 00:18:43.423908 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:18:43.426240 kubelet[2787]: I0711 00:18:43.426155 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:18:43.426876 kubelet[2787]: I0711 00:18:43.426829 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-hostproc" (OuterVolumeSpecName: "hostproc") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:18:43.426993 kubelet[2787]: I0711 00:18:43.426970 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed030c49-5a5e-44e0-bb68-d63d453cf142-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:18:43.427057 kubelet[2787]: I0711 00:18:43.427015 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:18:43.430270 kubelet[2787]: I0711 00:18:43.430130 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed030c49-5a5e-44e0-bb68-d63d453cf142-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 11 00:18:43.430270 kubelet[2787]: I0711 00:18:43.430189 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f0988df-97d2-48c0-9e63-ac9993404378-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f0988df-97d2-48c0-9e63-ac9993404378" (UID: "6f0988df-97d2-48c0-9e63-ac9993404378"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 11 00:18:43.431018 kubelet[2787]: I0711 00:18:43.430975 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed030c49-5a5e-44e0-bb68-d63d453cf142-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 11 00:18:43.431970 kubelet[2787]: I0711 00:18:43.431932 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed030c49-5a5e-44e0-bb68-d63d453cf142-kube-api-access-khr4k" (OuterVolumeSpecName: "kube-api-access-khr4k") pod "ed030c49-5a5e-44e0-bb68-d63d453cf142" (UID: "ed030c49-5a5e-44e0-bb68-d63d453cf142"). InnerVolumeSpecName "kube-api-access-khr4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:18:43.432385 kubelet[2787]: I0711 00:18:43.432359 2787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f0988df-97d2-48c0-9e63-ac9993404378-kube-api-access-rt2xg" (OuterVolumeSpecName: "kube-api-access-rt2xg") pod "6f0988df-97d2-48c0-9e63-ac9993404378" (UID: "6f0988df-97d2-48c0-9e63-ac9993404378"). InnerVolumeSpecName "kube-api-access-rt2xg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:18:43.516423 kubelet[2787]: I0711 00:18:43.516208 2787 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed030c49-5a5e-44e0-bb68-d63d453cf142-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516423 kubelet[2787]: I0711 00:18:43.516262 2787 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516423 kubelet[2787]: I0711 00:18:43.516279 2787 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516423 kubelet[2787]: I0711 00:18:43.516291 2787 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f0988df-97d2-48c0-9e63-ac9993404378-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516423 kubelet[2787]: I0711 00:18:43.516305 2787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khr4k\" (UniqueName: \"kubernetes.io/projected/ed030c49-5a5e-44e0-bb68-d63d453cf142-kube-api-access-khr4k\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516423 kubelet[2787]: I0711 00:18:43.516320 2787 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516423 kubelet[2787]: I0711 00:18:43.516333 2787 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516423 kubelet[2787]: I0711 00:18:43.516345 2787 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516958 kubelet[2787]: I0711 00:18:43.516356 2787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt2xg\" (UniqueName: \"kubernetes.io/projected/6f0988df-97d2-48c0-9e63-ac9993404378-kube-api-access-rt2xg\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516958 kubelet[2787]: I0711 00:18:43.516368 2787 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516958 kubelet[2787]: I0711 00:18:43.516379 2787 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516958 kubelet[2787]: I0711 00:18:43.516389 2787 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516958 kubelet[2787]: I0711 00:18:43.516401 2787 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-cilium-cgroup\") on node 
\"localhost\" DevicePath \"\"" Jul 11 00:18:43.516958 kubelet[2787]: I0711 00:18:43.516414 2787 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed030c49-5a5e-44e0-bb68-d63d453cf142-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516958 kubelet[2787]: I0711 00:18:43.516426 2787 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed030c49-5a5e-44e0-bb68-d63d453cf142-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.516958 kubelet[2787]: I0711 00:18:43.516439 2787 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed030c49-5a5e-44e0-bb68-d63d453cf142-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 11 00:18:43.579951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3eab1aa66e8408a002041b6c8f6cc82e52d40613d625296f6e4fbe448c2083e-rootfs.mount: Deactivated successfully. Jul 11 00:18:43.580180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4-rootfs.mount: Deactivated successfully. Jul 11 00:18:43.580362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34946f57679f708653d50a5a971f59eff9338f36147c356524b7b5e0ae27feb4-shm.mount: Deactivated successfully. Jul 11 00:18:43.580559 systemd[1]: var-lib-kubelet-pods-ed030c49\x2d5a5e\x2d44e0\x2dbb68\x2dd63d453cf142-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkhr4k.mount: Deactivated successfully. Jul 11 00:18:43.580772 systemd[1]: var-lib-kubelet-pods-6f0988df\x2d97d2\x2d48c0\x2d9e63\x2dac9993404378-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drt2xg.mount: Deactivated successfully. Jul 11 00:18:43.580957 systemd[1]: var-lib-kubelet-pods-ed030c49\x2d5a5e\x2d44e0\x2dbb68\x2dd63d453cf142-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 11 00:18:43.581163 systemd[1]: var-lib-kubelet-pods-ed030c49\x2d5a5e\x2d44e0\x2dbb68\x2dd63d453cf142-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 11 00:18:43.634532 kubelet[2787]: E0711 00:18:43.634467 2787 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 00:18:43.657204 systemd[1]: Started sshd@30-10.0.0.53:22-10.0.0.1:50786.service - OpenSSH per-connection server daemon (10.0.0.1:50786). Jul 11 00:18:43.662600 sshd[4535]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:43.668994 systemd[1]: sshd@29-10.0.0.53:22-10.0.0.1:50782.service: Deactivated successfully. Jul 11 00:18:43.669765 systemd-logind[1558]: Session 30 logged out. Waiting for processes to exit. Jul 11 00:18:43.672146 systemd[1]: session-30.scope: Deactivated successfully. Jul 11 00:18:43.673087 systemd-logind[1558]: Removed session 30. Jul 11 00:18:43.812246 sshd[4705]: Accepted publickey for core from 10.0.0.1 port 50786 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:43.814993 sshd[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:43.825255 systemd-logind[1558]: New session 31 of user core. Jul 11 00:18:43.840156 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jul 11 00:18:44.142044 kubelet[2787]: I0711 00:18:44.141879 2787 scope.go:117] "RemoveContainer" containerID="dc328391c38b7d28aaf2f8ba3670a64e3538a521ed657bc7f0415d909316424c" Jul 11 00:18:44.147872 containerd[1591]: time="2025-07-11T00:18:44.147814903Z" level=info msg="RemoveContainer for \"dc328391c38b7d28aaf2f8ba3670a64e3538a521ed657bc7f0415d909316424c\"" Jul 11 00:18:44.157416 containerd[1591]: time="2025-07-11T00:18:44.156995448Z" level=info msg="RemoveContainer for \"dc328391c38b7d28aaf2f8ba3670a64e3538a521ed657bc7f0415d909316424c\" returns successfully" Jul 11 00:18:44.158016 kubelet[2787]: I0711 00:18:44.157429 2787 scope.go:117] "RemoveContainer" containerID="6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081" Jul 11 00:18:44.160123 containerd[1591]: time="2025-07-11T00:18:44.159732507Z" level=info msg="RemoveContainer for \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\"" Jul 11 00:18:44.172235 containerd[1591]: time="2025-07-11T00:18:44.172134936Z" level=info msg="RemoveContainer for \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\" returns successfully" Jul 11 00:18:44.175499 kubelet[2787]: I0711 00:18:44.175349 2787 scope.go:117] "RemoveContainer" containerID="cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288" Jul 11 00:18:44.179117 containerd[1591]: time="2025-07-11T00:18:44.179031255Z" level=info msg="RemoveContainer for \"cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288\"" Jul 11 00:18:44.190720 containerd[1591]: time="2025-07-11T00:18:44.190578854Z" level=info msg="RemoveContainer for \"cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288\" returns successfully" Jul 11 00:18:44.191116 kubelet[2787]: I0711 00:18:44.191023 2787 scope.go:117] "RemoveContainer" containerID="ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30" Jul 11 00:18:44.192786 containerd[1591]: time="2025-07-11T00:18:44.192728974Z" level=info msg="RemoveContainer for \"ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30\"" Jul 11 00:18:44.199291 containerd[1591]: time="2025-07-11T00:18:44.199222826Z" level=info msg="RemoveContainer for \"ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30\" returns successfully" Jul 11 00:18:44.199604 kubelet[2787]: I0711 00:18:44.199551 2787 scope.go:117] "RemoveContainer" containerID="3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e" Jul 11 00:18:44.204469 containerd[1591]: time="2025-07-11T00:18:44.204391972Z" level=info msg="RemoveContainer for \"3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e\"" Jul 11 00:18:44.228383 containerd[1591]: time="2025-07-11T00:18:44.228183417Z" level=info msg="RemoveContainer for \"3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e\" returns successfully" Jul 11 00:18:44.228735 kubelet[2787]: I0711 00:18:44.228633 2787 scope.go:117] "RemoveContainer" containerID="fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d" Jul 11 00:18:44.231176 containerd[1591]: time="2025-07-11T00:18:44.231123133Z" level=info msg="RemoveContainer for \"fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d\"" Jul 11 00:18:44.237357 containerd[1591]: time="2025-07-11T00:18:44.237219126Z" level=info msg="RemoveContainer for \"fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d\" returns successfully" Jul 11 00:18:44.237643 kubelet[2787]: I0711 00:18:44.237593 2787 scope.go:117] "RemoveContainer" 
containerID="6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081" Jul 11 00:18:44.238125 containerd[1591]: time="2025-07-11T00:18:44.238046745Z" level=error msg="ContainerStatus for \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\": not found" Jul 11 00:18:44.257405 kubelet[2787]: E0711 00:18:44.257029 2787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\": not found" containerID="6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081" Jul 11 00:18:44.257405 kubelet[2787]: I0711 00:18:44.257104 2787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081"} err="failed to get container status \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\": rpc error: code = NotFound desc = an error occurred when try to find container \"6588874db1ff53a3004b7edcea6f65fab6b8fc42817fbbb81243dc61224e4081\": not found" Jul 11 00:18:44.257405 kubelet[2787]: I0711 00:18:44.257210 2787 scope.go:117] "RemoveContainer" containerID="cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288" Jul 11 00:18:44.257906 containerd[1591]: time="2025-07-11T00:18:44.257815790Z" level=error msg="ContainerStatus for \"cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288\": not found" Jul 11 00:18:44.258155 kubelet[2787]: E0711 00:18:44.258106 2787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288\": not found" containerID="cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288" Jul 11 00:18:44.258155 kubelet[2787]: I0711 00:18:44.258134 2787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288"} err="failed to get container status \"cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288\": rpc error: code = NotFound desc = an error occurred when try to find container \"cec2290c5a1c1d7c399e72d3c9c51d31450d2fd8cde1556a8ef2c868c71a0288\": not found" Jul 11 00:18:44.258155 kubelet[2787]: I0711 00:18:44.258153 2787 scope.go:117] "RemoveContainer" containerID="ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30" Jul 11 00:18:44.258617 containerd[1591]: time="2025-07-11T00:18:44.258573895Z" level=error msg="ContainerStatus for \"ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30\": not found" Jul 11 00:18:44.258989 kubelet[2787]: E0711 00:18:44.258776 2787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30\": not found" 
containerID="ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30" Jul 11 00:18:44.258989 kubelet[2787]: I0711 00:18:44.258812 2787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30"} err="failed to get container status \"ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac7b64208fc918300d9d56169985bd7ed50fc466b919f19589dd1f3829acbf30\": not found" Jul 11 00:18:44.258989 kubelet[2787]: I0711 00:18:44.258827 2787 scope.go:117] "RemoveContainer" containerID="3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e" Jul 11 00:18:44.260076 containerd[1591]: time="2025-07-11T00:18:44.260005064Z" level=error msg="ContainerStatus for \"3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e\": not found" Jul 11 00:18:44.260320 kubelet[2787]: E0711 00:18:44.260190 2787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e\": not found" containerID="3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e" Jul 11 00:18:44.260320 kubelet[2787]: I0711 00:18:44.260226 2787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e"} err="failed to get container status \"3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3a511d61dc699a58b576000b917e52ef68781908a09fca088b2a052c97ebc97e\": not found" Jul 11 00:18:44.260320 kubelet[2787]: I0711 00:18:44.260250 2787 scope.go:117] "RemoveContainer" containerID="fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d" Jul 11 00:18:44.260816 containerd[1591]: time="2025-07-11T00:18:44.260503605Z" level=error msg="ContainerStatus for \"fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d\": not found" Jul 11 00:18:44.260879 kubelet[2787]: E0711 00:18:44.260710 2787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d\": not found" containerID="fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d" Jul 11 00:18:44.260879 kubelet[2787]: I0711 00:18:44.260731 2787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d"} err="failed to get container status \"fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd79cc1ef902fa4b864fdb1ae2f14fb7a1744ce44c0948bd2eaa131945c0495d\": not found" Jul 11 00:18:44.482243 kubelet[2787]: I0711 00:18:44.482111 2787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f0988df-97d2-48c0-9e63-ac9993404378" 
path="/var/lib/kubelet/pods/6f0988df-97d2-48c0-9e63-ac9993404378/volumes" Jul 11 00:18:44.482948 kubelet[2787]: I0711 00:18:44.482917 2787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed030c49-5a5e-44e0-bb68-d63d453cf142" path="/var/lib/kubelet/pods/ed030c49-5a5e-44e0-bb68-d63d453cf142/volumes" Jul 11 00:18:45.642763 sshd[4705]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:45.656728 kubelet[2787]: E0711 00:18:45.655526 2787 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed030c49-5a5e-44e0-bb68-d63d453cf142" containerName="mount-cgroup" Jul 11 00:18:45.656728 kubelet[2787]: E0711 00:18:45.655565 2787 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed030c49-5a5e-44e0-bb68-d63d453cf142" containerName="mount-bpf-fs" Jul 11 00:18:45.656728 kubelet[2787]: E0711 00:18:45.655572 2787 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f0988df-97d2-48c0-9e63-ac9993404378" containerName="cilium-operator" Jul 11 00:18:45.656728 kubelet[2787]: E0711 00:18:45.655579 2787 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed030c49-5a5e-44e0-bb68-d63d453cf142" containerName="apply-sysctl-overwrites" Jul 11 00:18:45.658735 kubelet[2787]: E0711 00:18:45.656761 2787 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed030c49-5a5e-44e0-bb68-d63d453cf142" containerName="clean-cilium-state" Jul 11 00:18:45.658735 kubelet[2787]: E0711 00:18:45.657285 2787 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed030c49-5a5e-44e0-bb68-d63d453cf142" containerName="cilium-agent" Jul 11 00:18:45.658735 kubelet[2787]: I0711 00:18:45.657608 2787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed030c49-5a5e-44e0-bb68-d63d453cf142" containerName="cilium-agent" Jul 11 00:18:45.658735 kubelet[2787]: I0711 00:18:45.657623 2787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0988df-97d2-48c0-9e63-ac9993404378" containerName="cilium-operator" Jul 11 00:18:45.657080 systemd[1]: Started sshd@31-10.0.0.53:22-10.0.0.1:50788.service - OpenSSH per-connection server daemon (10.0.0.1:50788). Jul 11 00:18:45.659417 systemd[1]: sshd@30-10.0.0.53:22-10.0.0.1:50786.service: Deactivated successfully. Jul 11 00:18:45.669907 systemd[1]: session-31.scope: Deactivated successfully. Jul 11 00:18:45.676494 systemd-logind[1558]: Session 31 logged out. Waiting for processes to exit. Jul 11 00:18:45.685619 systemd-logind[1558]: Removed session 31. Jul 11 00:18:45.713462 sshd[4719]: Accepted publickey for core from 10.0.0.1 port 50788 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:18:45.716160 sshd[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:45.722264 systemd-logind[1558]: New session 32 of user core. Jul 11 00:18:45.731568 systemd[1]: Started session-32.scope - Session 32 of User core. Jul 11 00:18:45.789732 sshd[4719]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:45.797989 systemd[1]: Started sshd@32-10.0.0.53:22-10.0.0.1:50794.service - OpenSSH per-connection server daemon (10.0.0.1:50794). Jul 11 00:18:45.798739 systemd[1]: sshd@31-10.0.0.53:22-10.0.0.1:50788.service: Deactivated successfully. Jul 11 00:18:45.802994 systemd-logind[1558]: Session 32 logged out. Waiting for processes to exit. Jul 11 00:18:45.803997 systemd[1]: session-32.scope: Deactivated successfully. Jul 11 00:18:45.807054 systemd-logind[1558]: Removed session 32. 
Jul 11 00:18:45.829967 kubelet[2787]: I0711 00:18:45.829898 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a560290-6b67-40c2-b194-1c2a764b9711-cilium-cgroup\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.829967 kubelet[2787]: I0711 00:18:45.829966 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a560290-6b67-40c2-b194-1c2a764b9711-lib-modules\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.830242 kubelet[2787]: I0711 00:18:45.830000 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a560290-6b67-40c2-b194-1c2a764b9711-clustermesh-secrets\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.830242 kubelet[2787]: I0711 00:18:45.830037 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a560290-6b67-40c2-b194-1c2a764b9711-hostproc\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.830242 kubelet[2787]: I0711 00:18:45.830135 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a560290-6b67-40c2-b194-1c2a764b9711-cilium-config-path\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.830242 kubelet[2787]: I0711 00:18:45.830195 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a560290-6b67-40c2-b194-1c2a764b9711-bpf-maps\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.830242 kubelet[2787]: I0711 00:18:45.830226 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a560290-6b67-40c2-b194-1c2a764b9711-host-proc-sys-kernel\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.830242 kubelet[2787]: I0711 00:18:45.830243 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a560290-6b67-40c2-b194-1c2a764b9711-cilium-run\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.830469 kubelet[2787]: I0711 00:18:45.830260 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a560290-6b67-40c2-b194-1c2a764b9711-cni-path\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.830469 kubelet[2787]: I0711 00:18:45.830278 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2a560290-6b67-40c2-b194-1c2a764b9711-cilium-ipsec-secrets\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.830469 kubelet[2787]: I0711 00:18:45.830296 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a560290-6b67-40c2-b194-1c2a764b9711-xtables-lock\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.830469 kubelet[2787]: I0711 00:18:45.830327 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a560290-6b67-40c2-b194-1c2a764b9711-host-proc-sys-net\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.830469 kubelet[2787]: I0711 00:18:45.830343 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz4zz\" (UniqueName: \"kubernetes.io/projected/2a560290-6b67-40c2-b194-1c2a764b9711-kube-api-access-qz4zz\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.830469 kubelet[2787]: I0711 00:18:45.830393 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a560290-6b67-40c2-b194-1c2a764b9711-etc-cni-netd\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.830647 kubelet[2787]: I0711 00:18:45.830423 2787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a560290-6b67-40c2-b194-1c2a764b9711-hubble-tls\") pod \"cilium-qwlf5\" (UID: \"2a560290-6b67-40c2-b194-1c2a764b9711\") " pod="kube-system/cilium-qwlf5"
Jul 11 00:18:45.833238 sshd[4728]: Accepted publickey for core from 10.0.0.1 port 50794 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0
Jul 11 00:18:45.835717 sshd[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:45.845702 systemd-logind[1558]: New session 33 of user core.
Jul 11 00:18:45.857337 systemd[1]: Started session-33.scope - Session 33 of User core.
Jul 11 00:18:45.993713 kubelet[2787]: E0711 00:18:45.991453 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:45.994086 containerd[1591]: time="2025-07-11T00:18:45.992358307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwlf5,Uid:2a560290-6b67-40c2-b194-1c2a764b9711,Namespace:kube-system,Attempt:0,}"
Jul 11 00:18:46.037447 containerd[1591]: time="2025-07-11T00:18:46.036215394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:18:46.037447 containerd[1591]: time="2025-07-11T00:18:46.037280435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:18:46.037447 containerd[1591]: time="2025-07-11T00:18:46.037304570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:18:46.041185 containerd[1591]: time="2025-07-11T00:18:46.038009304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:18:46.087633 containerd[1591]: time="2025-07-11T00:18:46.087579434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwlf5,Uid:2a560290-6b67-40c2-b194-1c2a764b9711,Namespace:kube-system,Attempt:0,} returns sandbox id \"44f793d84235f7c8653eff2ceab761dfdbb3f7bbd1df189f024e5eaa6c0b096f\""
Jul 11 00:18:46.088750 kubelet[2787]: E0711 00:18:46.088716 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:46.091414 containerd[1591]: time="2025-07-11T00:18:46.091361796Z" level=info msg="CreateContainer within sandbox \"44f793d84235f7c8653eff2ceab761dfdbb3f7bbd1df189f024e5eaa6c0b096f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 11 00:18:46.120692 containerd[1591]: time="2025-07-11T00:18:46.120393493Z" level=info msg="CreateContainer within sandbox \"44f793d84235f7c8653eff2ceab761dfdbb3f7bbd1df189f024e5eaa6c0b096f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4522571dc407204aacf83ded94ba3428b25f2f213075aa5a33543bfc983f0be8\""
Jul 11 00:18:46.121935 containerd[1591]: time="2025-07-11T00:18:46.121884066Z" level=info msg="StartContainer for \"4522571dc407204aacf83ded94ba3428b25f2f213075aa5a33543bfc983f0be8\""
Jul 11 00:18:46.284750 containerd[1591]: time="2025-07-11T00:18:46.284479870Z" level=info msg="StartContainer for \"4522571dc407204aacf83ded94ba3428b25f2f213075aa5a33543bfc983f0be8\" returns successfully"
Jul 11 00:18:46.352825 containerd[1591]: time="2025-07-11T00:18:46.352649033Z" level=info msg="shim disconnected" id=4522571dc407204aacf83ded94ba3428b25f2f213075aa5a33543bfc983f0be8 namespace=k8s.io
Jul 11 00:18:46.352825 containerd[1591]: time="2025-07-11T00:18:46.352818116Z" level=warning msg="cleaning up after shim disconnected" id=4522571dc407204aacf83ded94ba3428b25f2f213075aa5a33543bfc983f0be8 namespace=k8s.io
Jul 11 00:18:46.352825 containerd[1591]: time="2025-07-11T00:18:46.352837983Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:18:47.171472 kubelet[2787]: E0711 00:18:47.171281 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:47.173871 containerd[1591]: time="2025-07-11T00:18:47.173817201Z" level=info msg="CreateContainer within sandbox \"44f793d84235f7c8653eff2ceab761dfdbb3f7bbd1df189f024e5eaa6c0b096f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 11 00:18:47.200395 containerd[1591]: time="2025-07-11T00:18:47.200316039Z" level=info msg="CreateContainer within sandbox \"44f793d84235f7c8653eff2ceab761dfdbb3f7bbd1df189f024e5eaa6c0b096f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ef2750996c9f95597b380dbefad81e80757887e2394839619120c85cc5d26915\""
Jul 11 00:18:47.201243 containerd[1591]: time="2025-07-11T00:18:47.201199463Z" level=info msg="StartContainer for \"ef2750996c9f95597b380dbefad81e80757887e2394839619120c85cc5d26915\""
Jul 11 00:18:47.299239 containerd[1591]: time="2025-07-11T00:18:47.299104476Z" level=info msg="StartContainer for \"ef2750996c9f95597b380dbefad81e80757887e2394839619120c85cc5d26915\" returns successfully"
Jul 11 00:18:47.348295 containerd[1591]: time="2025-07-11T00:18:47.348194060Z" level=info msg="shim disconnected" id=ef2750996c9f95597b380dbefad81e80757887e2394839619120c85cc5d26915 namespace=k8s.io
Jul 11 00:18:47.348295 containerd[1591]: time="2025-07-11T00:18:47.348279021Z" level=warning msg="cleaning up after shim disconnected" id=ef2750996c9f95597b380dbefad81e80757887e2394839619120c85cc5d26915 namespace=k8s.io
Jul 11 00:18:47.348295 containerd[1591]: time="2025-07-11T00:18:47.348291936Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:18:47.940057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef2750996c9f95597b380dbefad81e80757887e2394839619120c85cc5d26915-rootfs.mount: Deactivated successfully.
Jul 11 00:18:48.175606 kubelet[2787]: E0711 00:18:48.175290 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:48.177755 containerd[1591]: time="2025-07-11T00:18:48.177711219Z" level=info msg="CreateContainer within sandbox \"44f793d84235f7c8653eff2ceab761dfdbb3f7bbd1df189f024e5eaa6c0b096f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 11 00:18:48.636796 kubelet[2787]: E0711 00:18:48.636719 2787 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 11 00:18:48.793707 containerd[1591]: time="2025-07-11T00:18:48.793400044Z" level=info msg="CreateContainer within sandbox \"44f793d84235f7c8653eff2ceab761dfdbb3f7bbd1df189f024e5eaa6c0b096f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"465e490f1871b82f79168227b22774c7d9d819b31f60e0d43f1337b2c7a5f62e\""
Jul 11 00:18:48.795732 containerd[1591]: time="2025-07-11T00:18:48.794374071Z" level=info msg="StartContainer for \"465e490f1871b82f79168227b22774c7d9d819b31f60e0d43f1337b2c7a5f62e\""
Jul 11 00:18:49.035808 containerd[1591]: time="2025-07-11T00:18:49.035749934Z" level=info msg="StartContainer for \"465e490f1871b82f79168227b22774c7d9d819b31f60e0d43f1337b2c7a5f62e\" returns successfully"
Jul 11 00:18:49.058585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-465e490f1871b82f79168227b22774c7d9d819b31f60e0d43f1337b2c7a5f62e-rootfs.mount: Deactivated successfully.
Jul 11 00:18:49.181513 kubelet[2787]: E0711 00:18:49.181455 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:49.228614 containerd[1591]: time="2025-07-11T00:18:49.228529222Z" level=info msg="shim disconnected" id=465e490f1871b82f79168227b22774c7d9d819b31f60e0d43f1337b2c7a5f62e namespace=k8s.io
Jul 11 00:18:49.228614 containerd[1591]: time="2025-07-11T00:18:49.228602091Z" level=warning msg="cleaning up after shim disconnected" id=465e490f1871b82f79168227b22774c7d9d819b31f60e0d43f1337b2c7a5f62e namespace=k8s.io
Jul 11 00:18:49.228614 containerd[1591]: time="2025-07-11T00:18:49.228615006Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:18:50.187018 kubelet[2787]: E0711 00:18:50.186933 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:50.188719 containerd[1591]: time="2025-07-11T00:18:50.188534197Z" level=info msg="CreateContainer within sandbox \"44f793d84235f7c8653eff2ceab761dfdbb3f7bbd1df189f024e5eaa6c0b096f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 11 00:18:50.363643 containerd[1591]: time="2025-07-11T00:18:50.363395410Z" level=info msg="CreateContainer within sandbox \"44f793d84235f7c8653eff2ceab761dfdbb3f7bbd1df189f024e5eaa6c0b096f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d44c75f0f766cad8d7dfe0181602136383a65c97693ce28739ce6a87a5bd5715\""
Jul 11 00:18:50.367211 containerd[1591]: time="2025-07-11T00:18:50.366943194Z" level=info msg="StartContainer for \"d44c75f0f766cad8d7dfe0181602136383a65c97693ce28739ce6a87a5bd5715\""
Jul 11 00:18:50.593793 containerd[1591]: time="2025-07-11T00:18:50.593733726Z" level=info msg="StartContainer for \"d44c75f0f766cad8d7dfe0181602136383a65c97693ce28739ce6a87a5bd5715\" returns successfully"
Jul 11 00:18:50.612962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d44c75f0f766cad8d7dfe0181602136383a65c97693ce28739ce6a87a5bd5715-rootfs.mount: Deactivated successfully.
Jul 11 00:18:50.937566 containerd[1591]: time="2025-07-11T00:18:50.937375898Z" level=info msg="shim disconnected" id=d44c75f0f766cad8d7dfe0181602136383a65c97693ce28739ce6a87a5bd5715 namespace=k8s.io
Jul 11 00:18:50.937566 containerd[1591]: time="2025-07-11T00:18:50.937443427Z" level=warning msg="cleaning up after shim disconnected" id=d44c75f0f766cad8d7dfe0181602136383a65c97693ce28739ce6a87a5bd5715 namespace=k8s.io
Jul 11 00:18:50.937566 containerd[1591]: time="2025-07-11T00:18:50.937453706Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:18:51.194574 kubelet[2787]: E0711 00:18:51.194405 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:51.197217 containerd[1591]: time="2025-07-11T00:18:51.197169129Z" level=info msg="CreateContainer within sandbox \"44f793d84235f7c8653eff2ceab761dfdbb3f7bbd1df189f024e5eaa6c0b096f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 11 00:18:51.521548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4203935057.mount: Deactivated successfully.
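The recurring "Nameserver limits exceeded" warning comes from the kubelet's DNS configurer: a pod's resolv.conf may carry at most three nameserver entries, so any extras are dropped and only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) are applied. A small illustrative Go sketch of that truncation, assuming the host resolv.conf path; it mirrors the kubelet's behavior as I understand it rather than reusing its code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// kubelet's per-pod resolv.conf nameserver limit.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf") // assumed source of the upstream servers
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// This is the condition behind the repeated warning in the log.
		fmt.Printf("Nameserver limits exceeded, omitting %d entries\n",
			len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}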
Jul 11 00:18:51.858719 containerd[1591]: time="2025-07-11T00:18:51.858461884Z" level=info msg="CreateContainer within sandbox \"44f793d84235f7c8653eff2ceab761dfdbb3f7bbd1df189f024e5eaa6c0b096f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a2ffa806579fe9e8ad4c2f46d5d02ac801e2280f35c6f16b2fe0c28368960826\""
Jul 11 00:18:51.860096 containerd[1591]: time="2025-07-11T00:18:51.860031618Z" level=info msg="StartContainer for \"a2ffa806579fe9e8ad4c2f46d5d02ac801e2280f35c6f16b2fe0c28368960826\""
Jul 11 00:18:52.160754 containerd[1591]: time="2025-07-11T00:18:52.160487093Z" level=info msg="StartContainer for \"a2ffa806579fe9e8ad4c2f46d5d02ac801e2280f35c6f16b2fe0c28368960826\" returns successfully"
Jul 11 00:18:52.650739 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 11 00:18:52.949975 kubelet[2787]: I0711 00:18:52.949784 2787 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-11T00:18:52Z","lastTransitionTime":"2025-07-11T00:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 11 00:18:53.202613 kubelet[2787]: E0711 00:18:53.202476 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:54.204762 kubelet[2787]: E0711 00:18:54.204724 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:58.187309 systemd-networkd[1244]: lxc_health: Link UP
Jul 11 00:18:58.194105 systemd-networkd[1244]: lxc_health: Gained carrier
Jul 11 00:18:59.343972 systemd-networkd[1244]: lxc_health: Gained IPv6LL
Jul 11 00:18:59.994821 kubelet[2787]: E0711 00:18:59.994533 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:00.218880 kubelet[2787]: E0711 00:19:00.218845 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:00.860636 kubelet[2787]: I0711 00:19:00.860131 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qwlf5" podStartSLOduration=15.86011294 podStartE2EDuration="15.86011294s" podCreationTimestamp="2025-07-11 00:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:18:54.771277998 +0000 UTC m=+146.433166457" watchObservedRunningTime="2025-07-11 00:19:00.86011294 +0000 UTC m=+152.522001399"
Jul 11 00:19:01.220827 kubelet[2787]: E0711 00:19:01.220652 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:03.479638 kubelet[2787]: E0711 00:19:03.479596 2787 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:04.011458 sshd[4728]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:04.016046 systemd[1]: sshd@32-10.0.0.53:22-10.0.0.1:50794.service: Deactivated successfully.
Jul 11 00:19:04.018287 systemd-logind[1558]: Session 33 logged out. Waiting for processes to exit.
Jul 11 00:19:04.018363 systemd[1]: session-33.scope: Deactivated successfully.
Jul 11 00:19:04.020342 systemd-logind[1558]: Removed session 33.
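For context on the readiness flip recorded above ("Node became not ready" at 00:18:52, before cilium-agent brought lxc_health up): the kubelet surfaces the CNI state through the node's Ready condition, which reports False with reason KubeletNotReady while the plugin is uninitialized and returns to True once networking comes up. A hedged client-go sketch that reads that condition from inside the cluster; in-cluster credentials are an assumption, and the node name "localhost" is taken from the log:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the sketch runs in a pod
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// While the CNI plugin is uninitialized this prints Status=False
			// with Reason=KubeletNotReady, matching the log entry above.
			fmt.Printf("Ready=%s reason=%s msg=%s\n", c.Status, c.Reason, c.Message)
		}
	}
}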