Jul 15 05:10:39.834565 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 03:28:48 -00 2025
Jul 15 05:10:39.834587 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:10:39.834596 kernel: BIOS-provided physical RAM map:
Jul 15 05:10:39.834604 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jul 15 05:10:39.834610 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jul 15 05:10:39.834615 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 15 05:10:39.834621 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jul 15 05:10:39.834627 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jul 15 05:10:39.834633 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 15 05:10:39.834638 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 15 05:10:39.834644 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 15 05:10:39.834649 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 15 05:10:39.834657 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jul 15 05:10:39.834663 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 15 05:10:39.834669 kernel: NX (Execute Disable) protection: active
Jul 15 05:10:39.834675 kernel: APIC: Static calls initialized
Jul 15 05:10:39.834681 kernel: SMBIOS 2.8 present.
Jul 15 05:10:39.834690 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Jul 15 05:10:39.834696 kernel: DMI: Memory slots populated: 1/1
Jul 15 05:10:39.834701 kernel: Hypervisor detected: KVM
Jul 15 05:10:39.834707 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 05:10:39.834713 kernel: kvm-clock: using sched offset of 5143271886 cycles
Jul 15 05:10:39.834719 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 05:10:39.834726 kernel: tsc: Detected 1999.998 MHz processor
Jul 15 05:10:39.834732 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 05:10:39.834738 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 05:10:39.834745 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jul 15 05:10:39.834754 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 15 05:10:39.834760 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 05:10:39.834766 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jul 15 05:10:39.834772 kernel: Using GB pages for direct mapping
Jul 15 05:10:39.834778 kernel: ACPI: Early table checksum verification disabled
Jul 15 05:10:39.834784 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Jul 15 05:10:39.834790 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:10:39.834797 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:10:39.834803 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:10:39.834811 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 15 05:10:39.834817 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:10:39.834823 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:10:39.834830 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:10:39.834847 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 05:10:39.834854 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jul 15 05:10:39.834863 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jul 15 05:10:39.834869 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 15 05:10:39.834876 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jul 15 05:10:39.834882 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jul 15 05:10:39.834888 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jul 15 05:10:39.834895 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jul 15 05:10:39.834901 kernel: No NUMA configuration found
Jul 15 05:10:39.834907 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jul 15 05:10:39.834916 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Jul 15 05:10:39.834923 kernel: Zone ranges:
Jul 15 05:10:39.834929 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 05:10:39.834935 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 15 05:10:39.834942 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jul 15 05:10:39.834948 kernel: Device empty
Jul 15 05:10:39.834954 kernel: Movable zone start for each node
Jul 15 05:10:39.834961 kernel: Early memory node ranges
Jul 15 05:10:39.834967 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 15 05:10:39.834973 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jul 15 05:10:39.834982 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jul 15 05:10:39.834988 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jul 15 05:10:39.834995 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 05:10:39.835001 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 15 05:10:39.835007 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 15 05:10:39.835013 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 15 05:10:39.835020 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 05:10:39.835027 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 05:10:39.835033 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 15 05:10:39.835042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 05:10:39.835048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 05:10:39.835054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 05:10:39.835061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 05:10:39.835067 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 05:10:39.835073 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 05:10:39.835079 kernel: TSC deadline timer available
Jul 15 05:10:39.835086 kernel: CPU topo: Max. logical packages: 1
Jul 15 05:10:39.835092 kernel: CPU topo: Max. logical dies: 1
Jul 15 05:10:39.835101 kernel: CPU topo: Max. dies per package: 1
Jul 15 05:10:39.835107 kernel: CPU topo: Max. threads per core: 1
Jul 15 05:10:39.835113 kernel: CPU topo: Num. cores per package: 2
Jul 15 05:10:39.835120 kernel: CPU topo: Num. threads per package: 2
Jul 15 05:10:39.835126 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 15 05:10:39.835132 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 15 05:10:39.835139 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 15 05:10:39.835145 kernel: kvm-guest: setup PV sched yield
Jul 15 05:10:39.835151 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 15 05:10:39.835157 kernel: Booting paravirtualized kernel on KVM
Jul 15 05:10:39.835167 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 05:10:39.835173 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 15 05:10:39.835194 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 15 05:10:39.835997 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 15 05:10:39.836011 kernel: pcpu-alloc: [0] 0 1
Jul 15 05:10:39.836018 kernel: kvm-guest: PV spinlocks enabled
Jul 15 05:10:39.836025 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 05:10:39.836033 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:10:39.836045 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 05:10:39.836051 kernel: random: crng init done
Jul 15 05:10:39.836057 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 05:10:39.836064 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 05:10:39.836070 kernel: Fallback order for Node 0: 0
Jul 15 05:10:39.836077 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jul 15 05:10:39.836083 kernel: Policy zone: Normal
Jul 15 05:10:39.836089 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 05:10:39.836096 kernel: software IO TLB: area num 2.
Jul 15 05:10:39.836105 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 15 05:10:39.836112 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 15 05:10:39.836118 kernel: ftrace: allocated 157 pages with 5 groups
Jul 15 05:10:39.836125 kernel: Dynamic Preempt: voluntary
Jul 15 05:10:39.836131 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 05:10:39.836138 kernel: rcu: RCU event tracing is enabled.
Jul 15 05:10:39.836145 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 15 05:10:39.836152 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 05:10:39.836158 kernel: Rude variant of Tasks RCU enabled.
Jul 15 05:10:39.836167 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 05:10:39.836174 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 05:10:39.836199 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 15 05:10:39.836207 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 05:10:39.836222 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 05:10:39.836232 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 05:10:39.836239 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 15 05:10:39.836245 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 05:10:39.836252 kernel: Console: colour VGA+ 80x25
Jul 15 05:10:39.836258 kernel: printk: legacy console [tty0] enabled
Jul 15 05:10:39.836265 kernel: printk: legacy console [ttyS0] enabled
Jul 15 05:10:39.836271 kernel: ACPI: Core revision 20240827
Jul 15 05:10:39.836281 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 15 05:10:39.836288 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 05:10:39.836295 kernel: x2apic enabled
Jul 15 05:10:39.836302 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 15 05:10:39.836309 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 15 05:10:39.836318 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 15 05:10:39.836325 kernel: kvm-guest: setup PV IPIs
Jul 15 05:10:39.836331 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 15 05:10:39.836339 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Jul 15 05:10:39.836345 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999998)
Jul 15 05:10:39.836352 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 15 05:10:39.836359 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 15 05:10:39.836365 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 15 05:10:39.836375 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 05:10:39.836382 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 05:10:39.836388 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 05:10:39.836395 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 15 05:10:39.836402 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 05:10:39.836408 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 15 05:10:39.836415 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 15 05:10:39.836422 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 15 05:10:39.836429 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 15 05:10:39.836438 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 05:10:39.836445 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 05:10:39.836451 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 05:10:39.836458 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 15 05:10:39.836465 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 05:10:39.836471 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jul 15 05:10:39.836478 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jul 15 05:10:39.836485 kernel: Freeing SMP alternatives memory: 32K
Jul 15 05:10:39.836494 kernel: pid_max: default: 32768 minimum: 301
Jul 15 05:10:39.836500 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 05:10:39.836507 kernel: landlock: Up and running.
Jul 15 05:10:39.836513 kernel: SELinux: Initializing.
Jul 15 05:10:39.836520 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 05:10:39.836527 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 05:10:39.836533 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jul 15 05:10:39.836540 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 15 05:10:39.836546 kernel: ... version: 0
Jul 15 05:10:39.836556 kernel: ... bit width: 48
Jul 15 05:10:39.836562 kernel: ... generic registers: 6
Jul 15 05:10:39.836569 kernel: ... value mask: 0000ffffffffffff
Jul 15 05:10:39.836575 kernel: ... max period: 00007fffffffffff
Jul 15 05:10:39.836582 kernel: ... fixed-purpose events: 0
Jul 15 05:10:39.836589 kernel: ... event mask: 000000000000003f
Jul 15 05:10:39.836597 kernel: signal: max sigframe size: 3376
Jul 15 05:10:39.836603 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 05:10:39.836610 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 05:10:39.836617 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 05:10:39.836626 kernel: smp: Bringing up secondary CPUs ...
Jul 15 05:10:39.836633 kernel: smpboot: x86: Booting SMP configuration:
Jul 15 05:10:39.836639 kernel: .... node #0, CPUs: #1
Jul 15 05:10:39.836646 kernel: smp: Brought up 1 node, 2 CPUs
Jul 15 05:10:39.836652 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Jul 15 05:10:39.836659 kernel: Memory: 3961808K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 227288K reserved, 0K cma-reserved)
Jul 15 05:10:39.836666 kernel: devtmpfs: initialized
Jul 15 05:10:39.836673 kernel: x86/mm: Memory block size: 128MB
Jul 15 05:10:39.836679 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 05:10:39.836689 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 15 05:10:39.836696 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 05:10:39.836702 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 05:10:39.836709 kernel: audit: initializing netlink subsys (disabled)
Jul 15 05:10:39.836716 kernel: audit: type=2000 audit(1752556238.242:1): state=initialized audit_enabled=0 res=1
Jul 15 05:10:39.836722 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 05:10:39.836729 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 05:10:39.836735 kernel: cpuidle: using governor menu
Jul 15 05:10:39.836742 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 05:10:39.836751 kernel: dca service started, version 1.12.1
Jul 15 05:10:39.836757 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 15 05:10:39.836764 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 15 05:10:39.836771 kernel: PCI: Using configuration type 1 for base access
Jul 15 05:10:39.836777 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 05:10:39.836784 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 05:10:39.836791 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 05:10:39.836797 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 05:10:39.836806 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 05:10:39.836813 kernel: ACPI: Added _OSI(Module Device)
Jul 15 05:10:39.836819 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 05:10:39.836826 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 05:10:39.836832 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 05:10:39.836839 kernel: ACPI: Interpreter enabled
Jul 15 05:10:39.836845 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 05:10:39.836852 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 05:10:39.836859 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 05:10:39.836865 kernel: PCI: Using E820 reservations for host bridge windows
Jul 15 05:10:39.836874 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 15 05:10:39.836881 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 05:10:39.837064 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 05:10:39.838283 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 15 05:10:39.838404 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 15 05:10:39.838415 kernel: PCI host bridge to bus 0000:00
Jul 15 05:10:39.838540 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 05:10:39.838647 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 05:10:39.838745 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 05:10:39.838842 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jul 15 05:10:39.838939 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 15 05:10:39.839035 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jul 15 05:10:39.839133 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 05:10:39.839293 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 15 05:10:39.839423 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 15 05:10:39.839534 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 15 05:10:39.839654 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 15 05:10:39.839760 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 15 05:10:39.839865 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 05:10:39.839985 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jul 15 05:10:39.840098 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jul 15 05:10:39.840906 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 15 05:10:39.841024 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 15 05:10:39.841150 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 15 05:10:39.841299 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jul 15 05:10:39.841412 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 15 05:10:39.841520 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 15 05:10:39.841634 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 15 05:10:39.841753 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 15 05:10:39.841860 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 15 05:10:39.841977 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 15 05:10:39.842084 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jul 15 05:10:39.843369 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jul 15 05:10:39.843520 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 15 05:10:39.843631 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 15 05:10:39.843641 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 05:10:39.843649 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 05:10:39.843656 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 05:10:39.843663 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 05:10:39.843669 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 15 05:10:39.843676 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 15 05:10:39.843686 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 15 05:10:39.843693 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 15 05:10:39.843699 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 15 05:10:39.843706 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 15 05:10:39.843712 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 15 05:10:39.843719 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 15 05:10:39.843726 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 15 05:10:39.843732 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 15 05:10:39.843739 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 15 05:10:39.843748 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 15 05:10:39.843755 kernel: iommu: Default domain type: Translated
Jul 15 05:10:39.843762 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 05:10:39.843768 kernel: PCI: Using ACPI for IRQ routing
Jul 15 05:10:39.843775 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 05:10:39.843782 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jul 15 05:10:39.843789 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jul 15 05:10:39.843898 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 15 05:10:39.844007 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 15 05:10:39.844118 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 05:10:39.844127 kernel: vgaarb: loaded
Jul 15 05:10:39.844135 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 15 05:10:39.844142 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 15 05:10:39.844148 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 05:10:39.844155 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 05:10:39.844162 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 05:10:39.844168 kernel: pnp: PnP ACPI init
Jul 15 05:10:39.846349 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 15 05:10:39.846365 kernel: pnp: PnP ACPI: found 5 devices
Jul 15 05:10:39.846373 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 05:10:39.846381 kernel: NET: Registered PF_INET protocol family
Jul 15 05:10:39.846388 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 05:10:39.846395 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 05:10:39.846402 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 05:10:39.846409 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 05:10:39.846420 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 05:10:39.846427 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 05:10:39.846434 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 05:10:39.846441 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 05:10:39.846447 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 05:10:39.846454 kernel: NET: Registered PF_XDP protocol family
Jul 15 05:10:39.846560 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 05:10:39.846666 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 05:10:39.846766 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 05:10:39.846869 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jul 15 05:10:39.846967 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 15 05:10:39.847065 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jul 15 05:10:39.847074 kernel: PCI: CLS 0 bytes, default 64
Jul 15 05:10:39.847082 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 15 05:10:39.847089 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jul 15 05:10:39.847096 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Jul 15 05:10:39.847103 kernel: Initialise system trusted keyrings
Jul 15 05:10:39.847110 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 05:10:39.847120 kernel: Key type asymmetric registered
Jul 15 05:10:39.847126 kernel: Asymmetric key parser 'x509' registered
Jul 15 05:10:39.847133 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 15 05:10:39.847140 kernel: io scheduler mq-deadline registered
Jul 15 05:10:39.847147 kernel: io scheduler kyber registered
Jul 15 05:10:39.847153 kernel: io scheduler bfq registered
Jul 15 05:10:39.847160 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 05:10:39.847167 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 15 05:10:39.847174 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 15 05:10:39.847210 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 05:10:39.847217 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 05:10:39.847224 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 05:10:39.847231 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 05:10:39.847238 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 05:10:39.847358 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 15 05:10:39.847369 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 05:10:39.847469 kernel: rtc_cmos 00:03: registered as rtc0
Jul 15 05:10:39.847583 kernel: rtc_cmos 00:03: setting system clock to 2025-07-15T05:10:39 UTC (1752556239)
Jul 15 05:10:39.847690 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 15 05:10:39.847699 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 15 05:10:39.847706 kernel: NET: Registered PF_INET6 protocol family
Jul 15 05:10:39.847713 kernel: Segment Routing with IPv6
Jul 15 05:10:39.847719 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 05:10:39.847726 kernel: NET: Registered PF_PACKET protocol family
Jul 15 05:10:39.847733 kernel: Key type dns_resolver registered
Jul 15 05:10:39.847740 kernel: IPI shorthand broadcast: enabled
Jul 15 05:10:39.847750 kernel: sched_clock: Marking stable (2295003445, 175722816)->(2487817977, -17091716)
Jul 15 05:10:39.847757 kernel: registered taskstats version 1
Jul 15 05:10:39.847764 kernel: Loading compiled-in X.509 certificates
Jul 15 05:10:39.847771 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: a24478b628e55368911ce1800a2bd6bc158938c7'
Jul 15 05:10:39.847778 kernel: Demotion targets for Node 0: null
Jul 15 05:10:39.847784 kernel: Key type .fscrypt registered
Jul 15 05:10:39.847791 kernel: Key type fscrypt-provisioning registered
Jul 15 05:10:39.847797 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 05:10:39.847807 kernel: ima: Allocated hash algorithm: sha1
Jul 15 05:10:39.847813 kernel: ima: No architecture policies found
Jul 15 05:10:39.847820 kernel: clk: Disabling unused clocks
Jul 15 05:10:39.847826 kernel: Warning: unable to open an initial console.
Jul 15 05:10:39.847833 kernel: Freeing unused kernel image (initmem) memory: 54608K
Jul 15 05:10:39.847840 kernel: Write protecting the kernel read-only data: 24576k
Jul 15 05:10:39.847847 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 15 05:10:39.847854 kernel: Run /init as init process
Jul 15 05:10:39.847860 kernel: with arguments:
Jul 15 05:10:39.847869 kernel: /init
Jul 15 05:10:39.847876 kernel: with environment:
Jul 15 05:10:39.847882 kernel: HOME=/
Jul 15 05:10:39.847889 kernel: TERM=linux
Jul 15 05:10:39.847896 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 05:10:39.847921 systemd[1]: Successfully made /usr/ read-only.
Jul 15 05:10:39.847933 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 05:10:39.847942 systemd[1]: Detected virtualization kvm.
Jul 15 05:10:39.847952 systemd[1]: Detected architecture x86-64.
Jul 15 05:10:39.847959 systemd[1]: Running in initrd.
Jul 15 05:10:39.847967 systemd[1]: No hostname configured, using default hostname.
Jul 15 05:10:39.847974 systemd[1]: Hostname set to .
Jul 15 05:10:39.847982 systemd[1]: Initializing machine ID from random generator.
Jul 15 05:10:39.847990 systemd[1]: Queued start job for default target initrd.target.
Jul 15 05:10:39.847997 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 05:10:39.848005 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 05:10:39.848016 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 05:10:39.848023 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 05:10:39.848031 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 05:10:39.848040 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 05:10:39.848048 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 05:10:39.848056 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 05:10:39.848063 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 05:10:39.848073 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 05:10:39.848081 systemd[1]: Reached target paths.target - Path Units.
Jul 15 05:10:39.848088 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 05:10:39.848095 systemd[1]: Reached target swap.target - Swaps.
Jul 15 05:10:39.848103 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 05:10:39.848110 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 05:10:39.848118 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 05:10:39.848125 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 05:10:39.848135 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 05:10:39.848153 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 05:10:39.848160 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 05:10:39.848168 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 05:10:39.848175 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 05:10:39.848323 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 05:10:39.848339 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 05:10:39.848347 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 05:10:39.848358 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 05:10:39.848366 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 05:10:39.848374 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 05:10:39.848383 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 05:10:39.848390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:10:39.848398 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 05:10:39.848408 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 05:10:39.848440 systemd-journald[207]: Collecting audit messages is disabled.
Jul 15 05:10:39.848463 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 05:10:39.848471 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 05:10:39.848480 systemd-journald[207]: Journal started
Jul 15 05:10:39.848498 systemd-journald[207]: Runtime Journal (/run/log/journal/1a8f2fe88fa54bd5aaee8bccc933d1fe) is 8M, max 78.5M, 70.5M free.
Jul 15 05:10:39.822337 systemd-modules-load[208]: Inserted module 'overlay'
Jul 15 05:10:39.853352 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 05:10:39.858219 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 05:10:39.860276 kernel: Bridge firewalling registered
Jul 15 05:10:39.860246 systemd-modules-load[208]: Inserted module 'br_netfilter'
Jul 15 05:10:39.862297 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 05:10:39.927239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 05:10:39.928139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:10:39.929264 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 05:10:39.933061 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 05:10:39.933761 systemd-tmpfiles[221]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 05:10:39.939333 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 05:10:39.942453 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 05:10:39.950112 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 05:10:39.961969 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 05:10:39.964801 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 05:10:39.967972 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 05:10:39.970024 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 05:10:39.972302 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 05:10:39.986721 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb
Jul 15 05:10:40.006261 systemd-resolved[245]: Positive Trust Anchors:
Jul 15 05:10:40.006274 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 05:10:40.006296 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 05:10:40.011342 systemd-resolved[245]: Defaulting to hostname 'linux'.
Jul 15 05:10:40.012220 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 05:10:40.012895 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 05:10:40.076234 kernel: SCSI subsystem initialized
Jul 15 05:10:40.083211 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 05:10:40.094229 kernel: iscsi: registered transport (tcp)
Jul 15 05:10:40.113220 kernel: iscsi: registered transport (qla4xxx)
Jul 15 05:10:40.113255 kernel: QLogic iSCSI HBA Driver
Jul 15 05:10:40.134232 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 05:10:40.150599 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 05:10:40.152569 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 05:10:40.207941 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 05:10:40.210176 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 05:10:40.261220 kernel: raid6: avx2x4 gen() 33628 MB/s
Jul 15 05:10:40.275211 kernel: raid6: avx2x2 gen() 34666 MB/s
Jul 15 05:10:40.293409 kernel: raid6: avx2x1 gen() 24030 MB/s
Jul 15 05:10:40.293426 kernel: raid6: using algorithm avx2x2 gen() 34666 MB/s
Jul 15 05:10:40.312221 kernel: raid6: .... xor() 35726 MB/s, rmw enabled
Jul 15 05:10:40.312239 kernel: raid6: using avx2x2 recovery algorithm
Jul 15 05:10:40.328217 kernel: xor: automatically using best checksumming function avx
Jul 15 05:10:40.457227 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 05:10:40.465842 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 05:10:40.467811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 05:10:40.497594 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Jul 15 05:10:40.502625 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 05:10:40.505087 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 05:10:40.529110 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Jul 15 05:10:40.558620 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 05:10:40.560760 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 05:10:40.623924 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 05:10:40.626820 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 05:10:40.684231 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Jul 15 05:10:40.690249 kernel: cryptd: max_cpu_qlen set to 1000
Jul 15 05:10:40.779442 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 15 05:10:40.791749 kernel: scsi host0: Virtio SCSI HBA
Jul 15 05:10:40.799203 kernel: libata version 3.00 loaded.
Jul 15 05:10:40.806135 kernel: AES CTR mode by8 optimization enabled
Jul 15 05:10:40.806172 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jul 15 05:10:40.804004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 05:10:40.804099 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:10:40.809077 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:10:40.812465 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:10:40.818055 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 05:10:40.866286 kernel: ahci 0000:00:1f.2: version 3.0
Jul 15 05:10:40.872006 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 15 05:10:40.872030 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 15 05:10:40.873990 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 15 05:10:40.874173 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 15 05:10:40.878232 kernel: scsi host1: ahci
Jul 15 05:10:40.878641 kernel: scsi host2: ahci
Jul 15 05:10:40.879013 kernel: scsi host3: ahci
Jul 15 05:10:40.881309 kernel: scsi host4: ahci
Jul 15 05:10:40.881467 kernel: scsi host5: ahci
Jul 15 05:10:40.882723 kernel: scsi host6: ahci
Jul 15 05:10:40.882857 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
Jul 15 05:10:40.882868 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
Jul 15 05:10:40.882882 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
Jul 15 05:10:40.882891 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
Jul 15 05:10:40.882899 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
Jul 15 05:10:40.882906 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
Jul 15 05:10:40.883234 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jul 15 05:10:40.884100 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jul 15 05:10:40.884459 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 15 05:10:40.884578 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jul 15 05:10:40.884695 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 15 05:10:40.896244 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 05:10:40.896283 kernel: GPT:9289727 != 167739391
Jul 15 05:10:40.896297 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 05:10:40.896310 kernel: GPT:9289727 != 167739391
Jul 15 05:10:40.896321 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 05:10:40.896334 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 05:10:40.896345 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 15 05:10:40.954709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:10:41.195905 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 15 05:10:41.195938 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 15 05:10:41.195949 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 15 05:10:41.196199 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jul 15 05:10:41.198715 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 15 05:10:41.199204 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 15 05:10:41.254297 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jul 15 05:10:41.261797 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jul 15 05:10:41.273569 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 15 05:10:41.274339 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 05:10:41.281197 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jul 15 05:10:41.281727 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jul 15 05:10:41.284301 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 05:10:41.284836 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 05:10:41.286058 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 05:10:41.289280 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 05:10:41.290417 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 05:10:41.306635 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 05:10:41.307929 disk-uuid[630]: Primary Header is updated.
Jul 15 05:10:41.307929 disk-uuid[630]: Secondary Entries is updated.
Jul 15 05:10:41.307929 disk-uuid[630]: Secondary Header is updated.
Jul 15 05:10:41.316051 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 05:10:42.334224 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 15 05:10:42.334573 disk-uuid[637]: The operation has completed successfully.
Jul 15 05:10:42.377330 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 05:10:42.377440 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 05:10:42.401594 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 05:10:42.413686 sh[652]: Success
Jul 15 05:10:42.430216 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 05:10:42.430249 kernel: device-mapper: uevent: version 1.0.3
Jul 15 05:10:42.431394 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 05:10:42.441226 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 15 05:10:42.484859 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 05:10:42.488305 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 05:10:42.499970 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 05:10:42.513193 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 05:10:42.513218 kernel: BTRFS: device fsid eb96c768-dac4-4ca9-ae1d-82815d4ce00b devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (664)
Jul 15 05:10:42.518358 kernel: BTRFS info (device dm-0): first mount of filesystem eb96c768-dac4-4ca9-ae1d-82815d4ce00b
Jul 15 05:10:42.518376 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:10:42.518385 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 05:10:42.527794 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 05:10:42.528694 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 05:10:42.529556 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 05:10:42.530174 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 05:10:42.534436 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 05:10:42.573244 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (692)
Jul 15 05:10:42.578216 kernel: BTRFS info (device sda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:10:42.578247 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:10:42.578256 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 05:10:42.587236 kernel: BTRFS info (device sda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:10:42.588029 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 05:10:42.589804 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 05:10:42.692095 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 05:10:42.695511 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 05:10:42.695724 ignition[747]: Ignition 2.21.0
Jul 15 05:10:42.695734 ignition[747]: Stage: fetch-offline
Jul 15 05:10:42.695777 ignition[747]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:10:42.695788 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:10:42.695908 ignition[747]: parsed url from cmdline: ""
Jul 15 05:10:42.695912 ignition[747]: no config URL provided
Jul 15 05:10:42.695918 ignition[747]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 05:10:42.695927 ignition[747]: no config at "/usr/lib/ignition/user.ign"
Jul 15 05:10:42.695934 ignition[747]: failed to fetch config: resource requires networking
Jul 15 05:10:42.696133 ignition[747]: Ignition finished successfully
Jul 15 05:10:42.701277 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 05:10:42.736563 systemd-networkd[837]: lo: Link UP
Jul 15 05:10:42.736575 systemd-networkd[837]: lo: Gained carrier
Jul 15 05:10:42.738082 systemd-networkd[837]: Enumeration completed
Jul 15 05:10:42.738326 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 05:10:42.738622 systemd-networkd[837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 05:10:42.738627 systemd-networkd[837]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 05:10:42.740824 systemd-networkd[837]: eth0: Link UP
Jul 15 05:10:42.740828 systemd-networkd[837]: eth0: Gained carrier
Jul 15 05:10:42.740837 systemd-networkd[837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 05:10:42.741156 systemd[1]: Reached target network.target - Network.
Jul 15 05:10:42.743500 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 15 05:10:42.769454 ignition[841]: Ignition 2.21.0
Jul 15 05:10:42.769468 ignition[841]: Stage: fetch
Jul 15 05:10:42.769574 ignition[841]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:10:42.769585 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:10:42.769822 ignition[841]: parsed url from cmdline: ""
Jul 15 05:10:42.769827 ignition[841]: no config URL provided
Jul 15 05:10:42.769836 ignition[841]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 05:10:42.769844 ignition[841]: no config at "/usr/lib/ignition/user.ign"
Jul 15 05:10:42.769867 ignition[841]: PUT http://169.254.169.254/v1/token: attempt #1
Jul 15 05:10:42.770165 ignition[841]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 15 05:10:42.970318 ignition[841]: PUT http://169.254.169.254/v1/token: attempt #2
Jul 15 05:10:42.970402 ignition[841]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 15 05:10:43.218304 systemd-networkd[837]: eth0: DHCPv4 address 172.236.104.60/24, gateway 172.236.104.1 acquired from 23.194.118.61
Jul 15 05:10:43.370826 ignition[841]: PUT http://169.254.169.254/v1/token: attempt #3
Jul 15 05:10:43.462010 ignition[841]: PUT result: OK
Jul 15 05:10:43.462817 ignition[841]: GET http://169.254.169.254/v1/user-data: attempt #1
Jul 15 05:10:43.574904 ignition[841]: GET result: OK
Jul 15 05:10:43.575095 ignition[841]: parsing config with SHA512: b8dc41cd735ed7e0cc88d7234b10def9ea596cb5d4426938c7587d39730c0ae3b57d13ae8f61b4d10c54c6879565eeac1b7d526eec3bb46549d7564a4555be7a
Jul 15 05:10:43.578709 unknown[841]: fetched base config from "system"
Jul 15 05:10:43.579308 unknown[841]: fetched base config from "system"
Jul 15 05:10:43.579317 unknown[841]: fetched user config from "akamai"
Jul 15 05:10:43.579553 ignition[841]: fetch: fetch complete
Jul 15 05:10:43.579558 ignition[841]: fetch: fetch passed
Jul 15 05:10:43.579608 ignition[841]: Ignition finished successfully
Jul 15 05:10:43.583429 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 15 05:10:43.584837 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 05:10:43.624124 ignition[849]: Ignition 2.21.0
Jul 15 05:10:43.624141 ignition[849]: Stage: kargs
Jul 15 05:10:43.633542 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 05:10:43.624319 ignition[849]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:10:43.624330 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:10:43.625063 ignition[849]: kargs: kargs passed
Jul 15 05:10:43.625101 ignition[849]: Ignition finished successfully
Jul 15 05:10:43.645341 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 05:10:43.667415 ignition[856]: Ignition 2.21.0
Jul 15 05:10:43.667426 ignition[856]: Stage: disks
Jul 15 05:10:43.667519 ignition[856]: no configs at "/usr/lib/ignition/base.d"
Jul 15 05:10:43.667529 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:10:43.668402 ignition[856]: disks: disks passed
Jul 15 05:10:43.668435 ignition[856]: Ignition finished successfully
Jul 15 05:10:43.671363 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 05:10:43.672861 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 05:10:43.673467 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 05:10:43.674626 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 05:10:43.675815 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 05:10:43.676807 systemd[1]: Reached target basic.target - Basic System.
Jul 15 05:10:43.678679 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 05:10:43.704399 systemd-fsck[864]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 15 05:10:43.708849 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 05:10:43.711285 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 05:10:43.813211 kernel: EXT4-fs (sda9): mounted filesystem 277c3938-5262-4ab1-8fa3-62fde82f8257 r/w with ordered data mode. Quota mode: none.
Jul 15 05:10:43.814389 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 05:10:43.815786 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 05:10:43.817630 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 05:10:43.820292 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 05:10:43.821492 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 05:10:43.821539 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 05:10:43.821565 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 05:10:43.833155 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 05:10:43.835524 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 05:10:43.843210 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (872)
Jul 15 05:10:43.845223 kernel: BTRFS info (device sda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:10:43.845243 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:10:43.847997 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 05:10:43.852337 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 05:10:43.884745 initrd-setup-root[897]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 05:10:43.890037 initrd-setup-root[904]: cut: /sysroot/etc/group: No such file or directory
Jul 15 05:10:43.896135 initrd-setup-root[911]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 05:10:43.900409 initrd-setup-root[918]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 05:10:43.994439 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 05:10:43.997178 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 05:10:43.998718 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 05:10:44.017118 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 05:10:44.019687 kernel: BTRFS info (device sda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:10:44.034988 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 05:10:44.045013 ignition[985]: INFO : Ignition 2.21.0
Jul 15 05:10:44.045013 ignition[985]: INFO : Stage: mount
Jul 15 05:10:44.045013 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 05:10:44.045013 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:10:44.045013 ignition[985]: INFO : mount: mount passed
Jul 15 05:10:44.045013 ignition[985]: INFO : Ignition finished successfully
Jul 15 05:10:44.048662 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 05:10:44.050913 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 05:10:44.713464 systemd-networkd[837]: eth0: Gained IPv6LL
Jul 15 05:10:44.815895 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 05:10:44.836241 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (998)
Jul 15 05:10:44.839432 kernel: BTRFS info (device sda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f
Jul 15 05:10:44.839491 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 05:10:44.841731 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 05:10:44.845512 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 05:10:44.874568 ignition[1014]: INFO : Ignition 2.21.0
Jul 15 05:10:44.874568 ignition[1014]: INFO : Stage: files
Jul 15 05:10:44.876421 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 05:10:44.876421 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:10:44.876421 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 05:10:44.876421 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 05:10:44.876421 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 05:10:44.880996 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 05:10:44.881636 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 05:10:44.881636 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 05:10:44.881616 unknown[1014]: wrote ssh authorized keys file for user: core
Jul 15 05:10:44.883483 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 15 05:10:44.883483 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 15 05:10:45.172596 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 05:10:45.887996 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 15 05:10:45.889528 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 05:10:45.889528 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 15 05:10:46.225276 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 15 05:10:46.283021 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 05:10:46.283909 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 05:10:46.283909 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 05:10:46.283909 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 05:10:46.283909 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 05:10:46.283909 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 05:10:46.283909 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 05:10:46.283909 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 05:10:46.283909 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 05:10:46.296281 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 05:10:46.296281 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 05:10:46.296281 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 05:10:46.296281 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 05:10:46.296281 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 05:10:46.296281 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 15 05:10:46.656829 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 15 05:10:46.872690 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 05:10:46.872690 ignition[1014]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 15 05:10:46.874466 ignition[1014]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 05:10:46.875344 ignition[1014]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 05:10:46.875344 ignition[1014]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 15 05:10:46.875344 ignition[1014]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 15 05:10:46.875344 ignition[1014]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 15 05:10:46.881019 ignition[1014]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 15 05:10:46.881019 ignition[1014]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 15 05:10:46.881019 ignition[1014]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 05:10:46.881019 ignition[1014]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 05:10:46.881019 ignition[1014]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 05:10:46.881019 ignition[1014]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 05:10:46.881019 ignition[1014]: INFO : files: files passed
Jul 15 05:10:46.881019 ignition[1014]: INFO : Ignition finished successfully
Jul 15 05:10:46.878954 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 05:10:46.881376 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 05:10:46.885887 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 05:10:46.897430 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 05:10:46.897546 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 05:10:46.904825 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 05:10:46.904825 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 05:10:46.906691 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 05:10:46.908628 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 05:10:46.909895 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 05:10:46.911173 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 05:10:46.964247 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 05:10:46.964362 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 05:10:46.965826 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 05:10:46.966581 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 05:10:46.967769 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 05:10:46.968371 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 05:10:46.991610 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 05:10:46.993620 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 05:10:47.010110 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 05:10:47.010878 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 05:10:47.012053 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 05:10:47.013163 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 05:10:47.013320 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 05:10:47.014487 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 05:10:47.015268 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 05:10:47.016379 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 05:10:47.017429 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 05:10:47.018414 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 05:10:47.019545 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 05:10:47.020708 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 05:10:47.021827 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 05:10:47.022997 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 05:10:47.024107 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 05:10:47.025288 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 05:10:47.026337 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 05:10:47.026473 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 05:10:47.027617 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 05:10:47.028402 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 05:10:47.029410 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 05:10:47.029691 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 05:10:47.030574 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 05:10:47.030706 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 05:10:47.032118 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 05:10:47.032278 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 05:10:47.033472 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 05:10:47.033607 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 05:10:47.036279 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 05:10:47.039347 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 05:10:47.039866 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 05:10:47.040009 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 05:10:47.041678 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 05:10:47.041812 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 05:10:47.047666 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 05:10:47.050329 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 05:10:47.068212 ignition[1069]: INFO : Ignition 2.21.0
Jul 15 05:10:47.068212 ignition[1069]: INFO : Stage: umount
Jul 15 05:10:47.071256 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 05:10:47.071256 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 15 05:10:47.070492 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 05:10:47.103368 ignition[1069]: INFO : umount: umount passed
Jul 15 05:10:47.103368 ignition[1069]: INFO : Ignition finished successfully
Jul 15 05:10:47.101750 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 05:10:47.101864 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 05:10:47.104400 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 05:10:47.104482 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 05:10:47.106165 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 05:10:47.107125 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 05:10:47.108105 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 15 05:10:47.108147 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 15 05:10:47.109298 systemd[1]: Stopped target network.target - Network.
Jul 15 05:10:47.110296 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 05:10:47.110357 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 05:10:47.111356 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 05:10:47.112273 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 05:10:47.116242 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 05:10:47.116814 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 05:10:47.117960 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 05:10:47.118953 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 05:10:47.118999 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 05:10:47.119912 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 05:10:47.119953 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 05:10:47.120837 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 05:10:47.120889 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 05:10:47.121818 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 05:10:47.121867 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 05:10:47.122889 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 05:10:47.123909 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 05:10:47.125323 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 05:10:47.125436 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 05:10:47.126502 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 05:10:47.126590 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 05:10:47.133067 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 05:10:47.133312 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 05:10:47.135827 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 05:10:47.136050 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 05:10:47.136164 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 05:10:47.139970 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 05:10:47.140711 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 05:10:47.141964 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 05:10:47.142016 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 05:10:47.143886 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 05:10:47.144473 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 05:10:47.144535 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 05:10:47.145126 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 05:10:47.145172 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 05:10:47.146619 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 05:10:47.146668 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 05:10:47.147381 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 05:10:47.147430 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 05:10:47.149978 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 05:10:47.152215 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 05:10:47.152281 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 05:10:47.166103 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 05:10:47.172380 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 05:10:47.173132 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 05:10:47.173173 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 05:10:47.173756 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 05:10:47.173788 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 05:10:47.175156 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 05:10:47.175215 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 05:10:47.176799 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 05:10:47.176841 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 05:10:47.178026 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 05:10:47.178074 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 05:10:47.181270 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 05:10:47.182336 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 05:10:47.182380 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 05:10:47.183729 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 05:10:47.183774 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 05:10:47.185198 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 15 05:10:47.185242 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 05:10:47.186276 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 05:10:47.186317 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 05:10:47.187155 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 05:10:47.187212 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:10:47.190412 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 15 05:10:47.190463 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 15 05:10:47.190500 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 05:10:47.190539 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 05:10:47.190863 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 05:10:47.190961 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 05:10:47.195631 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 05:10:47.195728 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 05:10:47.197291 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 05:10:47.198727 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 05:10:47.219797 systemd[1]: Switching root.
Jul 15 05:10:47.262562 systemd-journald[207]: Journal stopped
Jul 15 05:10:48.193890 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Jul 15 05:10:48.193911 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 05:10:48.193921 kernel: SELinux: policy capability open_perms=1
Jul 15 05:10:48.193931 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 05:10:48.193938 kernel: SELinux: policy capability always_check_network=0
Jul 15 05:10:48.193945 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 05:10:48.193953 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 05:10:48.193961 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 05:10:48.193968 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 05:10:48.193975 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 05:10:48.193985 kernel: audit: type=1403 audit(1752556247.395:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 05:10:48.193993 systemd[1]: Successfully loaded SELinux policy in 60.286ms.
Jul 15 05:10:48.194002 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.082ms.
Jul 15 05:10:48.194011 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 05:10:48.194020 systemd[1]: Detected virtualization kvm.
Jul 15 05:10:48.194030 systemd[1]: Detected architecture x86-64.
Jul 15 05:10:48.194038 systemd[1]: Detected first boot.
Jul 15 05:10:48.194046 systemd[1]: Initializing machine ID from random generator.
Jul 15 05:10:48.194055 zram_generator::config[1112]: No configuration found.
Jul 15 05:10:48.194063 kernel: Guest personality initialized and is inactive
Jul 15 05:10:48.194071 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 15 05:10:48.194078 kernel: Initialized host personality
Jul 15 05:10:48.194088 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 05:10:48.194096 systemd[1]: Populated /etc with preset unit settings.
Jul 15 05:10:48.194105 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 05:10:48.194113 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 05:10:48.194121 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 05:10:48.194129 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 05:10:48.194137 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 05:10:48.194146 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 05:10:48.194155 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 05:10:48.194163 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 05:10:48.194171 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 05:10:48.196215 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 05:10:48.196235 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 05:10:48.196246 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 05:10:48.196260 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 05:10:48.196270 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 05:10:48.196280 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 05:10:48.196291 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 05:10:48.196304 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 05:10:48.196314 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 05:10:48.196324 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 15 05:10:48.196334 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 05:10:48.196346 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 05:10:48.196356 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 05:10:48.196366 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 05:10:48.196375 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 05:10:48.196385 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 05:10:48.196395 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 05:10:48.196405 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 05:10:48.196415 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 05:10:48.196426 systemd[1]: Reached target swap.target - Swaps.
Jul 15 05:10:48.196436 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 05:10:48.196446 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 05:10:48.196456 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 05:10:48.196466 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 05:10:48.196478 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 05:10:48.196488 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 05:10:48.196499 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 05:10:48.196511 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 05:10:48.196522 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 05:10:48.196534 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 05:10:48.196545 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:10:48.196559 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 05:10:48.196573 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 05:10:48.196585 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 05:10:48.196597 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 05:10:48.196609 systemd[1]: Reached target machines.target - Containers.
Jul 15 05:10:48.196621 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 05:10:48.196632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 05:10:48.196644 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 05:10:48.196656 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 05:10:48.196671 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 05:10:48.196682 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 05:10:48.196694 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 05:10:48.196704 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 05:10:48.196714 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 05:10:48.196724 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 05:10:48.196735 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 05:10:48.196744 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 05:10:48.196754 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 05:10:48.196765 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 05:10:48.196774 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 05:10:48.196783 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 05:10:48.196791 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 05:10:48.196799 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 05:10:48.196807 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 05:10:48.196816 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 05:10:48.196824 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 05:10:48.196834 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 05:10:48.196843 systemd[1]: Stopped verity-setup.service.
Jul 15 05:10:48.196851 kernel: loop: module loaded
Jul 15 05:10:48.196860 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:10:48.196868 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 05:10:48.196876 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 05:10:48.196885 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 05:10:48.196893 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 05:10:48.196903 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 05:10:48.196911 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 05:10:48.196919 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 05:10:48.196927 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 05:10:48.196936 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 05:10:48.196944 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 05:10:48.196953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 05:10:48.196961 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 05:10:48.196970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 05:10:48.196991 kernel: fuse: init (API version 7.41)
Jul 15 05:10:48.197013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 05:10:48.197040 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 05:10:48.197062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 05:10:48.197088 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 05:10:48.197111 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 05:10:48.197132 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 05:10:48.197155 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 05:10:48.197177 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 05:10:48.197406 systemd-journald[1200]: Collecting audit messages is disabled.
Jul 15 05:10:48.197428 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 05:10:48.197441 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 05:10:48.197450 systemd-journald[1200]: Journal started
Jul 15 05:10:48.197473 systemd-journald[1200]: Runtime Journal (/run/log/journal/d6daa8b08ce0463295957f6d8f4088ae) is 8M, max 78.5M, 70.5M free.
Jul 15 05:10:47.882218 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 05:10:47.896038 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 15 05:10:47.896630 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 05:10:48.200206 kernel: ACPI: bus type drm_connector registered
Jul 15 05:10:48.200226 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 05:10:48.203906 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 05:10:48.210213 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 05:10:48.212251 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 05:10:48.215215 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 05:10:48.222102 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 05:10:48.222127 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 05:10:48.228410 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 05:10:48.228436 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 05:10:48.234965 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 05:10:48.234993 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 05:10:48.240242 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 05:10:48.244221 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 05:10:48.250839 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 05:10:48.253335 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 05:10:48.255642 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 05:10:48.255865 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 05:10:48.256577 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 05:10:48.257591 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 05:10:48.262385 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 05:10:48.277012 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 05:10:48.279295 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 05:10:48.281332 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 05:10:48.294168 kernel: loop0: detected capacity change from 0 to 114000
Jul 15 05:10:48.301593 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 05:10:48.321125 systemd-journald[1200]: Time spent on flushing to /var/log/journal/d6daa8b08ce0463295957f6d8f4088ae is 32.832ms for 1007 entries.
Jul 15 05:10:48.321125 systemd-journald[1200]: System Journal (/var/log/journal/d6daa8b08ce0463295957f6d8f4088ae) is 8M, max 195.6M, 187.6M free.
Jul 15 05:10:48.366936 systemd-journald[1200]: Received client request to flush runtime journal.
Jul 15 05:10:48.367079 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 05:10:48.367112 kernel: loop1: detected capacity change from 0 to 221472
Jul 15 05:10:48.323741 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 05:10:48.337711 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Jul 15 05:10:48.337724 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Jul 15 05:10:48.340738 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 05:10:48.346415 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 05:10:48.349027 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 05:10:48.372551 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 05:10:48.400096 kernel: loop2: detected capacity change from 0 to 8
Jul 15 05:10:48.403297 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 05:10:48.408313 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 05:10:48.421210 kernel: loop3: detected capacity change from 0 to 146488
Jul 15 05:10:48.445319 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jul 15 05:10:48.445543 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jul 15 05:10:48.448994 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 05:10:48.467233 kernel: loop4: detected capacity change from 0 to 114000
Jul 15 05:10:48.482250 kernel: loop5: detected capacity change from 0 to 221472
Jul 15 05:10:48.511265 kernel: loop6: detected capacity change from 0 to 8
Jul 15 05:10:48.517213 kernel: loop7: detected capacity change from 0 to 146488
Jul 15 05:10:48.536895 (sd-merge)[1263]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Jul 15 05:10:48.538128 (sd-merge)[1263]: Merged extensions into '/usr'.
Jul 15 05:10:48.544068 systemd[1]: Reload requested from client PID 1218 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 05:10:48.544177 systemd[1]: Reloading...
Jul 15 05:10:48.631266 zram_generator::config[1285]: No configuration found.
Jul 15 05:10:48.764518 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 05:10:48.803484 ldconfig[1214]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 05:10:48.836638 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 05:10:48.837140 systemd[1]: Reloading finished in 292 ms.
Jul 15 05:10:48.850991 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 05:10:48.863374 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 05:10:48.877314 systemd[1]: Starting ensure-sysext.service...
Jul 15 05:10:48.881347 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 05:10:48.895809 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 15 05:10:48.901622 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 05:10:48.906062 systemd[1]: Reload requested from client PID 1332 ('systemctl') (unit ensure-sysext.service)...
Jul 15 05:10:48.906082 systemd[1]: Reloading...
Jul 15 05:10:48.913750 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 05:10:48.914123 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 05:10:48.914513 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 05:10:48.915576 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 05:10:48.917387 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 05:10:48.917701 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Jul 15 05:10:48.918159 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Jul 15 05:10:48.927491 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 05:10:48.928230 systemd-tmpfiles[1333]: Skipping /boot
Jul 15 05:10:48.948076 systemd-udevd[1336]: Using default interface naming scheme 'v255'.
Jul 15 05:10:48.950484 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 05:10:48.950665 systemd-tmpfiles[1333]: Skipping /boot
Jul 15 05:10:48.962216 zram_generator::config[1357]: No configuration found.
Jul 15 05:10:49.137450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 05:10:49.209838 systemd[1]: Reloading finished in 303 ms.
Jul 15 05:10:49.217576 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 05:10:49.219583 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 05:10:49.236804 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 15 05:10:49.239712 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 05:10:49.249222 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 15 05:10:49.249253 kernel: mousedev: PS/2 mouse device common for all mice
Jul 15 05:10:49.258978 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 05:10:49.262376 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 05:10:49.265743 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 05:10:49.267782 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 05:10:49.271794 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 15 05:10:49.282244 kernel: ACPI: button: Power Button [PWRF]
Jul 15 05:10:49.301308 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 15 05:10:49.302910 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:10:49.303044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 05:10:49.305994 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 05:10:49.312513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 05:10:49.315268 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 05:10:49.315969 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 05:10:49.316077 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 05:10:49.316172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:10:49.325606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:10:49.325801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 05:10:49.325980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 05:10:49.326088 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 05:10:49.326628 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:10:49.332498 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:10:49.333130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 05:10:49.333776 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 15 05:10:49.334087 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 15 05:10:49.337584 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 05:10:49.338533 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 05:10:49.338619 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 05:10:49.338726 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 05:10:49.340351 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 15 05:10:49.348723 systemd[1]: Finished ensure-sysext.service.
Jul 15 05:10:49.349734 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 15 05:10:49.351646 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 05:10:49.352036 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 05:10:49.357580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 05:10:49.358326 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 05:10:49.365556 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 05:10:49.370314 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 15 05:10:49.372608 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 15 05:10:49.393779 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 05:10:49.395505 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 05:10:49.420029 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 05:10:49.420540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 05:10:49.422140 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 15 05:10:49.423924 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 05:10:49.425710 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 15 05:10:49.430785 augenrules[1485]: No rules
Jul 15 05:10:49.432370 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 05:10:49.432728 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 05:10:49.437118 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 15 05:10:49.438109 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 05:10:49.549658 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 15 05:10:49.552320 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 15 05:10:49.563607 kernel: EDAC MC: Ver: 3.0.0
Jul 15 05:10:49.569249 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 05:10:49.592894 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 15 05:10:49.685908 systemd-resolved[1444]: Positive Trust Anchors:
Jul 15 05:10:49.686166 systemd-resolved[1444]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 05:10:49.686244 systemd-resolved[1444]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 05:10:49.689758 systemd-resolved[1444]: Defaulting to hostname 'linux'.
Jul 15 05:10:49.691489 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 05:10:49.692125 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 05:10:49.694038 systemd-networkd[1443]: lo: Link UP
Jul 15 05:10:49.694055 systemd-networkd[1443]: lo: Gained carrier
Jul 15 05:10:49.700137 systemd-networkd[1443]: Enumeration completed
Jul 15 05:10:49.700244 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 05:10:49.700812 systemd[1]: Reached target network.target - Network.
Jul 15 05:10:49.703586 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 05:10:49.703603 systemd-networkd[1443]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 05:10:49.705409 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 15 05:10:49.708374 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 15 05:10:49.713207 systemd-networkd[1443]: eth0: Link UP
Jul 15 05:10:49.713406 systemd-networkd[1443]: eth0: Gained carrier
Jul 15 05:10:49.713430 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 05:10:49.738978 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 15 05:10:49.770811 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 15 05:10:49.778387 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 05:10:49.780123 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 05:10:49.780872 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 15 05:10:49.781495 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 15 05:10:49.782067 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 15 05:10:49.782814 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 15 05:10:49.783393 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 05:10:49.783430 systemd[1]: Reached target paths.target - Path Units.
Jul 15 05:10:49.784059 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 05:10:49.784756 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 15 05:10:49.785431 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 15 05:10:49.785991 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 05:10:49.788076 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 15 05:10:49.790303 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 15 05:10:49.793010 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 15 05:10:49.793803 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 15 05:10:49.794383 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 15 05:10:49.797374 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 15 05:10:49.798224 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 15 05:10:49.799576 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 15 05:10:49.800976 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 05:10:49.801501 systemd[1]: Reached target basic.target - Basic System.
Jul 15 05:10:49.802030 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 15 05:10:49.802065 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 15 05:10:49.803043 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 15 05:10:49.806289 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 15 05:10:49.812289 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 15 05:10:49.813818 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 15 05:10:49.818357 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 15 05:10:49.821691 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 15 05:10:49.822272 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 15 05:10:49.832019 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 15 05:10:49.838477 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 15 05:10:49.841241 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 15 05:10:49.844735 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 15 05:10:49.849254 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 15 05:10:49.853201 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing passwd entry cache
Jul 15 05:10:49.854463 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 15 05:10:49.855483 oslogin_cache_refresh[1533]: Refreshing passwd entry cache
Jul 15 05:10:49.857374 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 15 05:10:49.859496 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting users, quitting
Jul 15 05:10:49.859712 oslogin_cache_refresh[1533]: Failure getting users, quitting
Jul 15 05:10:49.861220 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 15 05:10:49.861220 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing group entry cache
Jul 15 05:10:49.861220 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting groups, quitting
Jul 15 05:10:49.861220 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 15 05:10:49.860232 oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 15 05:10:49.860273 oslogin_cache_refresh[1533]: Refreshing group entry cache
Jul 15 05:10:49.860914 oslogin_cache_refresh[1533]: Failure getting groups, quitting
Jul 15 05:10:49.860924 oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 15 05:10:49.866219 jq[1531]: false
Jul 15 05:10:49.869966 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 15 05:10:49.873254 systemd[1]: Starting update-engine.service - Update Engine...
Jul 15 05:10:49.877472 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 15 05:10:49.886417 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 15 05:10:49.887445 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 05:10:49.887726 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 15 05:10:49.887849 extend-filesystems[1532]: Found /dev/sda6
Jul 15 05:10:49.888053 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 15 05:10:49.889338 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 15 05:10:49.891650 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 05:10:49.893008 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 15 05:10:49.914660 jq[1544]: true
Jul 15 05:10:49.918508 extend-filesystems[1532]: Found /dev/sda9
Jul 15 05:10:49.931822 extend-filesystems[1532]: Checking size of /dev/sda9
Jul 15 05:10:49.943554 (ntainerd)[1565]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 15 05:10:49.949332 coreos-metadata[1528]: Jul 15 05:10:49.947 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jul 15 05:10:49.954071 update_engine[1542]: I20250715 05:10:49.950774 1542 main.cc:92] Flatcar Update Engine starting
Jul 15 05:10:49.957754 tar[1549]: linux-amd64/helm
Jul 15 05:10:49.958288 jq[1562]: true
Jul 15 05:10:49.980911 extend-filesystems[1532]: Resized partition /dev/sda9
Jul 15 05:10:49.983683 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 05:10:49.983948 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 15 05:10:49.989391 extend-filesystems[1578]: resize2fs 1.47.2 (1-Jan-2025)
Jul 15 05:10:50.004980 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Jul 15 05:10:50.006856 dbus-daemon[1529]: [system] SELinux support is enabled
Jul 15 05:10:50.019217 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 15 05:10:50.024586 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 05:10:50.024851 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 15 05:10:50.025727 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 05:10:50.025749 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 15 05:10:50.038957 systemd[1]: Started update-engine.service - Update Engine.
Jul 15 05:10:50.041529 update_engine[1542]: I20250715 05:10:50.041479 1542 update_check_scheduler.cc:74] Next update check in 9m19s
Jul 15 05:10:50.042971 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 15 05:10:50.076589 systemd-logind[1540]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 15 05:10:50.076614 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 15 05:10:50.083808 systemd-logind[1540]: New seat seat0.
Jul 15 05:10:50.086071 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 15 05:10:50.112140 bash[1595]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 05:10:50.112826 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 15 05:10:50.119296 systemd[1]: Starting sshkeys.service...
Jul 15 05:10:50.161294 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 15 05:10:50.167248 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 15 05:10:50.208256 systemd-networkd[1443]: eth0: DHCPv4 address 172.236.104.60/24, gateway 172.236.104.1 acquired from 23.194.118.61
Jul 15 05:10:50.208578 dbus-daemon[1529]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1443 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 15 05:10:50.210551 systemd-timesyncd[1472]: Network configuration changed, trying to establish connection.
Jul 15 05:10:50.220241 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 05:10:50.226313 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 15 05:10:50.244155 containerd[1565]: time="2025-07-15T05:10:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 15 05:10:50.244730 containerd[1565]: time="2025-07-15T05:10:50.244637582Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 15 05:10:50.256737 containerd[1565]: time="2025-07-15T05:10:50.256086294Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.9µs"
Jul 15 05:10:50.256737 containerd[1565]: time="2025-07-15T05:10:50.256116604Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 15 05:10:50.256737 containerd[1565]: time="2025-07-15T05:10:50.256144324Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 15 05:10:50.256737 containerd[1565]: time="2025-07-15T05:10:50.256322294Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 15 05:10:50.256737 containerd[1565]: time="2025-07-15T05:10:50.256337634Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 15 05:10:50.256737 containerd[1565]: time="2025-07-15T05:10:50.256359984Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 05:10:50.256737 containerd[1565]: time="2025-07-15T05:10:50.256431064Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 05:10:50.256737 containerd[1565]: time="2025-07-15T05:10:50.256441744Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 05:10:50.256737 containerd[1565]: time="2025-07-15T05:10:50.256640644Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 05:10:50.256737 containerd[1565]: time="2025-07-15T05:10:50.256654034Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 05:10:50.256737 containerd[1565]: time="2025-07-15T05:10:50.256674314Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 05:10:50.256737 containerd[1565]: time="2025-07-15T05:10:50.256682864Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 15 05:10:50.256949 containerd[1565]: time="2025-07-15T05:10:50.256776064Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 15 05:10:50.257120 containerd[1565]: time="2025-07-15T05:10:50.257022785Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 05:10:50.257120 containerd[1565]: time="2025-07-15T05:10:50.257060455Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 05:10:50.257120 containerd[1565]: time="2025-07-15T05:10:50.257070045Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 15 05:10:50.257174 containerd[1565]: time="2025-07-15T05:10:50.257140465Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 15 05:10:50.258125 containerd[1565]: time="2025-07-15T05:10:50.257567115Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 15 05:10:50.258125 containerd[1565]: time="2025-07-15T05:10:50.257683145Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271451179Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271559939Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271608359Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271621949Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271633009Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271641299Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271655579Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271665869Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271674969Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271683979Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271696039Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271705859Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271813889Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 15 05:10:50.275216 containerd[1565]: time="2025-07-15T05:10:50.271835869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 15 05:10:50.275439 containerd[1565]: time="2025-07-15T05:10:50.271847759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 15 05:10:50.275439 containerd[1565]: time="2025-07-15T05:10:50.271856519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 15 05:10:50.275439 containerd[1565]: time="2025-07-15T05:10:50.271865819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 15 05:10:50.275439 containerd[1565]: time="2025-07-15T05:10:50.271876189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 15 05:10:50.275439 containerd[1565]: time="2025-07-15T05:10:50.271885389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 15 05:10:50.275439 containerd[1565]: time="2025-07-15T05:10:50.271899279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 15 05:10:50.275439 containerd[1565]: time="2025-07-15T05:10:50.271915079Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 15 05:10:50.275439 containerd[1565]: time="2025-07-15T05:10:50.271924259Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 15 05:10:50.275439 containerd[1565]: time="2025-07-15T05:10:50.271937259Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 15 05:10:50.275439 containerd[1565]: time="2025-07-15T05:10:50.271992070Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 15 05:10:50.275439 containerd[1565]: time="2025-07-15T05:10:50.272002680Z" level=info msg="Start snapshots syncer"
Jul 15 05:10:50.275439 containerd[1565]: time="2025-07-15T05:10:50.272013010Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 15 05:10:50.275638 containerd[1565]: time="2025-07-15T05:10:50.272459630Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 15 05:10:50.275638 containerd[1565]: time="2025-07-15T05:10:50.272496220Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 15 05:10:50.275734 containerd[1565]: time="2025-07-15T05:10:50.272550580Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 15 05:10:50.275734 containerd[1565]: time="2025-07-15T05:10:50.272656910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 15 05:10:50.275734 containerd[1565]: time="2025-07-15T05:10:50.272682160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 15 05:10:50.275734 containerd[1565]: time="2025-07-15T05:10:50.272691240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 15 05:10:50.275734 containerd[1565]: time="2025-07-15T05:10:50.272699150Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 15 05:10:50.275734 containerd[1565]: time="2025-07-15T05:10:50.272710470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 15 05:10:50.275734 containerd[1565]: time="2025-07-15T05:10:50.272719720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 15 05:10:50.275734 containerd[1565]: time="2025-07-15T05:10:50.272728530Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 15 05:10:50.275734 containerd[1565]: time="2025-07-15T05:10:50.272749740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 15 05:10:50.275734 containerd[1565]: time="2025-07-15T05:10:50.272758670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 15 05:10:50.275734 containerd[1565]: time="2025-07-15T05:10:50.272766990Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 15 05:10:50.275734 containerd[1565]: time="2025-07-15T05:10:50.272794730Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 05:10:50.284896 containerd[1565]: time="2025-07-15T05:10:50.282623560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 05:10:50.284896 containerd[1565]: time="2025-07-15T05:10:50.282654390Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 05:10:50.284896 containerd[1565]: time="2025-07-15T05:10:50.282673110Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 05:10:50.284896 containerd[1565]: time="2025-07-15T05:10:50.282685180Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 15 05:10:50.284896 containerd[1565]: time="2025-07-15T05:10:50.282736410Z"
level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 05:10:50.284896 containerd[1565]: time="2025-07-15T05:10:50.282749730Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 05:10:50.284896 containerd[1565]: time="2025-07-15T05:10:50.282771560Z" level=info msg="runtime interface created" Jul 15 05:10:50.284896 containerd[1565]: time="2025-07-15T05:10:50.282777120Z" level=info msg="created NRI interface" Jul 15 05:10:50.284896 containerd[1565]: time="2025-07-15T05:10:50.282980291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 05:10:50.284896 containerd[1565]: time="2025-07-15T05:10:50.283584041Z" level=info msg="Connect containerd service" Jul 15 05:10:50.284896 containerd[1565]: time="2025-07-15T05:10:50.283648091Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 05:10:50.286372 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Jul 15 05:10:50.295836 containerd[1565]: time="2025-07-15T05:10:50.295765923Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:10:50.298707 extend-filesystems[1578]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 15 05:10:50.298707 extend-filesystems[1578]: old_desc_blocks = 1, new_desc_blocks = 10 Jul 15 05:10:50.298707 extend-filesystems[1578]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Jul 15 05:10:50.308250 extend-filesystems[1532]: Resized filesystem in /dev/sda9 Jul 15 05:10:50.310242 coreos-metadata[1602]: Jul 15 05:10:50.302 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jul 15 05:10:50.299816 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jul 15 05:10:50.300408 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 15 05:10:50.340668 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 15 05:10:50.345323 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 15 05:10:50.370606 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 05:10:50.371264 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 15 05:10:50.377883 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 15 05:10:50.394974 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 15 05:10:50.412567 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 15 05:10:50.417861 dbus-daemon[1529]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1607 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 15 05:10:50.421511 coreos-metadata[1602]: Jul 15 05:10:50.421 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Jul 15 05:10:50.425571 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 15 05:10:50.434401 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 15 05:10:50.437734 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 15 05:10:51.527053 systemd-timesyncd[1472]: Contacted time server 66.59.198.178:123 (0.flatcar.pool.ntp.org).
Jul 15 05:10:51.527160 systemd-timesyncd[1472]: Initial clock synchronization to Tue 2025-07-15 05:10:51.526957 UTC.
Jul 15 05:10:51.527674 systemd-resolved[1444]: Clock change detected. Flushing caches.
Jul 15 05:10:51.529540 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 15 05:10:51.530609 systemd[1]: Reached target getty.target - Login Prompts.
Jul 15 05:10:51.537631 locksmithd[1588]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 15 05:10:51.576953 containerd[1565]: time="2025-07-15T05:10:51.576874701Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 15 05:10:51.576953 containerd[1565]: time="2025-07-15T05:10:51.576952042Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 15 05:10:51.577014 containerd[1565]: time="2025-07-15T05:10:51.576982732Z" level=info msg="Start subscribing containerd event"
Jul 15 05:10:51.577033 containerd[1565]: time="2025-07-15T05:10:51.577010212Z" level=info msg="Start recovering state"
Jul 15 05:10:51.581209 containerd[1565]: time="2025-07-15T05:10:51.581176896Z" level=info msg="Start event monitor"
Jul 15 05:10:51.581241 containerd[1565]: time="2025-07-15T05:10:51.581226046Z" level=info msg="Start cni network conf syncer for default"
Jul 15 05:10:51.581241 containerd[1565]: time="2025-07-15T05:10:51.581239296Z" level=info msg="Start streaming server"
Jul 15 05:10:51.581288 containerd[1565]: time="2025-07-15T05:10:51.581257626Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 15 05:10:51.581288 containerd[1565]: time="2025-07-15T05:10:51.581266976Z" level=info msg="runtime interface starting up..."
Jul 15 05:10:51.581288 containerd[1565]: time="2025-07-15T05:10:51.581274376Z" level=info msg="starting plugins..."
Jul 15 05:10:51.582247 containerd[1565]: time="2025-07-15T05:10:51.582217807Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 15 05:10:51.584314 systemd[1]: Started containerd.service - containerd container runtime.
Jul 15 05:10:51.585057 containerd[1565]: time="2025-07-15T05:10:51.585030670Z" level=info msg="containerd successfully booted in 0.259316s"
Jul 15 05:10:51.625419 polkitd[1640]: Started polkitd version 126
Jul 15 05:10:51.629373 polkitd[1640]: Loading rules from directory /etc/polkit-1/rules.d
Jul 15 05:10:51.629618 polkitd[1640]: Loading rules from directory /run/polkit-1/rules.d
Jul 15 05:10:51.629662 polkitd[1640]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 15 05:10:51.629866 polkitd[1640]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Jul 15 05:10:51.629892 polkitd[1640]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 15 05:10:51.629927 polkitd[1640]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 15 05:10:51.630536 polkitd[1640]: Finished loading, compiling and executing 2 rules
Jul 15 05:10:51.631310 systemd[1]: Started polkit.service - Authorization Manager.
Jul 15 05:10:51.631527 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 15 05:10:51.631746 polkitd[1640]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 15 05:10:51.640711 systemd-hostnamed[1607]: Hostname set to <172-236-104-60> (transient)
Jul 15 05:10:51.640812 systemd-resolved[1444]: System hostname changed to '172-236-104-60'.
Jul 15 05:10:51.652977 coreos-metadata[1602]: Jul 15 05:10:51.652 INFO Fetch successful
Jul 15 05:10:51.674944 update-ssh-keys[1659]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 05:10:51.676466 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 15 05:10:51.681390 systemd[1]: Finished sshkeys.service.
Jul 15 05:10:51.706263 tar[1549]: linux-amd64/LICENSE
Jul 15 05:10:51.706263 tar[1549]: linux-amd64/README.md
Jul 15 05:10:51.723371 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 15 05:10:52.043392 coreos-metadata[1528]: Jul 15 05:10:52.043 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Jul 15 05:10:52.144799 coreos-metadata[1528]: Jul 15 05:10:52.144 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Jul 15 05:10:52.430579 coreos-metadata[1528]: Jul 15 05:10:52.430 INFO Fetch successful
Jul 15 05:10:52.430742 coreos-metadata[1528]: Jul 15 05:10:52.430 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Jul 15 05:10:52.687459 coreos-metadata[1528]: Jul 15 05:10:52.687 INFO Fetch successful
Jul 15 05:10:52.709322 systemd-networkd[1443]: eth0: Gained IPv6LL
Jul 15 05:10:52.712884 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 15 05:10:52.715590 systemd[1]: Reached target network-online.target - Network is Online.
Jul 15 05:10:52.720210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 05:10:52.724331 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 15 05:10:52.752406 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 15 05:10:52.787019 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 15 05:10:52.788789 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 15 05:10:53.584861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 05:10:53.586053 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 15 05:10:53.588188 systemd[1]: Startup finished in 2.366s (kernel) + 7.752s (initrd) + 5.168s (userspace) = 15.288s.
Jul 15 05:10:53.591326 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 05:10:54.062330 kubelet[1703]: E0715 05:10:54.062216 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 05:10:54.067539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 05:10:54.067727 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 05:10:54.068556 systemd[1]: kubelet.service: Consumed 835ms CPU time, 263.6M memory peak.
Jul 15 05:10:54.816921 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 15 05:10:54.818241 systemd[1]: Started sshd@0-172.236.104.60:22-139.178.68.195:57348.service - OpenSSH per-connection server daemon (139.178.68.195:57348).
Jul 15 05:10:55.164847 sshd[1715]: Accepted publickey for core from 139.178.68.195 port 57348 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:10:55.166898 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:10:55.172342 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 15 05:10:55.173662 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 15 05:10:55.184729 systemd-logind[1540]: New session 1 of user core.
Jul 15 05:10:55.193269 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 15 05:10:55.196063 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 15 05:10:55.208382 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 15 05:10:55.211322 systemd-logind[1540]: New session c1 of user core.
Jul 15 05:10:55.330086 systemd[1720]: Queued start job for default target default.target.
Jul 15 05:10:55.338185 systemd[1720]: Created slice app.slice - User Application Slice.
Jul 15 05:10:55.338208 systemd[1720]: Reached target paths.target - Paths.
Jul 15 05:10:55.338284 systemd[1720]: Reached target timers.target - Timers.
Jul 15 05:10:55.339410 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 15 05:10:55.347740 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 15 05:10:55.347927 systemd[1720]: Reached target sockets.target - Sockets.
Jul 15 05:10:55.348009 systemd[1720]: Reached target basic.target - Basic System.
Jul 15 05:10:55.348095 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 15 05:10:55.349034 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 15 05:10:55.349417 systemd[1720]: Reached target default.target - Main User Target.
Jul 15 05:10:55.349524 systemd[1720]: Startup finished in 131ms.
Jul 15 05:10:55.610416 systemd[1]: Started sshd@1-172.236.104.60:22-139.178.68.195:57358.service - OpenSSH per-connection server daemon (139.178.68.195:57358).
Jul 15 05:10:55.954130 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 57358 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:10:55.955238 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:10:55.959323 systemd-logind[1540]: New session 2 of user core.
Jul 15 05:10:55.966176 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 15 05:10:56.205713 sshd[1734]: Connection closed by 139.178.68.195 port 57358
Jul 15 05:10:56.206296 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Jul 15 05:10:56.215740 systemd-logind[1540]: Session 2 logged out. Waiting for processes to exit.
Jul 15 05:10:56.215868 systemd[1]: sshd@1-172.236.104.60:22-139.178.68.195:57358.service: Deactivated successfully.
Jul 15 05:10:56.217492 systemd[1]: session-2.scope: Deactivated successfully.
Jul 15 05:10:56.218571 systemd-logind[1540]: Removed session 2.
Jul 15 05:10:56.278823 systemd[1]: Started sshd@2-172.236.104.60:22-139.178.68.195:57370.service - OpenSSH per-connection server daemon (139.178.68.195:57370).
Jul 15 05:10:56.632277 sshd[1740]: Accepted publickey for core from 139.178.68.195 port 57370 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:10:56.634409 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:10:56.640575 systemd-logind[1540]: New session 3 of user core.
Jul 15 05:10:56.646203 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 15 05:10:56.881230 sshd[1743]: Connection closed by 139.178.68.195 port 57370
Jul 15 05:10:56.881854 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
Jul 15 05:10:56.886760 systemd[1]: sshd@2-172.236.104.60:22-139.178.68.195:57370.service: Deactivated successfully.
Jul 15 05:10:56.889307 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 05:10:56.890189 systemd-logind[1540]: Session 3 logged out. Waiting for processes to exit.
Jul 15 05:10:56.891412 systemd-logind[1540]: Removed session 3.
Jul 15 05:10:56.943230 systemd[1]: Started sshd@3-172.236.104.60:22-139.178.68.195:57382.service - OpenSSH per-connection server daemon (139.178.68.195:57382).
Jul 15 05:10:57.300711 sshd[1749]: Accepted publickey for core from 139.178.68.195 port 57382 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:10:57.302883 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:10:57.308464 systemd-logind[1540]: New session 4 of user core.
Jul 15 05:10:57.314266 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 15 05:10:57.555901 sshd[1752]: Connection closed by 139.178.68.195 port 57382
Jul 15 05:10:57.556468 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
Jul 15 05:10:57.561799 systemd[1]: sshd@3-172.236.104.60:22-139.178.68.195:57382.service: Deactivated successfully.
Jul 15 05:10:57.563960 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 05:10:57.564749 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit.
Jul 15 05:10:57.565844 systemd-logind[1540]: Removed session 4.
Jul 15 05:10:57.621298 systemd[1]: Started sshd@4-172.236.104.60:22-139.178.68.195:57398.service - OpenSSH per-connection server daemon (139.178.68.195:57398).
Jul 15 05:10:57.970487 sshd[1758]: Accepted publickey for core from 139.178.68.195 port 57398 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:10:57.972394 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:10:57.977591 systemd-logind[1540]: New session 5 of user core.
Jul 15 05:10:57.984196 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 15 05:10:58.183803 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 15 05:10:58.184093 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 05:10:58.200123 sudo[1762]: pam_unix(sudo:session): session closed for user root
Jul 15 05:10:58.252182 sshd[1761]: Connection closed by 139.178.68.195 port 57398
Jul 15 05:10:58.252858 sshd-session[1758]: pam_unix(sshd:session): session closed for user core
Jul 15 05:10:58.258199 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit.
Jul 15 05:10:58.258436 systemd[1]: sshd@4-172.236.104.60:22-139.178.68.195:57398.service: Deactivated successfully.
Jul 15 05:10:58.260448 systemd[1]: session-5.scope: Deactivated successfully.
Jul 15 05:10:58.261959 systemd-logind[1540]: Removed session 5.
Jul 15 05:10:58.316673 systemd[1]: Started sshd@5-172.236.104.60:22-139.178.68.195:57408.service - OpenSSH per-connection server daemon (139.178.68.195:57408).
Jul 15 05:10:58.668074 sshd[1768]: Accepted publickey for core from 139.178.68.195 port 57408 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:10:58.669760 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:10:58.675438 systemd-logind[1540]: New session 6 of user core.
Jul 15 05:10:58.681217 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 15 05:10:58.869373 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 05:10:58.869663 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 05:10:58.876134 sudo[1773]: pam_unix(sudo:session): session closed for user root
Jul 15 05:10:58.881548 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 15 05:10:58.881787 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 05:10:58.890446 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 05:10:58.933856 augenrules[1795]: No rules
Jul 15 05:10:58.935726 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 05:10:58.936190 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 05:10:58.938348 sudo[1772]: pam_unix(sudo:session): session closed for user root
Jul 15 05:10:58.990686 sshd[1771]: Connection closed by 139.178.68.195 port 57408
Jul 15 05:10:58.991284 sshd-session[1768]: pam_unix(sshd:session): session closed for user core
Jul 15 05:10:58.995826 systemd[1]: sshd@5-172.236.104.60:22-139.178.68.195:57408.service: Deactivated successfully.
Jul 15 05:10:58.997717 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 05:10:59.001168 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit.
Jul 15 05:10:59.002776 systemd-logind[1540]: Removed session 6.
Jul 15 05:10:59.052800 systemd[1]: Started sshd@6-172.236.104.60:22-139.178.68.195:57414.service - OpenSSH per-connection server daemon (139.178.68.195:57414).
Jul 15 05:10:59.405217 sshd[1804]: Accepted publickey for core from 139.178.68.195 port 57414 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:10:59.406567 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:10:59.410830 systemd-logind[1540]: New session 7 of user core.
Jul 15 05:10:59.416181 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 15 05:10:59.610227 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 05:10:59.610578 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 05:10:59.856144 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 15 05:10:59.870361 (dockerd)[1826]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 15 05:11:00.059112 dockerd[1826]: time="2025-07-15T05:11:00.058957212Z" level=info msg="Starting up"
Jul 15 05:11:00.062937 dockerd[1826]: time="2025-07-15T05:11:00.062838466Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 15 05:11:00.079062 dockerd[1826]: time="2025-07-15T05:11:00.079022672Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jul 15 05:11:00.124198 dockerd[1826]: time="2025-07-15T05:11:00.124054347Z" level=info msg="Loading containers: start."
Jul 15 05:11:00.136112 kernel: Initializing XFRM netlink socket
Jul 15 05:11:00.384189 systemd-networkd[1443]: docker0: Link UP
Jul 15 05:11:00.387133 dockerd[1826]: time="2025-07-15T05:11:00.387043550Z" level=info msg="Loading containers: done."
Jul 15 05:11:00.401767 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1786198988-merged.mount: Deactivated successfully.
Jul 15 05:11:00.404606 dockerd[1826]: time="2025-07-15T05:11:00.404565287Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 05:11:00.404760 dockerd[1826]: time="2025-07-15T05:11:00.404632367Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jul 15 05:11:00.404760 dockerd[1826]: time="2025-07-15T05:11:00.404712307Z" level=info msg="Initializing buildkit"
Jul 15 05:11:00.425217 dockerd[1826]: time="2025-07-15T05:11:00.425193148Z" level=info msg="Completed buildkit initialization"
Jul 15 05:11:00.434020 dockerd[1826]: time="2025-07-15T05:11:00.433977687Z" level=info msg="Daemon has completed initialization"
Jul 15 05:11:00.434269 dockerd[1826]: time="2025-07-15T05:11:00.434197687Z" level=info msg="API listen on /run/docker.sock"
Jul 15 05:11:00.434313 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 15 05:11:01.033829 containerd[1565]: time="2025-07-15T05:11:01.033779186Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 15 05:11:01.891108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2752012256.mount: Deactivated successfully.
Jul 15 05:11:02.925047 containerd[1565]: time="2025-07-15T05:11:02.924962137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:11:02.925787 containerd[1565]: time="2025-07-15T05:11:02.925764948Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744"
Jul 15 05:11:02.926462 containerd[1565]: time="2025-07-15T05:11:02.926276408Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:11:02.927856 containerd[1565]: time="2025-07-15T05:11:02.927835280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:11:02.928604 containerd[1565]: time="2025-07-15T05:11:02.928579321Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4fb\", size \"28074544\" in 1.894745895s"
Jul 15 05:11:02.928675 containerd[1565]: time="2025-07-15T05:11:02.928663061Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 15 05:11:02.929428 containerd[1565]: time="2025-07-15T05:11:02.929401441Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 15 05:11:04.318131 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 05:11:04.320210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 05:11:04.492149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 05:11:04.502649 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 05:11:04.571646 kubelet[2099]: E0715 05:11:04.571537 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 05:11:04.579799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 05:11:04.580168 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 05:11:04.580742 systemd[1]: kubelet.service: Consumed 186ms CPU time, 109.3M memory peak.
Jul 15 05:11:04.774100 containerd[1565]: time="2025-07-15T05:11:04.773510525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:11:04.774630 containerd[1565]: time="2025-07-15T05:11:04.774578566Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jul 15 05:11:04.774837 containerd[1565]: time="2025-07-15T05:11:04.774812636Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:11:04.776961 containerd[1565]: time="2025-07-15T05:11:04.776921879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:11:04.777793 containerd[1565]: time="2025-07-15T05:11:04.777765799Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.848337418s"
Jul 15 05:11:04.777865 containerd[1565]: time="2025-07-15T05:11:04.777849749Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 15 05:11:04.778744 containerd[1565]: time="2025-07-15T05:11:04.778693220Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 15 05:11:06.216245 containerd[1565]: time="2025-07-15T05:11:06.216168057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:11:06.217109 containerd[1565]: time="2025-07-15T05:11:06.217061428Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jul 15 05:11:06.217766 containerd[1565]: time="2025-07-15T05:11:06.217707219Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:11:06.219778 containerd[1565]: time="2025-07-15T05:11:06.219738931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:11:06.220729 containerd[1565]: time="2025-07-15T05:11:06.220595672Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.441870492s"
Jul 15 05:11:06.220729 containerd[1565]: time="2025-07-15T05:11:06.220625072Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 15 05:11:06.221716 containerd[1565]: time="2025-07-15T05:11:06.221652793Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 15 05:11:07.434106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1787563808.mount: Deactivated successfully.
Jul 15 05:11:07.754735 containerd[1565]: time="2025-07-15T05:11:07.754295615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:07.755881 containerd[1565]: time="2025-07-15T05:11:07.755837637Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 15 05:11:07.756928 containerd[1565]: time="2025-07-15T05:11:07.756006917Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:07.757420 containerd[1565]: time="2025-07-15T05:11:07.757384368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:07.757924 containerd[1565]: time="2025-07-15T05:11:07.757902719Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.536110336s" Jul 15 05:11:07.757980 containerd[1565]: time="2025-07-15T05:11:07.757968469Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 15 05:11:07.758608 containerd[1565]: time="2025-07-15T05:11:07.758591000Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 05:11:08.446829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount996208813.mount: Deactivated successfully. 
Jul 15 05:11:09.095732 containerd[1565]: time="2025-07-15T05:11:09.095654736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:09.096798 containerd[1565]: time="2025-07-15T05:11:09.096500367Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 15 05:11:09.097351 containerd[1565]: time="2025-07-15T05:11:09.097315388Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:09.099583 containerd[1565]: time="2025-07-15T05:11:09.099546590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:09.100581 containerd[1565]: time="2025-07-15T05:11:09.100543701Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.341867341s" Jul 15 05:11:09.100712 containerd[1565]: time="2025-07-15T05:11:09.100694291Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 15 05:11:09.101879 containerd[1565]: time="2025-07-15T05:11:09.101845532Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 05:11:09.780364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount326065547.mount: Deactivated successfully. 
Jul 15 05:11:09.785970 containerd[1565]: time="2025-07-15T05:11:09.785886766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:11:09.786864 containerd[1565]: time="2025-07-15T05:11:09.786836587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 15 05:11:09.787731 containerd[1565]: time="2025-07-15T05:11:09.787681208Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:11:09.791840 containerd[1565]: time="2025-07-15T05:11:09.791796002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:11:09.792696 containerd[1565]: time="2025-07-15T05:11:09.792367843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 690.48475ms" Jul 15 05:11:09.792696 containerd[1565]: time="2025-07-15T05:11:09.792400473Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 05:11:09.792996 containerd[1565]: time="2025-07-15T05:11:09.792975233Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 15 05:11:10.501315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3271807225.mount: Deactivated 
successfully. Jul 15 05:11:11.719892 containerd[1565]: time="2025-07-15T05:11:11.719816680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:11.720957 containerd[1565]: time="2025-07-15T05:11:11.720703061Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 15 05:11:11.721632 containerd[1565]: time="2025-07-15T05:11:11.721593502Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:11.723942 containerd[1565]: time="2025-07-15T05:11:11.723920864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:11.725172 containerd[1565]: time="2025-07-15T05:11:11.725148425Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.932101581s" Jul 15 05:11:11.725247 containerd[1565]: time="2025-07-15T05:11:11.725230185Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 15 05:11:13.506314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:11:13.506437 systemd[1]: kubelet.service: Consumed 186ms CPU time, 109.3M memory peak. Jul 15 05:11:13.509953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 05:11:13.535217 systemd[1]: Reload requested from client PID 2257 ('systemctl') (unit session-7.scope)... Jul 15 05:11:13.535240 systemd[1]: Reloading... Jul 15 05:11:13.684157 zram_generator::config[2304]: No configuration found. Jul 15 05:11:13.785565 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:11:13.887450 systemd[1]: Reloading finished in 351 ms. Jul 15 05:11:13.950638 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 05:11:13.950735 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 05:11:13.951019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:11:13.951060 systemd[1]: kubelet.service: Consumed 134ms CPU time, 98.3M memory peak. Jul 15 05:11:13.952425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:11:14.133513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:11:14.137298 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:11:14.171105 kubelet[2355]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:11:14.171105 kubelet[2355]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 05:11:14.171105 kubelet[2355]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:11:14.171105 kubelet[2355]: I0715 05:11:14.170942 2355 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:11:14.527665 kubelet[2355]: I0715 05:11:14.527547 2355 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 05:11:14.529102 kubelet[2355]: I0715 05:11:14.528158 2355 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:11:14.529102 kubelet[2355]: I0715 05:11:14.528489 2355 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 05:11:14.558252 kubelet[2355]: E0715 05:11:14.558224 2355 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.236.104.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.104.60:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:11:14.559670 kubelet[2355]: I0715 05:11:14.559656 2355 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:11:14.568298 kubelet[2355]: I0715 05:11:14.568280 2355 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:11:14.572777 kubelet[2355]: I0715 05:11:14.572756 2355 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:11:14.573430 kubelet[2355]: I0715 05:11:14.573408 2355 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 05:11:14.573604 kubelet[2355]: I0715 05:11:14.573574 2355 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:11:14.573738 kubelet[2355]: I0715 05:11:14.573600 2355 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-104-60","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPo
licyOptions":null,"CgroupVersion":2} Jul 15 05:11:14.573835 kubelet[2355]: I0715 05:11:14.573750 2355 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:11:14.573835 kubelet[2355]: I0715 05:11:14.573758 2355 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 05:11:14.573903 kubelet[2355]: I0715 05:11:14.573889 2355 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:11:14.576808 kubelet[2355]: I0715 05:11:14.576424 2355 kubelet.go:408] "Attempting to sync node with API server" Jul 15 05:11:14.576808 kubelet[2355]: I0715 05:11:14.576443 2355 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:11:14.576808 kubelet[2355]: I0715 05:11:14.576474 2355 kubelet.go:314] "Adding apiserver pod source" Jul 15 05:11:14.576808 kubelet[2355]: I0715 05:11:14.576495 2355 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:11:14.581588 kubelet[2355]: W0715 05:11:14.581557 2355 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.236.104.60:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-104-60&limit=500&resourceVersion=0": dial tcp 172.236.104.60:6443: connect: connection refused Jul 15 05:11:14.581676 kubelet[2355]: E0715 05:11:14.581660 2355 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.236.104.60:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-104-60&limit=500&resourceVersion=0\": dial tcp 172.236.104.60:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:11:14.581774 kubelet[2355]: I0715 05:11:14.581761 2355 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:11:14.582153 kubelet[2355]: I0715 05:11:14.582140 2355 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in 
static kubelet mode" Jul 15 05:11:14.582257 kubelet[2355]: W0715 05:11:14.582246 2355 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 05:11:14.583987 kubelet[2355]: I0715 05:11:14.583959 2355 server.go:1274] "Started kubelet" Jul 15 05:11:14.585697 kubelet[2355]: W0715 05:11:14.584768 2355 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.236.104.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.236.104.60:6443: connect: connection refused Jul 15 05:11:14.585697 kubelet[2355]: E0715 05:11:14.584800 2355 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.236.104.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.104.60:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:11:14.585697 kubelet[2355]: I0715 05:11:14.584945 2355 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:11:14.585963 kubelet[2355]: I0715 05:11:14.585945 2355 server.go:449] "Adding debug handlers to kubelet server" Jul 15 05:11:14.593551 kubelet[2355]: I0715 05:11:14.593519 2355 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:11:14.593775 kubelet[2355]: I0715 05:11:14.593753 2355 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:11:14.594546 kubelet[2355]: I0715 05:11:14.594532 2355 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:11:14.595010 kubelet[2355]: E0715 05:11:14.593940 2355 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.104.60:6443/api/v1/namespaces/default/events\": dial tcp 
172.236.104.60:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-104-60.185254a5737529a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-104-60,UID:172-236-104-60,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-104-60,},FirstTimestamp:2025-07-15 05:11:14.583939493 +0000 UTC m=+0.442678613,LastTimestamp:2025-07-15 05:11:14.583939493 +0000 UTC m=+0.442678613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-104-60,}" Jul 15 05:11:14.597295 kubelet[2355]: E0715 05:11:14.597273 2355 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:11:14.597425 kubelet[2355]: I0715 05:11:14.597407 2355 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:11:14.599858 kubelet[2355]: I0715 05:11:14.599838 2355 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 05:11:14.599933 kubelet[2355]: I0715 05:11:14.599927 2355 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 05:11:14.599979 kubelet[2355]: I0715 05:11:14.599962 2355 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:11:14.600593 kubelet[2355]: W0715 05:11:14.600563 2355 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.236.104.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.104.60:6443: connect: connection refused Jul 15 05:11:14.600629 kubelet[2355]: E0715 05:11:14.600600 2355 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.236.104.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.104.60:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:11:14.600756 kubelet[2355]: I0715 05:11:14.600739 2355 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:11:14.600818 kubelet[2355]: I0715 05:11:14.600798 2355 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:11:14.601712 kubelet[2355]: E0715 05:11:14.601688 2355 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-236-104-60\" not found" Jul 15 05:11:14.601772 kubelet[2355]: E0715 05:11:14.601748 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.104.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-104-60?timeout=10s\": dial tcp 172.236.104.60:6443: connect: connection refused" interval="200ms" Jul 15 05:11:14.602070 kubelet[2355]: I0715 05:11:14.602043 2355 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:11:14.611956 kubelet[2355]: I0715 05:11:14.611931 2355 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:11:14.613801 kubelet[2355]: I0715 05:11:14.613048 2355 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 05:11:14.613801 kubelet[2355]: I0715 05:11:14.613064 2355 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 05:11:14.613801 kubelet[2355]: I0715 05:11:14.613590 2355 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 05:11:14.613801 kubelet[2355]: E0715 05:11:14.613638 2355 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:11:14.621571 kubelet[2355]: W0715 05:11:14.621533 2355 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.236.104.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.104.60:6443: connect: connection refused Jul 15 05:11:14.621629 kubelet[2355]: E0715 05:11:14.621572 2355 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.236.104.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.104.60:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:11:14.630665 kubelet[2355]: I0715 05:11:14.630653 2355 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 05:11:14.630745 kubelet[2355]: I0715 05:11:14.630735 2355 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 05:11:14.630886 kubelet[2355]: I0715 05:11:14.630814 2355 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:11:14.632849 kubelet[2355]: I0715 05:11:14.632837 2355 policy_none.go:49] "None policy: Start" Jul 15 05:11:14.633645 kubelet[2355]: I0715 05:11:14.633634 2355 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 05:11:14.633793 kubelet[2355]: I0715 05:11:14.633726 2355 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:11:14.640113 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Jul 15 05:11:14.649908 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 05:11:14.664553 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 05:11:14.665968 kubelet[2355]: I0715 05:11:14.665898 2355 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:11:14.666096 kubelet[2355]: I0715 05:11:14.666060 2355 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:11:14.666136 kubelet[2355]: I0715 05:11:14.666099 2355 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:11:14.666643 kubelet[2355]: I0715 05:11:14.666628 2355 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:11:14.668174 kubelet[2355]: E0715 05:11:14.668157 2355 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-104-60\" not found" Jul 15 05:11:14.722466 systemd[1]: Created slice kubepods-burstable-pod58a9b7cf79a4b132047b0b72f3abf5ed.slice - libcontainer container kubepods-burstable-pod58a9b7cf79a4b132047b0b72f3abf5ed.slice. Jul 15 05:11:14.736391 systemd[1]: Created slice kubepods-burstable-podc4e37f08928fe309bc97f96a59e69008.slice - libcontainer container kubepods-burstable-podc4e37f08928fe309bc97f96a59e69008.slice. Jul 15 05:11:14.740328 systemd[1]: Created slice kubepods-burstable-pod9c24505b6d56799d2cfb3395af5501c9.slice - libcontainer container kubepods-burstable-pod9c24505b6d56799d2cfb3395af5501c9.slice. 
Jul 15 05:11:14.768244 kubelet[2355]: I0715 05:11:14.768206 2355 kubelet_node_status.go:72] "Attempting to register node" node="172-236-104-60" Jul 15 05:11:14.768489 kubelet[2355]: E0715 05:11:14.768470 2355 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.236.104.60:6443/api/v1/nodes\": dial tcp 172.236.104.60:6443: connect: connection refused" node="172-236-104-60" Jul 15 05:11:14.802176 kubelet[2355]: E0715 05:11:14.802100 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.104.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-104-60?timeout=10s\": dial tcp 172.236.104.60:6443: connect: connection refused" interval="400ms" Jul 15 05:11:14.901528 kubelet[2355]: I0715 05:11:14.901495 2355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4e37f08928fe309bc97f96a59e69008-flexvolume-dir\") pod \"kube-controller-manager-172-236-104-60\" (UID: \"c4e37f08928fe309bc97f96a59e69008\") " pod="kube-system/kube-controller-manager-172-236-104-60" Jul 15 05:11:14.901528 kubelet[2355]: I0715 05:11:14.901518 2355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c24505b6d56799d2cfb3395af5501c9-kubeconfig\") pod \"kube-scheduler-172-236-104-60\" (UID: \"9c24505b6d56799d2cfb3395af5501c9\") " pod="kube-system/kube-scheduler-172-236-104-60" Jul 15 05:11:14.901603 kubelet[2355]: I0715 05:11:14.901548 2355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4e37f08928fe309bc97f96a59e69008-kubeconfig\") pod \"kube-controller-manager-172-236-104-60\" (UID: \"c4e37f08928fe309bc97f96a59e69008\") " pod="kube-system/kube-controller-manager-172-236-104-60" Jul 15 05:11:14.901603 
kubelet[2355]: I0715 05:11:14.901568 2355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4e37f08928fe309bc97f96a59e69008-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-104-60\" (UID: \"c4e37f08928fe309bc97f96a59e69008\") " pod="kube-system/kube-controller-manager-172-236-104-60" Jul 15 05:11:14.901603 kubelet[2355]: I0715 05:11:14.901586 2355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58a9b7cf79a4b132047b0b72f3abf5ed-ca-certs\") pod \"kube-apiserver-172-236-104-60\" (UID: \"58a9b7cf79a4b132047b0b72f3abf5ed\") " pod="kube-system/kube-apiserver-172-236-104-60" Jul 15 05:11:14.901603 kubelet[2355]: I0715 05:11:14.901598 2355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58a9b7cf79a4b132047b0b72f3abf5ed-k8s-certs\") pod \"kube-apiserver-172-236-104-60\" (UID: \"58a9b7cf79a4b132047b0b72f3abf5ed\") " pod="kube-system/kube-apiserver-172-236-104-60" Jul 15 05:11:14.901689 kubelet[2355]: I0715 05:11:14.901610 2355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58a9b7cf79a4b132047b0b72f3abf5ed-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-104-60\" (UID: \"58a9b7cf79a4b132047b0b72f3abf5ed\") " pod="kube-system/kube-apiserver-172-236-104-60" Jul 15 05:11:14.901689 kubelet[2355]: I0715 05:11:14.901621 2355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4e37f08928fe309bc97f96a59e69008-ca-certs\") pod \"kube-controller-manager-172-236-104-60\" (UID: \"c4e37f08928fe309bc97f96a59e69008\") " 
pod="kube-system/kube-controller-manager-172-236-104-60" Jul 15 05:11:14.901689 kubelet[2355]: I0715 05:11:14.901632 2355 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4e37f08928fe309bc97f96a59e69008-k8s-certs\") pod \"kube-controller-manager-172-236-104-60\" (UID: \"c4e37f08928fe309bc97f96a59e69008\") " pod="kube-system/kube-controller-manager-172-236-104-60" Jul 15 05:11:14.970036 kubelet[2355]: I0715 05:11:14.970019 2355 kubelet_node_status.go:72] "Attempting to register node" node="172-236-104-60" Jul 15 05:11:14.970326 kubelet[2355]: E0715 05:11:14.970265 2355 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.236.104.60:6443/api/v1/nodes\": dial tcp 172.236.104.60:6443: connect: connection refused" node="172-236-104-60" Jul 15 05:11:15.035139 kubelet[2355]: E0715 05:11:15.034835 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:15.035448 containerd[1565]: time="2025-07-15T05:11:15.035410345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-104-60,Uid:58a9b7cf79a4b132047b0b72f3abf5ed,Namespace:kube-system,Attempt:0,}" Jul 15 05:11:15.039352 kubelet[2355]: E0715 05:11:15.039253 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:15.040019 containerd[1565]: time="2025-07-15T05:11:15.039909949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-104-60,Uid:c4e37f08928fe309bc97f96a59e69008,Namespace:kube-system,Attempt:0,}" Jul 15 05:11:15.042571 kubelet[2355]: E0715 05:11:15.042493 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:15.043537 containerd[1565]: time="2025-07-15T05:11:15.043498703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-104-60,Uid:9c24505b6d56799d2cfb3395af5501c9,Namespace:kube-system,Attempt:0,}" Jul 15 05:11:15.077572 containerd[1565]: time="2025-07-15T05:11:15.077349487Z" level=info msg="connecting to shim 6173df3ef4e9a9a6023ecd9cc45984683d71495a417949d5cb7f9473fd5db31e" address="unix:///run/containerd/s/6cea49d1c669ccf6d2782baf431e7ba0ee6b66d2d8bb40be32a1c88e9924a9d5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:11:15.080885 containerd[1565]: time="2025-07-15T05:11:15.080852210Z" level=info msg="connecting to shim ff81923054612d70b14e7a04e0f58cb90d045df410fb271b823685634b457d9e" address="unix:///run/containerd/s/1ef25c0b43c605a5d0aa8d70135cb358160de61fe5083d432fd315edee1e9a0e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:11:15.082599 containerd[1565]: time="2025-07-15T05:11:15.082577742Z" level=info msg="connecting to shim 8933e9703a80d917b27f8f605539f88b8d3c37db09260136dfeacd023d61015d" address="unix:///run/containerd/s/1853d3c71ad22d6bccc12bc8383490e8a0970439603fd03ffc8d6c778e89b819" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:11:15.126220 systemd[1]: Started cri-containerd-ff81923054612d70b14e7a04e0f58cb90d045df410fb271b823685634b457d9e.scope - libcontainer container ff81923054612d70b14e7a04e0f58cb90d045df410fb271b823685634b457d9e. Jul 15 05:11:15.132156 systemd[1]: Started cri-containerd-6173df3ef4e9a9a6023ecd9cc45984683d71495a417949d5cb7f9473fd5db31e.scope - libcontainer container 6173df3ef4e9a9a6023ecd9cc45984683d71495a417949d5cb7f9473fd5db31e. Jul 15 05:11:15.134624 systemd[1]: Started cri-containerd-8933e9703a80d917b27f8f605539f88b8d3c37db09260136dfeacd023d61015d.scope - libcontainer container 8933e9703a80d917b27f8f605539f88b8d3c37db09260136dfeacd023d61015d. 
Jul 15 05:11:15.203892 kubelet[2355]: E0715 05:11:15.203843 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.104.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-104-60?timeout=10s\": dial tcp 172.236.104.60:6443: connect: connection refused" interval="800ms" Jul 15 05:11:15.208215 containerd[1565]: time="2025-07-15T05:11:15.208159957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-104-60,Uid:c4e37f08928fe309bc97f96a59e69008,Namespace:kube-system,Attempt:0,} returns sandbox id \"8933e9703a80d917b27f8f605539f88b8d3c37db09260136dfeacd023d61015d\"" Jul 15 05:11:15.210302 kubelet[2355]: E0715 05:11:15.210060 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:15.218432 containerd[1565]: time="2025-07-15T05:11:15.218403238Z" level=info msg="CreateContainer within sandbox \"8933e9703a80d917b27f8f605539f88b8d3c37db09260136dfeacd023d61015d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 05:11:15.221373 containerd[1565]: time="2025-07-15T05:11:15.221314581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-104-60,Uid:58a9b7cf79a4b132047b0b72f3abf5ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"6173df3ef4e9a9a6023ecd9cc45984683d71495a417949d5cb7f9473fd5db31e\"" Jul 15 05:11:15.224888 kubelet[2355]: E0715 05:11:15.224817 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:15.232683 containerd[1565]: time="2025-07-15T05:11:15.232618802Z" level=info msg="CreateContainer within sandbox \"6173df3ef4e9a9a6023ecd9cc45984683d71495a417949d5cb7f9473fd5db31e\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 05:11:15.234479 containerd[1565]: time="2025-07-15T05:11:15.234456354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-104-60,Uid:9c24505b6d56799d2cfb3395af5501c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff81923054612d70b14e7a04e0f58cb90d045df410fb271b823685634b457d9e\"" Jul 15 05:11:15.235803 kubelet[2355]: E0715 05:11:15.235775 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:15.236048 containerd[1565]: time="2025-07-15T05:11:15.235879205Z" level=info msg="Container 1d2b46bb3d0db5d2aecbe2a8a789422f4161c591039b801fb8d062c7c260867b: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:11:15.238526 containerd[1565]: time="2025-07-15T05:11:15.238508068Z" level=info msg="CreateContainer within sandbox \"ff81923054612d70b14e7a04e0f58cb90d045df410fb271b823685634b457d9e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 05:11:15.242099 containerd[1565]: time="2025-07-15T05:11:15.242057821Z" level=info msg="CreateContainer within sandbox \"8933e9703a80d917b27f8f605539f88b8d3c37db09260136dfeacd023d61015d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1d2b46bb3d0db5d2aecbe2a8a789422f4161c591039b801fb8d062c7c260867b\"" Jul 15 05:11:15.242901 containerd[1565]: time="2025-07-15T05:11:15.242846532Z" level=info msg="StartContainer for \"1d2b46bb3d0db5d2aecbe2a8a789422f4161c591039b801fb8d062c7c260867b\"" Jul 15 05:11:15.244586 containerd[1565]: time="2025-07-15T05:11:15.244512644Z" level=info msg="Container 06fa1547d26b8b93857b56e0a389ba92f8f4c3c4ba10484b98a4ca396ced478a: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:11:15.244848 containerd[1565]: time="2025-07-15T05:11:15.244830254Z" level=info msg="connecting to shim 
1d2b46bb3d0db5d2aecbe2a8a789422f4161c591039b801fb8d062c7c260867b" address="unix:///run/containerd/s/1853d3c71ad22d6bccc12bc8383490e8a0970439603fd03ffc8d6c778e89b819" protocol=ttrpc version=3 Jul 15 05:11:15.254754 containerd[1565]: time="2025-07-15T05:11:15.254654144Z" level=info msg="Container d9f1b8916f9b8dc5fcddd0ecaba9637782f75db82f78a24cf44162a06d61c6a4: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:11:15.258751 containerd[1565]: time="2025-07-15T05:11:15.258731518Z" level=info msg="CreateContainer within sandbox \"6173df3ef4e9a9a6023ecd9cc45984683d71495a417949d5cb7f9473fd5db31e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"06fa1547d26b8b93857b56e0a389ba92f8f4c3c4ba10484b98a4ca396ced478a\"" Jul 15 05:11:15.259247 containerd[1565]: time="2025-07-15T05:11:15.259222238Z" level=info msg="StartContainer for \"06fa1547d26b8b93857b56e0a389ba92f8f4c3c4ba10484b98a4ca396ced478a\"" Jul 15 05:11:15.260252 containerd[1565]: time="2025-07-15T05:11:15.260214959Z" level=info msg="connecting to shim 06fa1547d26b8b93857b56e0a389ba92f8f4c3c4ba10484b98a4ca396ced478a" address="unix:///run/containerd/s/6cea49d1c669ccf6d2782baf431e7ba0ee6b66d2d8bb40be32a1c88e9924a9d5" protocol=ttrpc version=3 Jul 15 05:11:15.265326 containerd[1565]: time="2025-07-15T05:11:15.265304385Z" level=info msg="CreateContainer within sandbox \"ff81923054612d70b14e7a04e0f58cb90d045df410fb271b823685634b457d9e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d9f1b8916f9b8dc5fcddd0ecaba9637782f75db82f78a24cf44162a06d61c6a4\"" Jul 15 05:11:15.266988 containerd[1565]: time="2025-07-15T05:11:15.266943016Z" level=info msg="StartContainer for \"d9f1b8916f9b8dc5fcddd0ecaba9637782f75db82f78a24cf44162a06d61c6a4\"" Jul 15 05:11:15.267836 containerd[1565]: time="2025-07-15T05:11:15.267810927Z" level=info msg="connecting to shim d9f1b8916f9b8dc5fcddd0ecaba9637782f75db82f78a24cf44162a06d61c6a4" 
address="unix:///run/containerd/s/1ef25c0b43c605a5d0aa8d70135cb358160de61fe5083d432fd315edee1e9a0e" protocol=ttrpc version=3 Jul 15 05:11:15.277235 systemd[1]: Started cri-containerd-1d2b46bb3d0db5d2aecbe2a8a789422f4161c591039b801fb8d062c7c260867b.scope - libcontainer container 1d2b46bb3d0db5d2aecbe2a8a789422f4161c591039b801fb8d062c7c260867b. Jul 15 05:11:15.282162 systemd[1]: Started cri-containerd-06fa1547d26b8b93857b56e0a389ba92f8f4c3c4ba10484b98a4ca396ced478a.scope - libcontainer container 06fa1547d26b8b93857b56e0a389ba92f8f4c3c4ba10484b98a4ca396ced478a. Jul 15 05:11:15.304196 systemd[1]: Started cri-containerd-d9f1b8916f9b8dc5fcddd0ecaba9637782f75db82f78a24cf44162a06d61c6a4.scope - libcontainer container d9f1b8916f9b8dc5fcddd0ecaba9637782f75db82f78a24cf44162a06d61c6a4. Jul 15 05:11:15.357447 containerd[1565]: time="2025-07-15T05:11:15.356088265Z" level=info msg="StartContainer for \"1d2b46bb3d0db5d2aecbe2a8a789422f4161c591039b801fb8d062c7c260867b\" returns successfully" Jul 15 05:11:15.376781 kubelet[2355]: I0715 05:11:15.376591 2355 kubelet_node_status.go:72] "Attempting to register node" node="172-236-104-60" Jul 15 05:11:15.377241 kubelet[2355]: E0715 05:11:15.377217 2355 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.236.104.60:6443/api/v1/nodes\": dial tcp 172.236.104.60:6443: connect: connection refused" node="172-236-104-60" Jul 15 05:11:15.378468 containerd[1565]: time="2025-07-15T05:11:15.378373018Z" level=info msg="StartContainer for \"06fa1547d26b8b93857b56e0a389ba92f8f4c3c4ba10484b98a4ca396ced478a\" returns successfully" Jul 15 05:11:15.407422 containerd[1565]: time="2025-07-15T05:11:15.407316347Z" level=info msg="StartContainer for \"d9f1b8916f9b8dc5fcddd0ecaba9637782f75db82f78a24cf44162a06d61c6a4\" returns successfully" Jul 15 05:11:15.431322 kubelet[2355]: W0715 05:11:15.431239 2355 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.236.104.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.104.60:6443: connect: connection refused Jul 15 05:11:15.431322 kubelet[2355]: E0715 05:11:15.431330 2355 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.236.104.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.104.60:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:11:15.638295 kubelet[2355]: E0715 05:11:15.635690 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:15.641477 kubelet[2355]: E0715 05:11:15.641446 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:15.642047 kubelet[2355]: E0715 05:11:15.642021 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:16.185231 kubelet[2355]: I0715 05:11:16.185187 2355 kubelet_node_status.go:72] "Attempting to register node" node="172-236-104-60" Jul 15 05:11:16.645251 kubelet[2355]: E0715 05:11:16.645125 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:16.804559 kubelet[2355]: E0715 05:11:16.804510 2355 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-104-60\" not found" node="172-236-104-60" Jul 15 05:11:16.879667 kubelet[2355]: I0715 05:11:16.879636 2355 kubelet_node_status.go:75] 
"Successfully registered node" node="172-236-104-60" Jul 15 05:11:16.923495 kubelet[2355]: E0715 05:11:16.922832 2355 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{172-236-104-60.185254a5737529a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-104-60,UID:172-236-104-60,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-104-60,},FirstTimestamp:2025-07-15 05:11:14.583939493 +0000 UTC m=+0.442678613,LastTimestamp:2025-07-15 05:11:14.583939493 +0000 UTC m=+0.442678613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-104-60,}" Jul 15 05:11:17.584815 kubelet[2355]: I0715 05:11:17.584774 2355 apiserver.go:52] "Watching apiserver" Jul 15 05:11:17.600058 kubelet[2355]: I0715 05:11:17.600022 2355 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 05:11:17.648358 kubelet[2355]: E0715 05:11:17.648326 2355 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-236-104-60\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-104-60" Jul 15 05:11:17.648744 kubelet[2355]: E0715 05:11:17.648529 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:18.582815 systemd[1]: Reload requested from client PID 2627 ('systemctl') (unit session-7.scope)... Jul 15 05:11:18.583171 systemd[1]: Reloading... Jul 15 05:11:18.666114 zram_generator::config[2686]: No configuration found. 
Jul 15 05:11:18.722306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:11:18.822752 systemd[1]: Reloading finished in 239 ms. Jul 15 05:11:18.858356 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:11:18.880655 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 05:11:18.880926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:11:18.880970 systemd[1]: kubelet.service: Consumed 811ms CPU time, 130.8M memory peak. Jul 15 05:11:18.883206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:11:19.033783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:11:19.042593 (kubelet)[2722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:11:19.084164 kubelet[2722]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:11:19.084164 kubelet[2722]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 05:11:19.084164 kubelet[2722]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 05:11:19.085094 kubelet[2722]: I0715 05:11:19.084603 2722 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:11:19.089663 kubelet[2722]: I0715 05:11:19.089648 2722 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 05:11:19.089729 kubelet[2722]: I0715 05:11:19.089721 2722 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:11:19.089898 kubelet[2722]: I0715 05:11:19.089888 2722 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 05:11:19.090757 kubelet[2722]: I0715 05:11:19.090744 2722 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 05:11:19.092202 kubelet[2722]: I0715 05:11:19.092183 2722 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:11:19.095399 kubelet[2722]: I0715 05:11:19.095383 2722 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:11:19.098553 kubelet[2722]: I0715 05:11:19.098538 2722 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:11:19.098644 kubelet[2722]: I0715 05:11:19.098630 2722 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 05:11:19.098754 kubelet[2722]: I0715 05:11:19.098729 2722 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:11:19.098871 kubelet[2722]: I0715 05:11:19.098752 2722 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-104-60","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPo
licyOptions":null,"CgroupVersion":2} Jul 15 05:11:19.098946 kubelet[2722]: I0715 05:11:19.098881 2722 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:11:19.098946 kubelet[2722]: I0715 05:11:19.098888 2722 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 05:11:19.098946 kubelet[2722]: I0715 05:11:19.098913 2722 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:11:19.099032 kubelet[2722]: I0715 05:11:19.099022 2722 kubelet.go:408] "Attempting to sync node with API server" Jul 15 05:11:19.099057 kubelet[2722]: I0715 05:11:19.099035 2722 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:11:19.100124 kubelet[2722]: I0715 05:11:19.100113 2722 kubelet.go:314] "Adding apiserver pod source" Jul 15 05:11:19.100165 kubelet[2722]: I0715 05:11:19.100133 2722 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:11:19.100753 kubelet[2722]: I0715 05:11:19.100735 2722 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:11:19.101069 kubelet[2722]: I0715 05:11:19.100986 2722 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:11:19.102255 kubelet[2722]: I0715 05:11:19.102235 2722 server.go:1274] "Started kubelet" Jul 15 05:11:19.104335 kubelet[2722]: I0715 05:11:19.104314 2722 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:11:19.111227 kubelet[2722]: I0715 05:11:19.110253 2722 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:11:19.111744 kubelet[2722]: I0715 05:11:19.111728 2722 server.go:449] "Adding debug handlers to kubelet server" Jul 15 05:11:19.114097 kubelet[2722]: I0715 05:11:19.113555 2722 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 05:11:19.114097 kubelet[2722]: E0715 05:11:19.113728 2722 kubelet_node_status.go:453] "Error getting the 
current node from lister" err="node \"172-236-104-60\" not found" Jul 15 05:11:19.114206 kubelet[2722]: I0715 05:11:19.113056 2722 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:11:19.114388 kubelet[2722]: I0715 05:11:19.114378 2722 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:11:19.116103 kubelet[2722]: I0715 05:11:19.114709 2722 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:11:19.116436 kubelet[2722]: I0715 05:11:19.116414 2722 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:11:19.117555 kubelet[2722]: I0715 05:11:19.117296 2722 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 05:11:19.117555 kubelet[2722]: I0715 05:11:19.117317 2722 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 05:11:19.117555 kubelet[2722]: I0715 05:11:19.117329 2722 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 05:11:19.117555 kubelet[2722]: E0715 05:11:19.117368 2722 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:11:19.117555 kubelet[2722]: I0715 05:11:19.115052 2722 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:11:19.118548 kubelet[2722]: I0715 05:11:19.114961 2722 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 05:11:19.120851 kubelet[2722]: I0715 05:11:19.119621 2722 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:11:19.120982 kubelet[2722]: I0715 05:11:19.120965 2722 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:11:19.122207 kubelet[2722]: E0715 05:11:19.121963 2722 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:11:19.124553 kubelet[2722]: I0715 05:11:19.124514 2722 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:11:19.157262 kubelet[2722]: I0715 05:11:19.157243 2722 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 05:11:19.157384 kubelet[2722]: I0715 05:11:19.157374 2722 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 05:11:19.157441 kubelet[2722]: I0715 05:11:19.157433 2722 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:11:19.157578 kubelet[2722]: I0715 05:11:19.157567 2722 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 05:11:19.157634 kubelet[2722]: I0715 05:11:19.157615 2722 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 05:11:19.157671 kubelet[2722]: I0715 05:11:19.157665 2722 policy_none.go:49] "None policy: Start" Jul 15 05:11:19.158113 kubelet[2722]: I0715 05:11:19.158103 2722 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 05:11:19.158195 kubelet[2722]: I0715 05:11:19.158186 2722 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:11:19.158315 kubelet[2722]: I0715 05:11:19.158306 2722 state_mem.go:75] "Updated machine memory state" Jul 15 05:11:19.162722 kubelet[2722]: I0715 05:11:19.162709 2722 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:11:19.163091 kubelet[2722]: I0715 05:11:19.163064 2722 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:11:19.163234 kubelet[2722]: I0715 05:11:19.163184 2722 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 
05:11:19.163388 kubelet[2722]: I0715 05:11:19.163378 2722 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:11:19.265853 kubelet[2722]: I0715 05:11:19.265834 2722 kubelet_node_status.go:72] "Attempting to register node" node="172-236-104-60" Jul 15 05:11:19.271113 kubelet[2722]: I0715 05:11:19.271071 2722 kubelet_node_status.go:111] "Node was previously registered" node="172-236-104-60" Jul 15 05:11:19.271294 kubelet[2722]: I0715 05:11:19.271122 2722 kubelet_node_status.go:75] "Successfully registered node" node="172-236-104-60" Jul 15 05:11:19.319595 kubelet[2722]: I0715 05:11:19.319507 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58a9b7cf79a4b132047b0b72f3abf5ed-k8s-certs\") pod \"kube-apiserver-172-236-104-60\" (UID: \"58a9b7cf79a4b132047b0b72f3abf5ed\") " pod="kube-system/kube-apiserver-172-236-104-60" Jul 15 05:11:19.319595 kubelet[2722]: I0715 05:11:19.319559 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4e37f08928fe309bc97f96a59e69008-ca-certs\") pod \"kube-controller-manager-172-236-104-60\" (UID: \"c4e37f08928fe309bc97f96a59e69008\") " pod="kube-system/kube-controller-manager-172-236-104-60" Jul 15 05:11:19.319595 kubelet[2722]: I0715 05:11:19.319580 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4e37f08928fe309bc97f96a59e69008-flexvolume-dir\") pod \"kube-controller-manager-172-236-104-60\" (UID: \"c4e37f08928fe309bc97f96a59e69008\") " pod="kube-system/kube-controller-manager-172-236-104-60" Jul 15 05:11:19.319595 kubelet[2722]: I0715 05:11:19.319599 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/c4e37f08928fe309bc97f96a59e69008-kubeconfig\") pod \"kube-controller-manager-172-236-104-60\" (UID: \"c4e37f08928fe309bc97f96a59e69008\") " pod="kube-system/kube-controller-manager-172-236-104-60" Jul 15 05:11:19.319856 kubelet[2722]: I0715 05:11:19.319618 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4e37f08928fe309bc97f96a59e69008-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-104-60\" (UID: \"c4e37f08928fe309bc97f96a59e69008\") " pod="kube-system/kube-controller-manager-172-236-104-60" Jul 15 05:11:19.319856 kubelet[2722]: I0715 05:11:19.319638 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58a9b7cf79a4b132047b0b72f3abf5ed-ca-certs\") pod \"kube-apiserver-172-236-104-60\" (UID: \"58a9b7cf79a4b132047b0b72f3abf5ed\") " pod="kube-system/kube-apiserver-172-236-104-60" Jul 15 05:11:19.319856 kubelet[2722]: I0715 05:11:19.319656 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58a9b7cf79a4b132047b0b72f3abf5ed-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-104-60\" (UID: \"58a9b7cf79a4b132047b0b72f3abf5ed\") " pod="kube-system/kube-apiserver-172-236-104-60" Jul 15 05:11:19.319856 kubelet[2722]: I0715 05:11:19.319671 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4e37f08928fe309bc97f96a59e69008-k8s-certs\") pod \"kube-controller-manager-172-236-104-60\" (UID: \"c4e37f08928fe309bc97f96a59e69008\") " pod="kube-system/kube-controller-manager-172-236-104-60" Jul 15 05:11:19.319856 kubelet[2722]: I0715 05:11:19.319690 2722 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c24505b6d56799d2cfb3395af5501c9-kubeconfig\") pod \"kube-scheduler-172-236-104-60\" (UID: \"9c24505b6d56799d2cfb3395af5501c9\") " pod="kube-system/kube-scheduler-172-236-104-60" Jul 15 05:11:19.524142 kubelet[2722]: E0715 05:11:19.523397 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:19.524142 kubelet[2722]: E0715 05:11:19.523614 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:19.524515 kubelet[2722]: E0715 05:11:19.524443 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:19.580603 sudo[2754]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 05:11:19.580870 sudo[2754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 15 05:11:19.820487 sudo[2754]: pam_unix(sudo:session): session closed for user root Jul 15 05:11:20.100403 kubelet[2722]: I0715 05:11:20.100301 2722 apiserver.go:52] "Watching apiserver" Jul 15 05:11:20.119562 kubelet[2722]: I0715 05:11:20.119542 2722 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 05:11:20.141764 kubelet[2722]: E0715 05:11:20.141743 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:20.142204 kubelet[2722]: E0715 05:11:20.142189 2722 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:20.146670 kubelet[2722]: E0715 05:11:20.146652 2722 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-236-104-60\" already exists" pod="kube-system/kube-apiserver-172-236-104-60" Jul 15 05:11:20.146749 kubelet[2722]: E0715 05:11:20.146734 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:20.168753 kubelet[2722]: I0715 05:11:20.168710 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-104-60" podStartSLOduration=1.168699287 podStartE2EDuration="1.168699287s" podCreationTimestamp="2025-07-15 05:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:11:20.168534657 +0000 UTC m=+1.119978361" watchObservedRunningTime="2025-07-15 05:11:20.168699287 +0000 UTC m=+1.120142991" Jul 15 05:11:20.168844 kubelet[2722]: I0715 05:11:20.168783 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-104-60" podStartSLOduration=1.1687800369999999 podStartE2EDuration="1.168780037s" podCreationTimestamp="2025-07-15 05:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:11:20.161122719 +0000 UTC m=+1.112566433" watchObservedRunningTime="2025-07-15 05:11:20.168780037 +0000 UTC m=+1.120223751" Jul 15 05:11:20.183635 kubelet[2722]: I0715 05:11:20.183603 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-104-60" podStartSLOduration=1.183592682 
podStartE2EDuration="1.183592682s" podCreationTimestamp="2025-07-15 05:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:11:20.174760433 +0000 UTC m=+1.126204137" watchObservedRunningTime="2025-07-15 05:11:20.183592682 +0000 UTC m=+1.135036396" Jul 15 05:11:21.067808 sudo[1808]: pam_unix(sudo:session): session closed for user root Jul 15 05:11:21.120763 sshd[1807]: Connection closed by 139.178.68.195 port 57414 Jul 15 05:11:21.121059 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Jul 15 05:11:21.125708 systemd[1]: sshd@6-172.236.104.60:22-139.178.68.195:57414.service: Deactivated successfully. Jul 15 05:11:21.128801 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 05:11:21.129417 systemd[1]: session-7.scope: Consumed 3.292s CPU time, 269.3M memory peak. Jul 15 05:11:21.132038 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. Jul 15 05:11:21.133894 systemd-logind[1540]: Removed session 7. Jul 15 05:11:21.141955 kubelet[2722]: E0715 05:11:21.141919 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:21.673800 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jul 15 05:11:23.404974 kubelet[2722]: E0715 05:11:23.404838 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:24.032028 kubelet[2722]: I0715 05:11:24.031967 2722 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 05:11:24.032505 containerd[1565]: time="2025-07-15T05:11:24.032441639Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 05:11:24.033213 kubelet[2722]: I0715 05:11:24.033022 2722 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 05:11:24.692553 kubelet[2722]: W0715 05:11:24.692471 2722 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172-236-104-60" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-236-104-60' and this object Jul 15 05:11:24.693706 kubelet[2722]: E0715 05:11:24.693479 2722 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172-236-104-60\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-236-104-60' and this object" logger="UnhandledError" Jul 15 05:11:24.693706 kubelet[2722]: W0715 05:11:24.692755 2722 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172-236-104-60" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-236-104-60' and this object Jul 15 
05:11:24.693706 kubelet[2722]: E0715 05:11:24.693521 2722 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:172-236-104-60\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-236-104-60' and this object" logger="UnhandledError" Jul 15 05:11:24.693706 kubelet[2722]: W0715 05:11:24.692770 2722 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172-236-104-60" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-236-104-60' and this object Jul 15 05:11:24.693706 kubelet[2722]: E0715 05:11:24.693536 2722 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:172-236-104-60\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-236-104-60' and this object" logger="UnhandledError" Jul 15 05:11:24.695453 systemd[1]: Created slice kubepods-besteffort-pod7e3cc5de_0ad0_44ce_abda_940e14e58443.slice - libcontainer container kubepods-besteffort-pod7e3cc5de_0ad0_44ce_abda_940e14e58443.slice. Jul 15 05:11:24.706980 systemd[1]: Created slice kubepods-burstable-pod2c758d1a_b4c4_401e_885b_20402cb59af2.slice - libcontainer container kubepods-burstable-pod2c758d1a_b4c4_401e_885b_20402cb59af2.slice. 
Jul 15 05:11:24.756236 kubelet[2722]: I0715 05:11:24.756175 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e3cc5de-0ad0-44ce-abda-940e14e58443-kube-proxy\") pod \"kube-proxy-zhgxq\" (UID: \"7e3cc5de-0ad0-44ce-abda-940e14e58443\") " pod="kube-system/kube-proxy-zhgxq" Jul 15 05:11:24.756236 kubelet[2722]: I0715 05:11:24.756215 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-cilium-run\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") " pod="kube-system/cilium-6kctw" Jul 15 05:11:24.756236 kubelet[2722]: I0715 05:11:24.756233 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c758d1a-b4c4-401e-885b-20402cb59af2-clustermesh-secrets\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") " pod="kube-system/cilium-6kctw" Jul 15 05:11:24.756236 kubelet[2722]: I0715 05:11:24.756249 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-host-proc-sys-kernel\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") " pod="kube-system/cilium-6kctw" Jul 15 05:11:24.756459 kubelet[2722]: I0715 05:11:24.756266 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e3cc5de-0ad0-44ce-abda-940e14e58443-xtables-lock\") pod \"kube-proxy-zhgxq\" (UID: \"7e3cc5de-0ad0-44ce-abda-940e14e58443\") " pod="kube-system/kube-proxy-zhgxq" Jul 15 05:11:24.756459 kubelet[2722]: I0715 05:11:24.756281 2722 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-bpf-maps\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") " pod="kube-system/cilium-6kctw" Jul 15 05:11:24.756459 kubelet[2722]: I0715 05:11:24.756295 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-cilium-cgroup\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") " pod="kube-system/cilium-6kctw" Jul 15 05:11:24.756459 kubelet[2722]: I0715 05:11:24.756314 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgqhw\" (UniqueName: \"kubernetes.io/projected/7e3cc5de-0ad0-44ce-abda-940e14e58443-kube-api-access-cgqhw\") pod \"kube-proxy-zhgxq\" (UID: \"7e3cc5de-0ad0-44ce-abda-940e14e58443\") " pod="kube-system/kube-proxy-zhgxq" Jul 15 05:11:24.756459 kubelet[2722]: I0715 05:11:24.756332 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c758d1a-b4c4-401e-885b-20402cb59af2-cilium-config-path\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") " pod="kube-system/cilium-6kctw" Jul 15 05:11:24.756555 kubelet[2722]: I0715 05:11:24.756352 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e3cc5de-0ad0-44ce-abda-940e14e58443-lib-modules\") pod \"kube-proxy-zhgxq\" (UID: \"7e3cc5de-0ad0-44ce-abda-940e14e58443\") " pod="kube-system/kube-proxy-zhgxq" Jul 15 05:11:24.756555 kubelet[2722]: I0715 05:11:24.756366 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-cni-path\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") " pod="kube-system/cilium-6kctw" Jul 15 05:11:24.756555 kubelet[2722]: I0715 05:11:24.756381 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-etc-cni-netd\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") " pod="kube-system/cilium-6kctw" Jul 15 05:11:24.756555 kubelet[2722]: I0715 05:11:24.756397 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-hostproc\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") " pod="kube-system/cilium-6kctw" Jul 15 05:11:24.756555 kubelet[2722]: I0715 05:11:24.756411 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-xtables-lock\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") " pod="kube-system/cilium-6kctw" Jul 15 05:11:24.756555 kubelet[2722]: I0715 05:11:24.756435 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-lib-modules\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") " pod="kube-system/cilium-6kctw" Jul 15 05:11:24.756668 kubelet[2722]: I0715 05:11:24.756449 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-host-proc-sys-net\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") 
" pod="kube-system/cilium-6kctw" Jul 15 05:11:24.756668 kubelet[2722]: I0715 05:11:24.756463 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c758d1a-b4c4-401e-885b-20402cb59af2-hubble-tls\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") " pod="kube-system/cilium-6kctw" Jul 15 05:11:24.756668 kubelet[2722]: I0715 05:11:24.756481 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2r9b\" (UniqueName: \"kubernetes.io/projected/2c758d1a-b4c4-401e-885b-20402cb59af2-kube-api-access-w2r9b\") pod \"cilium-6kctw\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") " pod="kube-system/cilium-6kctw" Jul 15 05:11:25.004390 kubelet[2722]: E0715 05:11:25.003305 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:25.004483 containerd[1565]: time="2025-07-15T05:11:25.003899690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhgxq,Uid:7e3cc5de-0ad0-44ce-abda-940e14e58443,Namespace:kube-system,Attempt:0,}" Jul 15 05:11:25.026211 containerd[1565]: time="2025-07-15T05:11:25.026165180Z" level=info msg="connecting to shim a44184954c1596fdb1a949f28ace934929bc485ff947cb2191830a5d50ba8415" address="unix:///run/containerd/s/2f9d89c8bdb326891f30bb37b34a17dad96b77544bcdd84d8a68e90f74d49687" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:11:25.056043 systemd[1]: Created slice kubepods-besteffort-pod2332240e_130b_43bd_a034_c25d57510d95.slice - libcontainer container kubepods-besteffort-pod2332240e_130b_43bd_a034_c25d57510d95.slice. 
Jul 15 05:11:25.058764 kubelet[2722]: I0715 05:11:25.058681 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2332240e-130b-43bd-a034-c25d57510d95-cilium-config-path\") pod \"cilium-operator-5d85765b45-zgq9q\" (UID: \"2332240e-130b-43bd-a034-c25d57510d95\") " pod="kube-system/cilium-operator-5d85765b45-zgq9q" Jul 15 05:11:25.058829 kubelet[2722]: I0715 05:11:25.058766 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khkh\" (UniqueName: \"kubernetes.io/projected/2332240e-130b-43bd-a034-c25d57510d95-kube-api-access-5khkh\") pod \"cilium-operator-5d85765b45-zgq9q\" (UID: \"2332240e-130b-43bd-a034-c25d57510d95\") " pod="kube-system/cilium-operator-5d85765b45-zgq9q" Jul 15 05:11:25.073428 systemd[1]: Started cri-containerd-a44184954c1596fdb1a949f28ace934929bc485ff947cb2191830a5d50ba8415.scope - libcontainer container a44184954c1596fdb1a949f28ace934929bc485ff947cb2191830a5d50ba8415. 
Jul 15 05:11:25.113335 containerd[1565]: time="2025-07-15T05:11:25.113299671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhgxq,Uid:7e3cc5de-0ad0-44ce-abda-940e14e58443,Namespace:kube-system,Attempt:0,} returns sandbox id \"a44184954c1596fdb1a949f28ace934929bc485ff947cb2191830a5d50ba8415\"" Jul 15 05:11:25.114861 kubelet[2722]: E0715 05:11:25.114394 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:25.118415 containerd[1565]: time="2025-07-15T05:11:25.118391393Z" level=info msg="CreateContainer within sandbox \"a44184954c1596fdb1a949f28ace934929bc485ff947cb2191830a5d50ba8415\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 05:11:25.141658 containerd[1565]: time="2025-07-15T05:11:25.141639564Z" level=info msg="Container 36adf5ba8839f5b1a4eacb11b29adaba9981826cd83b04d0aafe09185461e37d: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:11:25.148708 containerd[1565]: time="2025-07-15T05:11:25.148687885Z" level=info msg="CreateContainer within sandbox \"a44184954c1596fdb1a949f28ace934929bc485ff947cb2191830a5d50ba8415\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"36adf5ba8839f5b1a4eacb11b29adaba9981826cd83b04d0aafe09185461e37d\"" Jul 15 05:11:25.150099 containerd[1565]: time="2025-07-15T05:11:25.149320031Z" level=info msg="StartContainer for \"36adf5ba8839f5b1a4eacb11b29adaba9981826cd83b04d0aafe09185461e37d\"" Jul 15 05:11:25.150697 containerd[1565]: time="2025-07-15T05:11:25.150679113Z" level=info msg="connecting to shim 36adf5ba8839f5b1a4eacb11b29adaba9981826cd83b04d0aafe09185461e37d" address="unix:///run/containerd/s/2f9d89c8bdb326891f30bb37b34a17dad96b77544bcdd84d8a68e90f74d49687" protocol=ttrpc version=3 Jul 15 05:11:25.185213 systemd[1]: Started cri-containerd-36adf5ba8839f5b1a4eacb11b29adaba9981826cd83b04d0aafe09185461e37d.scope - 
libcontainer container 36adf5ba8839f5b1a4eacb11b29adaba9981826cd83b04d0aafe09185461e37d. Jul 15 05:11:25.227655 containerd[1565]: time="2025-07-15T05:11:25.227624089Z" level=info msg="StartContainer for \"36adf5ba8839f5b1a4eacb11b29adaba9981826cd83b04d0aafe09185461e37d\" returns successfully" Jul 15 05:11:25.664258 kubelet[2722]: E0715 05:11:25.664210 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:25.665002 containerd[1565]: time="2025-07-15T05:11:25.664931141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-zgq9q,Uid:2332240e-130b-43bd-a034-c25d57510d95,Namespace:kube-system,Attempt:0,}" Jul 15 05:11:25.679745 containerd[1565]: time="2025-07-15T05:11:25.679588202Z" level=info msg="connecting to shim 1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df" address="unix:///run/containerd/s/c33dd7bea6db09bcb8a92b995e36319298e10d0ad098e5ed1dc37ca0a6dbbc9a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:11:25.704194 systemd[1]: Started cri-containerd-1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df.scope - libcontainer container 1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df. 
Jul 15 05:11:25.755687 containerd[1565]: time="2025-07-15T05:11:25.755642917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-zgq9q,Uid:2332240e-130b-43bd-a034-c25d57510d95,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df\"" Jul 15 05:11:25.756714 kubelet[2722]: E0715 05:11:25.756646 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:25.758772 containerd[1565]: time="2025-07-15T05:11:25.758736722Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 05:11:25.858401 kubelet[2722]: E0715 05:11:25.858355 2722 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jul 15 05:11:25.858471 kubelet[2722]: E0715 05:11:25.858458 2722 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c758d1a-b4c4-401e-885b-20402cb59af2-clustermesh-secrets podName:2c758d1a-b4c4-401e-885b-20402cb59af2 nodeName:}" failed. No retries permitted until 2025-07-15 05:11:26.358436827 +0000 UTC m=+7.309880541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/2c758d1a-b4c4-401e-885b-20402cb59af2-clustermesh-secrets") pod "cilium-6kctw" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2") : failed to sync secret cache: timed out waiting for the condition Jul 15 05:11:25.875177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount5471810.mount: Deactivated successfully. 
Jul 15 05:11:26.156001 kubelet[2722]: E0715 05:11:26.154804 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:26.175013 kubelet[2722]: I0715 05:11:26.174943 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zhgxq" podStartSLOduration=2.17492523 podStartE2EDuration="2.17492523s" podCreationTimestamp="2025-07-15 05:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:11:26.174861651 +0000 UTC m=+7.126305365" watchObservedRunningTime="2025-07-15 05:11:26.17492523 +0000 UTC m=+7.126368944" Jul 15 05:11:26.511548 kubelet[2722]: E0715 05:11:26.510707 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:26.511924 containerd[1565]: time="2025-07-15T05:11:26.511112480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6kctw,Uid:2c758d1a-b4c4-401e-885b-20402cb59af2,Namespace:kube-system,Attempt:0,}" Jul 15 05:11:26.534043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2195727003.mount: Deactivated successfully. Jul 15 05:11:26.543668 containerd[1565]: time="2025-07-15T05:11:26.543610599Z" level=info msg="connecting to shim 02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865" address="unix:///run/containerd/s/d617811375b2055f1bf4b502613cb543fc2695c6ae118a770a392593e91548e2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:11:26.569300 systemd[1]: Started cri-containerd-02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865.scope - libcontainer container 02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865. 
Jul 15 05:11:26.596609 containerd[1565]: time="2025-07-15T05:11:26.596566394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6kctw,Uid:2c758d1a-b4c4-401e-885b-20402cb59af2,Namespace:kube-system,Attempt:0,} returns sandbox id \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\"" Jul 15 05:11:26.597490 kubelet[2722]: E0715 05:11:26.597465 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:27.162001 kubelet[2722]: E0715 05:11:27.161950 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:27.400446 containerd[1565]: time="2025-07-15T05:11:27.400402730Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:27.401298 containerd[1565]: time="2025-07-15T05:11:27.401266994Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 15 05:11:27.402005 containerd[1565]: time="2025-07-15T05:11:27.401964211Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:27.403009 containerd[1565]: time="2025-07-15T05:11:27.402989053Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.644129804s" Jul 15 05:11:27.403066 containerd[1565]: time="2025-07-15T05:11:27.403054241Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 15 05:11:27.404476 containerd[1565]: time="2025-07-15T05:11:27.404324608Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 05:11:27.407306 containerd[1565]: time="2025-07-15T05:11:27.407264913Z" level=info msg="CreateContainer within sandbox \"1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 05:11:27.420541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1509327038.mount: Deactivated successfully. 
Jul 15 05:11:27.421261 containerd[1565]: time="2025-07-15T05:11:27.421143518Z" level=info msg="Container 8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:11:27.425470 containerd[1565]: time="2025-07-15T05:11:27.425429769Z" level=info msg="CreateContainer within sandbox \"1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f\"" Jul 15 05:11:27.425982 containerd[1565]: time="2025-07-15T05:11:27.425938450Z" level=info msg="StartContainer for \"8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f\"" Jul 15 05:11:27.427042 containerd[1565]: time="2025-07-15T05:11:27.426956951Z" level=info msg="connecting to shim 8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f" address="unix:///run/containerd/s/c33dd7bea6db09bcb8a92b995e36319298e10d0ad098e5ed1dc37ca0a6dbbc9a" protocol=ttrpc version=3 Jul 15 05:11:27.457194 systemd[1]: Started cri-containerd-8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f.scope - libcontainer container 8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f. 
Jul 15 05:11:27.487174 containerd[1565]: time="2025-07-15T05:11:27.487151552Z" level=info msg="StartContainer for \"8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f\" returns successfully" Jul 15 05:11:28.167642 kubelet[2722]: E0715 05:11:28.167376 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:28.206820 kubelet[2722]: I0715 05:11:28.206754 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-zgq9q" podStartSLOduration=1.560965252 podStartE2EDuration="3.206735313s" podCreationTimestamp="2025-07-15 05:11:25 +0000 UTC" firstStartedPulling="2025-07-15 05:11:25.758042206 +0000 UTC m=+6.709485920" lastFinishedPulling="2025-07-15 05:11:27.403812267 +0000 UTC m=+8.355255981" observedRunningTime="2025-07-15 05:11:28.206295201 +0000 UTC m=+9.157738905" watchObservedRunningTime="2025-07-15 05:11:28.206735313 +0000 UTC m=+9.158179027" Jul 15 05:11:28.681012 kubelet[2722]: E0715 05:11:28.680954 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:28.981763 kubelet[2722]: E0715 05:11:28.981622 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:29.170663 kubelet[2722]: E0715 05:11:29.170626 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:29.173333 kubelet[2722]: E0715 05:11:29.171850 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:29.173333 kubelet[2722]: E0715 05:11:29.172973 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:31.726862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151736853.mount: Deactivated successfully. Jul 15 05:11:33.174139 containerd[1565]: time="2025-07-15T05:11:33.173732677Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:33.175299 containerd[1565]: time="2025-07-15T05:11:33.175057631Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 15 05:11:33.176056 containerd[1565]: time="2025-07-15T05:11:33.176023339Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:11:33.177514 containerd[1565]: time="2025-07-15T05:11:33.177361094Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.772967118s" Jul 15 05:11:33.177514 containerd[1565]: time="2025-07-15T05:11:33.177413613Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 15 05:11:33.180594 containerd[1565]: time="2025-07-15T05:11:33.180541715Z" level=info msg="CreateContainer within sandbox \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 05:11:33.197109 containerd[1565]: time="2025-07-15T05:11:33.195541982Z" level=info msg="Container d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:11:33.197467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295555350.mount: Deactivated successfully. Jul 15 05:11:33.203702 containerd[1565]: time="2025-07-15T05:11:33.203660843Z" level=info msg="CreateContainer within sandbox \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\"" Jul 15 05:11:33.204492 containerd[1565]: time="2025-07-15T05:11:33.204470013Z" level=info msg="StartContainer for \"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\"" Jul 15 05:11:33.205715 containerd[1565]: time="2025-07-15T05:11:33.205696778Z" level=info msg="connecting to shim d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6" address="unix:///run/containerd/s/d617811375b2055f1bf4b502613cb543fc2695c6ae118a770a392593e91548e2" protocol=ttrpc version=3 Jul 15 05:11:33.229286 systemd[1]: Started cri-containerd-d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6.scope - libcontainer container d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6. 
Jul 15 05:11:33.260020 containerd[1565]: time="2025-07-15T05:11:33.259985666Z" level=info msg="StartContainer for \"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\" returns successfully" Jul 15 05:11:33.274877 systemd[1]: cri-containerd-d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6.scope: Deactivated successfully. Jul 15 05:11:33.280068 containerd[1565]: time="2025-07-15T05:11:33.280017302Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\" id:\"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\" pid:3189 exited_at:{seconds:1752556293 nanos:279610677}" Jul 15 05:11:33.280282 containerd[1565]: time="2025-07-15T05:11:33.280218140Z" level=info msg="received exit event container_id:\"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\" id:\"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\" pid:3189 exited_at:{seconds:1752556293 nanos:279610677}" Jul 15 05:11:33.410364 kubelet[2722]: E0715 05:11:33.410044 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:34.182952 kubelet[2722]: E0715 05:11:34.182893 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:11:34.186261 containerd[1565]: time="2025-07-15T05:11:34.186183670Z" level=info msg="CreateContainer within sandbox \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 05:11:34.192607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6-rootfs.mount: Deactivated successfully. 
Jul 15 05:11:34.200345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1057375149.mount: Deactivated successfully. Jul 15 05:11:34.202502 containerd[1565]: time="2025-07-15T05:11:34.201661405Z" level=info msg="Container e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:11:34.210496 containerd[1565]: time="2025-07-15T05:11:34.210439565Z" level=info msg="CreateContainer within sandbox \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\"" Jul 15 05:11:34.211797 containerd[1565]: time="2025-07-15T05:11:34.211595642Z" level=info msg="StartContainer for \"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\"" Jul 15 05:11:34.215252 containerd[1565]: time="2025-07-15T05:11:34.215231390Z" level=info msg="connecting to shim e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13" address="unix:///run/containerd/s/d617811375b2055f1bf4b502613cb543fc2695c6ae118a770a392593e91548e2" protocol=ttrpc version=3 Jul 15 05:11:34.240190 systemd[1]: Started cri-containerd-e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13.scope - libcontainer container e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13. Jul 15 05:11:34.269833 containerd[1565]: time="2025-07-15T05:11:34.269800130Z" level=info msg="StartContainer for \"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\" returns successfully" Jul 15 05:11:34.287743 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 05:11:34.288273 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:11:34.288568 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:11:34.291940 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 15 05:11:34.294162 systemd[1]: cri-containerd-e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13.scope: Deactivated successfully.
Jul 15 05:11:34.295903 containerd[1565]: time="2025-07-15T05:11:34.295862774Z" level=info msg="received exit event container_id:\"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\" id:\"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\" pid:3234 exited_at:{seconds:1752556294 nanos:295423710}"
Jul 15 05:11:34.296524 containerd[1565]: time="2025-07-15T05:11:34.296064752Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\" id:\"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\" pid:3234 exited_at:{seconds:1752556294 nanos:295423710}"
Jul 15 05:11:34.308242 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 05:11:35.191568 kubelet[2722]: E0715 05:11:35.189422 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:35.193316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13-rootfs.mount: Deactivated successfully.
Jul 15 05:11:35.200917 containerd[1565]: time="2025-07-15T05:11:35.200864061Z" level=info msg="CreateContainer within sandbox \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 15 05:11:35.225098 containerd[1565]: time="2025-07-15T05:11:35.224015226Z" level=info msg="Container 8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:11:35.227950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2699375227.mount: Deactivated successfully.
Jul 15 05:11:35.236572 containerd[1565]: time="2025-07-15T05:11:35.236519103Z" level=info msg="CreateContainer within sandbox \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\""
Jul 15 05:11:35.237168 containerd[1565]: time="2025-07-15T05:11:35.237094407Z" level=info msg="StartContainer for \"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\""
Jul 15 05:11:35.238469 containerd[1565]: time="2025-07-15T05:11:35.238436893Z" level=info msg="connecting to shim 8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca" address="unix:///run/containerd/s/d617811375b2055f1bf4b502613cb543fc2695c6ae118a770a392593e91548e2" protocol=ttrpc version=3
Jul 15 05:11:35.260207 systemd[1]: Started cri-containerd-8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca.scope - libcontainer container 8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca.
Jul 15 05:11:35.300998 containerd[1565]: time="2025-07-15T05:11:35.300952191Z" level=info msg="StartContainer for \"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\" returns successfully"
Jul 15 05:11:35.304500 systemd[1]: cri-containerd-8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca.scope: Deactivated successfully.
Jul 15 05:11:35.307305 containerd[1565]: time="2025-07-15T05:11:35.307277694Z" level=info msg="received exit event container_id:\"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\" id:\"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\" pid:3281 exited_at:{seconds:1752556295 nanos:306392763}"
Jul 15 05:11:35.307520 containerd[1565]: time="2025-07-15T05:11:35.307477672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\" id:\"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\" pid:3281 exited_at:{seconds:1752556295 nanos:306392763}"
Jul 15 05:11:35.329553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca-rootfs.mount: Deactivated successfully.
Jul 15 05:11:36.196598 kubelet[2722]: E0715 05:11:36.195597 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:36.201102 containerd[1565]: time="2025-07-15T05:11:36.200665376Z" level=info msg="CreateContainer within sandbox \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 05:11:36.212198 containerd[1565]: time="2025-07-15T05:11:36.211830437Z" level=info msg="Container 39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:11:36.227118 containerd[1565]: time="2025-07-15T05:11:36.224951027Z" level=info msg="CreateContainer within sandbox \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\""
Jul 15 05:11:36.229278 containerd[1565]: time="2025-07-15T05:11:36.229260184Z" level=info msg="StartContainer for \"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\""
Jul 15 05:11:36.230500 containerd[1565]: time="2025-07-15T05:11:36.230472322Z" level=info msg="connecting to shim 39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00" address="unix:///run/containerd/s/d617811375b2055f1bf4b502613cb543fc2695c6ae118a770a392593e91548e2" protocol=ttrpc version=3
Jul 15 05:11:36.254244 systemd[1]: Started cri-containerd-39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00.scope - libcontainer container 39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00.
Jul 15 05:11:36.284028 systemd[1]: cri-containerd-39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00.scope: Deactivated successfully.
Jul 15 05:11:36.285173 containerd[1565]: time="2025-07-15T05:11:36.284994935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\" id:\"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\" pid:3319 exited_at:{seconds:1752556296 nanos:284256282}"
Jul 15 05:11:36.286802 containerd[1565]: time="2025-07-15T05:11:36.286781706Z" level=info msg="received exit event container_id:\"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\" id:\"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\" pid:3319 exited_at:{seconds:1752556296 nanos:284256282}"
Jul 15 05:11:36.294499 containerd[1565]: time="2025-07-15T05:11:36.294461061Z" level=info msg="StartContainer for \"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\" returns successfully"
Jul 15 05:11:36.301114 update_engine[1542]: I20250715 05:11:36.300107 1542 update_attempter.cc:509] Updating boot flags...
Jul 15 05:11:36.310770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00-rootfs.mount: Deactivated successfully.
Jul 15 05:11:37.201509 kubelet[2722]: E0715 05:11:37.201453 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:37.205148 containerd[1565]: time="2025-07-15T05:11:37.205114714Z" level=info msg="CreateContainer within sandbox \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 05:11:37.220111 containerd[1565]: time="2025-07-15T05:11:37.219654280Z" level=info msg="Container e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:11:37.231462 containerd[1565]: time="2025-07-15T05:11:37.231425492Z" level=info msg="CreateContainer within sandbox \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\""
Jul 15 05:11:37.232213 containerd[1565]: time="2025-07-15T05:11:37.232176725Z" level=info msg="StartContainer for \"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\""
Jul 15 05:11:37.232886 containerd[1565]: time="2025-07-15T05:11:37.232861958Z" level=info msg="connecting to shim e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e" address="unix:///run/containerd/s/d617811375b2055f1bf4b502613cb543fc2695c6ae118a770a392593e91548e2" protocol=ttrpc version=3
Jul 15 05:11:37.274207 systemd[1]: Started cri-containerd-e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e.scope - libcontainer container e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e.
Jul 15 05:11:37.313110 containerd[1565]: time="2025-07-15T05:11:37.312983332Z" level=info msg="StartContainer for \"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\" returns successfully"
Jul 15 05:11:37.382985 containerd[1565]: time="2025-07-15T05:11:37.382931540Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\" id:\"0c00064ed8efce1032ea3cc6936033087164cb9e3dc9e8102b90255bf9595da1\" pid:3408 exited_at:{seconds:1752556297 nanos:382582633}"
Jul 15 05:11:37.443340 kubelet[2722]: I0715 05:11:37.442146 2722 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 15 05:11:37.484412 systemd[1]: Created slice kubepods-burstable-pod928c65c8_320d_4677_9cd4_25fcccd3cce5.slice - libcontainer container kubepods-burstable-pod928c65c8_320d_4677_9cd4_25fcccd3cce5.slice.
Jul 15 05:11:37.491035 systemd[1]: Created slice kubepods-burstable-pod897b879b_7707_406f_a850_0cdbbc709e5a.slice - libcontainer container kubepods-burstable-pod897b879b_7707_406f_a850_0cdbbc709e5a.slice.
Jul 15 05:11:37.546406 kubelet[2722]: I0715 05:11:37.546374 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79dd8\" (UniqueName: \"kubernetes.io/projected/897b879b-7707-406f-a850-0cdbbc709e5a-kube-api-access-79dd8\") pod \"coredns-7c65d6cfc9-5m5xn\" (UID: \"897b879b-7707-406f-a850-0cdbbc709e5a\") " pod="kube-system/coredns-7c65d6cfc9-5m5xn"
Jul 15 05:11:37.546623 kubelet[2722]: I0715 05:11:37.546609 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2wwf\" (UniqueName: \"kubernetes.io/projected/928c65c8-320d-4677-9cd4-25fcccd3cce5-kube-api-access-c2wwf\") pod \"coredns-7c65d6cfc9-m8xfp\" (UID: \"928c65c8-320d-4677-9cd4-25fcccd3cce5\") " pod="kube-system/coredns-7c65d6cfc9-m8xfp"
Jul 15 05:11:37.546729 kubelet[2722]: I0715 05:11:37.546717 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/897b879b-7707-406f-a850-0cdbbc709e5a-config-volume\") pod \"coredns-7c65d6cfc9-5m5xn\" (UID: \"897b879b-7707-406f-a850-0cdbbc709e5a\") " pod="kube-system/coredns-7c65d6cfc9-5m5xn"
Jul 15 05:11:37.546833 kubelet[2722]: I0715 05:11:37.546821 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/928c65c8-320d-4677-9cd4-25fcccd3cce5-config-volume\") pod \"coredns-7c65d6cfc9-m8xfp\" (UID: \"928c65c8-320d-4677-9cd4-25fcccd3cce5\") " pod="kube-system/coredns-7c65d6cfc9-m8xfp"
Jul 15 05:11:37.794209 kubelet[2722]: E0715 05:11:37.793703 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:37.795400 containerd[1565]: time="2025-07-15T05:11:37.795319671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m8xfp,Uid:928c65c8-320d-4677-9cd4-25fcccd3cce5,Namespace:kube-system,Attempt:0,}"
Jul 15 05:11:37.797148 kubelet[2722]: E0715 05:11:37.797041 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:37.800136 containerd[1565]: time="2025-07-15T05:11:37.799287885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5m5xn,Uid:897b879b-7707-406f-a850-0cdbbc709e5a,Namespace:kube-system,Attempt:0,}"
Jul 15 05:11:38.208878 kubelet[2722]: E0715 05:11:38.207959 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:38.234772 kubelet[2722]: I0715 05:11:38.234707 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6kctw" podStartSLOduration=7.655720628 podStartE2EDuration="14.234691421s" podCreationTimestamp="2025-07-15 05:11:24 +0000 UTC" firstStartedPulling="2025-07-15 05:11:26.599539606 +0000 UTC m=+7.550983320" lastFinishedPulling="2025-07-15 05:11:33.178510399 +0000 UTC m=+14.129954113" observedRunningTime="2025-07-15 05:11:38.23364233 +0000 UTC m=+19.185086044" watchObservedRunningTime="2025-07-15 05:11:38.234691421 +0000 UTC m=+19.186135125"
Jul 15 05:11:39.210040 kubelet[2722]: E0715 05:11:39.209992 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:39.536830 systemd-networkd[1443]: cilium_host: Link UP
Jul 15 05:11:39.538185 systemd-networkd[1443]: cilium_net: Link UP
Jul 15 05:11:39.538368 systemd-networkd[1443]: cilium_net: Gained carrier
Jul 15 05:11:39.538523 systemd-networkd[1443]: cilium_host: Gained carrier
Jul 15 05:11:39.649763 systemd-networkd[1443]: cilium_vxlan: Link UP
Jul 15 05:11:39.649851 systemd-networkd[1443]: cilium_vxlan: Gained carrier
Jul 15 05:11:39.831138 kernel: NET: Registered PF_ALG protocol family
Jul 15 05:11:40.132477 systemd-networkd[1443]: cilium_host: Gained IPv6LL
Jul 15 05:11:40.213155 kubelet[2722]: E0715 05:11:40.212379 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:40.405911 systemd-networkd[1443]: lxc_health: Link UP
Jul 15 05:11:40.408147 systemd-networkd[1443]: lxc_health: Gained carrier
Jul 15 05:11:40.452552 systemd-networkd[1443]: cilium_net: Gained IPv6LL
Jul 15 05:11:40.853596 kernel: eth0: renamed from tmp7bbcd
Jul 15 05:11:40.860468 systemd-networkd[1443]: lxcc9f28ba2b2c0: Link UP
Jul 15 05:11:40.862472 systemd-networkd[1443]: lxcc9f28ba2b2c0: Gained carrier
Jul 15 05:11:40.863133 systemd-networkd[1443]: lxc2a12df6e1cc6: Link UP
Jul 15 05:11:40.868201 kernel: eth0: renamed from tmp86ebe
Jul 15 05:11:40.878967 systemd-networkd[1443]: lxc2a12df6e1cc6: Gained carrier
Jul 15 05:11:41.215364 kubelet[2722]: E0715 05:11:41.215234 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:41.476494 systemd-networkd[1443]: lxc_health: Gained IPv6LL
Jul 15 05:11:41.604198 systemd-networkd[1443]: cilium_vxlan: Gained IPv6LL
Jul 15 05:11:41.988254 systemd-networkd[1443]: lxcc9f28ba2b2c0: Gained IPv6LL
Jul 15 05:11:42.244267 systemd-networkd[1443]: lxc2a12df6e1cc6: Gained IPv6LL
Jul 15 05:11:43.857341 containerd[1565]: time="2025-07-15T05:11:43.857167868Z" level=info msg="connecting to shim 86ebe58b7745f45b77da9e3491c3d17250dfcc017efcbea70bdfe8ee820fa5f4" address="unix:///run/containerd/s/c43778b2f5e534863f9e1c7d3c23950bd2bf10295b61618b535b73913681b2bb" namespace=k8s.io protocol=ttrpc version=3
Jul 15 05:11:43.867670 containerd[1565]: time="2025-07-15T05:11:43.867243189Z" level=info msg="connecting to shim 7bbcdaf3fc26a202f22707f2236e66a8f1d300dd596e523e19945e11b706fba5" address="unix:///run/containerd/s/02ddabfeb48e4df3d438aa6f82c1aaaed5b9b206f57b564934d25211bdf6a7fc" namespace=k8s.io protocol=ttrpc version=3
Jul 15 05:11:43.916207 systemd[1]: Started cri-containerd-7bbcdaf3fc26a202f22707f2236e66a8f1d300dd596e523e19945e11b706fba5.scope - libcontainer container 7bbcdaf3fc26a202f22707f2236e66a8f1d300dd596e523e19945e11b706fba5.
Jul 15 05:11:43.918800 systemd[1]: Started cri-containerd-86ebe58b7745f45b77da9e3491c3d17250dfcc017efcbea70bdfe8ee820fa5f4.scope - libcontainer container 86ebe58b7745f45b77da9e3491c3d17250dfcc017efcbea70bdfe8ee820fa5f4.
Jul 15 05:11:44.008113 containerd[1565]: time="2025-07-15T05:11:44.007071134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5m5xn,Uid:897b879b-7707-406f-a850-0cdbbc709e5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"86ebe58b7745f45b77da9e3491c3d17250dfcc017efcbea70bdfe8ee820fa5f4\""
Jul 15 05:11:44.008832 kubelet[2722]: E0715 05:11:44.008812 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:44.013932 containerd[1565]: time="2025-07-15T05:11:44.013870537Z" level=info msg="CreateContainer within sandbox \"86ebe58b7745f45b77da9e3491c3d17250dfcc017efcbea70bdfe8ee820fa5f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 05:11:44.018269 containerd[1565]: time="2025-07-15T05:11:44.018118294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m8xfp,Uid:928c65c8-320d-4677-9cd4-25fcccd3cce5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bbcdaf3fc26a202f22707f2236e66a8f1d300dd596e523e19945e11b706fba5\""
Jul 15 05:11:44.019576 kubelet[2722]: E0715 05:11:44.019454 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:44.022262 containerd[1565]: time="2025-07-15T05:11:44.022182911Z" level=info msg="CreateContainer within sandbox \"7bbcdaf3fc26a202f22707f2236e66a8f1d300dd596e523e19945e11b706fba5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 05:11:44.026902 containerd[1565]: time="2025-07-15T05:11:44.026491818Z" level=info msg="Container 9fa70424973457ce029881b787efe6c926d5cf85afda2cf22e8a2ddd81a12f6f: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:11:44.033702 containerd[1565]: time="2025-07-15T05:11:44.033419529Z" level=info msg="CreateContainer within sandbox \"86ebe58b7745f45b77da9e3491c3d17250dfcc017efcbea70bdfe8ee820fa5f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9fa70424973457ce029881b787efe6c926d5cf85afda2cf22e8a2ddd81a12f6f\""
Jul 15 05:11:44.033911 containerd[1565]: time="2025-07-15T05:11:44.033877087Z" level=info msg="StartContainer for \"9fa70424973457ce029881b787efe6c926d5cf85afda2cf22e8a2ddd81a12f6f\""
Jul 15 05:11:44.034602 containerd[1565]: time="2025-07-15T05:11:44.034577984Z" level=info msg="connecting to shim 9fa70424973457ce029881b787efe6c926d5cf85afda2cf22e8a2ddd81a12f6f" address="unix:///run/containerd/s/c43778b2f5e534863f9e1c7d3c23950bd2bf10295b61618b535b73913681b2bb" protocol=ttrpc version=3
Jul 15 05:11:44.035106 containerd[1565]: time="2025-07-15T05:11:44.035034311Z" level=info msg="Container 9f8576195261c2c3aedae9fe3d7bc4ea27e12fb2b1d450f2a502699742cfa105: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:11:44.043831 containerd[1565]: time="2025-07-15T05:11:44.043729983Z" level=info msg="CreateContainer within sandbox \"7bbcdaf3fc26a202f22707f2236e66a8f1d300dd596e523e19945e11b706fba5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f8576195261c2c3aedae9fe3d7bc4ea27e12fb2b1d450f2a502699742cfa105\""
Jul 15 05:11:44.044881 containerd[1565]: time="2025-07-15T05:11:44.044844517Z" level=info msg="StartContainer for \"9f8576195261c2c3aedae9fe3d7bc4ea27e12fb2b1d450f2a502699742cfa105\""
Jul 15 05:11:44.046342 containerd[1565]: time="2025-07-15T05:11:44.046304769Z" level=info msg="connecting to shim 9f8576195261c2c3aedae9fe3d7bc4ea27e12fb2b1d450f2a502699742cfa105" address="unix:///run/containerd/s/02ddabfeb48e4df3d438aa6f82c1aaaed5b9b206f57b564934d25211bdf6a7fc" protocol=ttrpc version=3
Jul 15 05:11:44.061283 systemd[1]: Started cri-containerd-9fa70424973457ce029881b787efe6c926d5cf85afda2cf22e8a2ddd81a12f6f.scope - libcontainer container 9fa70424973457ce029881b787efe6c926d5cf85afda2cf22e8a2ddd81a12f6f.
Jul 15 05:11:44.071302 systemd[1]: Started cri-containerd-9f8576195261c2c3aedae9fe3d7bc4ea27e12fb2b1d450f2a502699742cfa105.scope - libcontainer container 9f8576195261c2c3aedae9fe3d7bc4ea27e12fb2b1d450f2a502699742cfa105.
Jul 15 05:11:44.108536 containerd[1565]: time="2025-07-15T05:11:44.108375688Z" level=info msg="StartContainer for \"9fa70424973457ce029881b787efe6c926d5cf85afda2cf22e8a2ddd81a12f6f\" returns successfully"
Jul 15 05:11:44.120216 containerd[1565]: time="2025-07-15T05:11:44.120159434Z" level=info msg="StartContainer for \"9f8576195261c2c3aedae9fe3d7bc4ea27e12fb2b1d450f2a502699742cfa105\" returns successfully"
Jul 15 05:11:44.222521 kubelet[2722]: E0715 05:11:44.222470 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:44.228149 kubelet[2722]: E0715 05:11:44.228123 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:44.236696 kubelet[2722]: I0715 05:11:44.236646 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5m5xn" podStartSLOduration=19.236633936 podStartE2EDuration="19.236633936s" podCreationTimestamp="2025-07-15 05:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:11:44.235102814 +0000 UTC m=+25.186546528" watchObservedRunningTime="2025-07-15 05:11:44.236633936 +0000 UTC m=+25.188077660"
Jul 15 05:11:45.227629 kubelet[2722]: E0715 05:11:45.227361 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:45.227629 kubelet[2722]: E0715 05:11:45.227483 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:46.232451 kubelet[2722]: E0715 05:11:46.232135 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:53.063751 kubelet[2722]: I0715 05:11:53.063599 2722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 05:11:53.064456 kubelet[2722]: E0715 05:11:53.063971 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:11:53.076702 kubelet[2722]: I0715 05:11:53.076668 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-m8xfp" podStartSLOduration=28.076655144 podStartE2EDuration="28.076655144s" podCreationTimestamp="2025-07-15 05:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:11:44.257157833 +0000 UTC m=+25.208601547" watchObservedRunningTime="2025-07-15 05:11:53.076655144 +0000 UTC m=+34.028098848"
Jul 15 05:11:53.244992 kubelet[2722]: E0715 05:11:53.244967 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:12:33.119152 kubelet[2722]: E0715 05:12:33.118381 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:12:35.119239 kubelet[2722]: E0715 05:12:35.118779 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:12:35.120025 kubelet[2722]: E0715 05:12:35.119741 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:12:40.118363 kubelet[2722]: E0715 05:12:40.118325 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:12:49.119461 kubelet[2722]: E0715 05:12:49.119053 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:13:01.338963 systemd[1]: Started sshd@7-172.236.104.60:22-139.178.68.195:32934.service - OpenSSH per-connection server daemon (139.178.68.195:32934).
Jul 15 05:13:01.691135 sshd[4045]: Accepted publickey for core from 139.178.68.195 port 32934 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:01.692693 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:01.698283 systemd-logind[1540]: New session 8 of user core.
Jul 15 05:13:01.705224 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 15 05:13:02.021442 sshd[4048]: Connection closed by 139.178.68.195 port 32934
Jul 15 05:13:02.022185 sshd-session[4045]: pam_unix(sshd:session): session closed for user core
Jul 15 05:13:02.027121 systemd[1]: sshd@7-172.236.104.60:22-139.178.68.195:32934.service: Deactivated successfully.
Jul 15 05:13:02.029318 systemd[1]: session-8.scope: Deactivated successfully.
Jul 15 05:13:02.030791 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit.
Jul 15 05:13:02.033594 systemd-logind[1540]: Removed session 8.
Jul 15 05:13:07.089006 systemd[1]: Started sshd@8-172.236.104.60:22-139.178.68.195:32936.service - OpenSSH per-connection server daemon (139.178.68.195:32936).
Jul 15 05:13:07.431530 sshd[4061]: Accepted publickey for core from 139.178.68.195 port 32936 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:07.432839 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:07.438289 systemd-logind[1540]: New session 9 of user core.
Jul 15 05:13:07.443275 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 15 05:13:07.730075 sshd[4064]: Connection closed by 139.178.68.195 port 32936
Jul 15 05:13:07.730764 sshd-session[4061]: pam_unix(sshd:session): session closed for user core
Jul 15 05:13:07.735176 systemd-logind[1540]: Session 9 logged out. Waiting for processes to exit.
Jul 15 05:13:07.735208 systemd[1]: sshd@8-172.236.104.60:22-139.178.68.195:32936.service: Deactivated successfully.
Jul 15 05:13:07.737664 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 05:13:07.739359 systemd-logind[1540]: Removed session 9.
Jul 15 05:13:08.118195 kubelet[2722]: E0715 05:13:08.118162 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:13:11.119018 kubelet[2722]: E0715 05:13:11.118225 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:13:12.118322 kubelet[2722]: E0715 05:13:12.118283 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:13:12.795562 systemd[1]: Started sshd@9-172.236.104.60:22-139.178.68.195:52314.service - OpenSSH per-connection server daemon (139.178.68.195:52314).
Jul 15 05:13:13.147932 sshd[4077]: Accepted publickey for core from 139.178.68.195 port 52314 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:13.154160 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:13.160132 systemd-logind[1540]: New session 10 of user core.
Jul 15 05:13:13.164221 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 15 05:13:13.455299 sshd[4080]: Connection closed by 139.178.68.195 port 52314
Jul 15 05:13:13.455865 sshd-session[4077]: pam_unix(sshd:session): session closed for user core
Jul 15 05:13:13.461003 systemd[1]: sshd@9-172.236.104.60:22-139.178.68.195:52314.service: Deactivated successfully.
Jul 15 05:13:13.463628 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 05:13:13.464563 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit.
Jul 15 05:13:13.466450 systemd-logind[1540]: Removed session 10.
Jul 15 05:13:18.518943 systemd[1]: Started sshd@10-172.236.104.60:22-139.178.68.195:52324.service - OpenSSH per-connection server daemon (139.178.68.195:52324).
Jul 15 05:13:18.866618 sshd[4093]: Accepted publickey for core from 139.178.68.195 port 52324 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:18.868024 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:18.873346 systemd-logind[1540]: New session 11 of user core.
Jul 15 05:13:18.882214 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 15 05:13:19.173788 sshd[4096]: Connection closed by 139.178.68.195 port 52324
Jul 15 05:13:19.175252 sshd-session[4093]: pam_unix(sshd:session): session closed for user core
Jul 15 05:13:19.180007 systemd[1]: sshd@10-172.236.104.60:22-139.178.68.195:52324.service: Deactivated successfully.
Jul 15 05:13:19.180026 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit.
Jul 15 05:13:19.183596 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 05:13:19.186532 systemd-logind[1540]: Removed session 11.
Jul 15 05:13:19.234628 systemd[1]: Started sshd@11-172.236.104.60:22-139.178.68.195:52340.service - OpenSSH per-connection server daemon (139.178.68.195:52340).
Jul 15 05:13:19.583798 sshd[4111]: Accepted publickey for core from 139.178.68.195 port 52340 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:19.585295 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:19.590284 systemd-logind[1540]: New session 12 of user core.
Jul 15 05:13:19.597232 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 15 05:13:19.916450 sshd[4114]: Connection closed by 139.178.68.195 port 52340
Jul 15 05:13:19.917236 sshd-session[4111]: pam_unix(sshd:session): session closed for user core
Jul 15 05:13:19.923169 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit.
Jul 15 05:13:19.923962 systemd[1]: sshd@11-172.236.104.60:22-139.178.68.195:52340.service: Deactivated successfully.
Jul 15 05:13:19.928717 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 05:13:19.930650 systemd-logind[1540]: Removed session 12.
Jul 15 05:13:19.978967 systemd[1]: Started sshd@12-172.236.104.60:22-139.178.68.195:52350.service - OpenSSH per-connection server daemon (139.178.68.195:52350).
Jul 15 05:13:20.324716 sshd[4124]: Accepted publickey for core from 139.178.68.195 port 52350 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:20.326010 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:20.331194 systemd-logind[1540]: New session 13 of user core.
Jul 15 05:13:20.336210 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 15 05:13:20.632867 sshd[4127]: Connection closed by 139.178.68.195 port 52350 Jul 15 05:13:20.634659 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Jul 15 05:13:20.640304 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit. Jul 15 05:13:20.640754 systemd[1]: sshd@12-172.236.104.60:22-139.178.68.195:52350.service: Deactivated successfully. Jul 15 05:13:20.643426 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 05:13:20.645537 systemd-logind[1540]: Removed session 13. Jul 15 05:13:25.696529 systemd[1]: Started sshd@13-172.236.104.60:22-139.178.68.195:44650.service - OpenSSH per-connection server daemon (139.178.68.195:44650). Jul 15 05:13:26.037722 sshd[4141]: Accepted publickey for core from 139.178.68.195 port 44650 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:13:26.039305 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:13:26.044146 systemd-logind[1540]: New session 14 of user core. Jul 15 05:13:26.051193 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 05:13:26.342172 sshd[4144]: Connection closed by 139.178.68.195 port 44650 Jul 15 05:13:26.343099 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Jul 15 05:13:26.347725 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit. Jul 15 05:13:26.348587 systemd[1]: sshd@13-172.236.104.60:22-139.178.68.195:44650.service: Deactivated successfully. Jul 15 05:13:26.350806 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 05:13:26.352883 systemd-logind[1540]: Removed session 14. Jul 15 05:13:26.405347 systemd[1]: Started sshd@14-172.236.104.60:22-139.178.68.195:44658.service - OpenSSH per-connection server daemon (139.178.68.195:44658). 
Jul 15 05:13:26.755507 sshd[4157]: Accepted publickey for core from 139.178.68.195 port 44658 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:26.757127 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:26.761675 systemd-logind[1540]: New session 15 of user core.
Jul 15 05:13:26.766218 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 05:13:27.079394 sshd[4160]: Connection closed by 139.178.68.195 port 44658
Jul 15 05:13:27.079858 sshd-session[4157]: pam_unix(sshd:session): session closed for user core
Jul 15 05:13:27.084378 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit.
Jul 15 05:13:27.085056 systemd[1]: sshd@14-172.236.104.60:22-139.178.68.195:44658.service: Deactivated successfully.
Jul 15 05:13:27.087551 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 05:13:27.089365 systemd-logind[1540]: Removed session 15.
Jul 15 05:13:27.142500 systemd[1]: Started sshd@15-172.236.104.60:22-139.178.68.195:44670.service - OpenSSH per-connection server daemon (139.178.68.195:44670).
Jul 15 05:13:27.485011 sshd[4170]: Accepted publickey for core from 139.178.68.195 port 44670 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:27.486601 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:27.491384 systemd-logind[1540]: New session 16 of user core.
Jul 15 05:13:27.508220 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 05:13:28.795966 sshd[4173]: Connection closed by 139.178.68.195 port 44670
Jul 15 05:13:28.797243 sshd-session[4170]: pam_unix(sshd:session): session closed for user core
Jul 15 05:13:28.801776 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit.
Jul 15 05:13:28.802400 systemd[1]: sshd@15-172.236.104.60:22-139.178.68.195:44670.service: Deactivated successfully.
Jul 15 05:13:28.804908 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 05:13:28.807465 systemd-logind[1540]: Removed session 16.
Jul 15 05:13:28.857670 systemd[1]: Started sshd@16-172.236.104.60:22-139.178.68.195:44674.service - OpenSSH per-connection server daemon (139.178.68.195:44674).
Jul 15 05:13:29.200722 sshd[4190]: Accepted publickey for core from 139.178.68.195 port 44674 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:29.202287 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:29.207273 systemd-logind[1540]: New session 17 of user core.
Jul 15 05:13:29.211194 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 05:13:29.611558 sshd[4193]: Connection closed by 139.178.68.195 port 44674
Jul 15 05:13:29.612233 sshd-session[4190]: pam_unix(sshd:session): session closed for user core
Jul 15 05:13:29.616913 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit.
Jul 15 05:13:29.617600 systemd[1]: sshd@16-172.236.104.60:22-139.178.68.195:44674.service: Deactivated successfully.
Jul 15 05:13:29.620692 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 05:13:29.622146 systemd-logind[1540]: Removed session 17.
Jul 15 05:13:29.668588 systemd[1]: Started sshd@17-172.236.104.60:22-139.178.68.195:44688.service - OpenSSH per-connection server daemon (139.178.68.195:44688).
Jul 15 05:13:30.014717 sshd[4203]: Accepted publickey for core from 139.178.68.195 port 44688 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:30.016225 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:30.021383 systemd-logind[1540]: New session 18 of user core.
Jul 15 05:13:30.027201 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 05:13:30.312251 sshd[4206]: Connection closed by 139.178.68.195 port 44688
Jul 15 05:13:30.312823 sshd-session[4203]: pam_unix(sshd:session): session closed for user core
Jul 15 05:13:30.316888 systemd[1]: sshd@17-172.236.104.60:22-139.178.68.195:44688.service: Deactivated successfully.
Jul 15 05:13:30.318705 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 05:13:30.319675 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit.
Jul 15 05:13:30.320794 systemd-logind[1540]: Removed session 18.
Jul 15 05:13:35.376592 systemd[1]: Started sshd@18-172.236.104.60:22-139.178.68.195:35186.service - OpenSSH per-connection server daemon (139.178.68.195:35186).
Jul 15 05:13:35.722286 sshd[4221]: Accepted publickey for core from 139.178.68.195 port 35186 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:35.723505 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:35.727456 systemd-logind[1540]: New session 19 of user core.
Jul 15 05:13:35.736214 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 05:13:36.019727 sshd[4224]: Connection closed by 139.178.68.195 port 35186
Jul 15 05:13:36.020410 sshd-session[4221]: pam_unix(sshd:session): session closed for user core
Jul 15 05:13:36.024858 systemd[1]: sshd@18-172.236.104.60:22-139.178.68.195:35186.service: Deactivated successfully.
Jul 15 05:13:36.026786 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 05:13:36.027900 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit.
Jul 15 05:13:36.029295 systemd-logind[1540]: Removed session 19.
Jul 15 05:13:41.086758 systemd[1]: Started sshd@19-172.236.104.60:22-139.178.68.195:39366.service - OpenSSH per-connection server daemon (139.178.68.195:39366).
Jul 15 05:13:41.119354 kubelet[2722]: E0715 05:13:41.118235 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:13:41.430157 sshd[4236]: Accepted publickey for core from 139.178.68.195 port 39366 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:41.431706 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:41.436137 systemd-logind[1540]: New session 20 of user core.
Jul 15 05:13:41.441206 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 05:13:41.736251 sshd[4239]: Connection closed by 139.178.68.195 port 39366
Jul 15 05:13:41.737969 sshd-session[4236]: pam_unix(sshd:session): session closed for user core
Jul 15 05:13:41.742704 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit.
Jul 15 05:13:41.743408 systemd[1]: sshd@19-172.236.104.60:22-139.178.68.195:39366.service: Deactivated successfully.
Jul 15 05:13:41.745757 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 05:13:41.747820 systemd-logind[1540]: Removed session 20.
Jul 15 05:13:44.118309 kubelet[2722]: E0715 05:13:44.118268 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:13:46.806209 systemd[1]: Started sshd@20-172.236.104.60:22-139.178.68.195:39382.service - OpenSSH per-connection server daemon (139.178.68.195:39382).
Jul 15 05:13:47.148171 sshd[4251]: Accepted publickey for core from 139.178.68.195 port 39382 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:47.149068 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:47.154293 systemd-logind[1540]: New session 21 of user core.
Jul 15 05:13:47.164223 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 05:13:47.448490 sshd[4254]: Connection closed by 139.178.68.195 port 39382
Jul 15 05:13:47.450171 sshd-session[4251]: pam_unix(sshd:session): session closed for user core
Jul 15 05:13:47.453966 systemd-logind[1540]: Session 21 logged out. Waiting for processes to exit.
Jul 15 05:13:47.454608 systemd[1]: sshd@20-172.236.104.60:22-139.178.68.195:39382.service: Deactivated successfully.
Jul 15 05:13:47.456829 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 05:13:47.458820 systemd-logind[1540]: Removed session 21.
Jul 15 05:13:47.512457 systemd[1]: Started sshd@21-172.236.104.60:22-139.178.68.195:39398.service - OpenSSH per-connection server daemon (139.178.68.195:39398).
Jul 15 05:13:47.862304 sshd[4266]: Accepted publickey for core from 139.178.68.195 port 39398 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:47.863515 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:47.868129 systemd-logind[1540]: New session 22 of user core.
Jul 15 05:13:47.874213 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 05:13:49.349589 containerd[1565]: time="2025-07-15T05:13:49.349032974Z" level=info msg="StopContainer for \"8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f\" with timeout 30 (s)"
Jul 15 05:13:49.351830 containerd[1565]: time="2025-07-15T05:13:49.351796453Z" level=info msg="Stop container \"8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f\" with signal terminated"
Jul 15 05:13:49.369181 containerd[1565]: time="2025-07-15T05:13:49.369134318Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 05:13:49.371621 systemd[1]: cri-containerd-8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f.scope: Deactivated successfully.
Jul 15 05:13:49.374335 containerd[1565]: time="2025-07-15T05:13:49.372763534Z" level=info msg="received exit event container_id:\"8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f\" id:\"8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f\" pid:3124 exited_at:{seconds:1752556429 nanos:372318666}"
Jul 15 05:13:49.375028 containerd[1565]: time="2025-07-15T05:13:49.374984086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f\" id:\"8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f\" pid:3124 exited_at:{seconds:1752556429 nanos:372318666}"
Jul 15 05:13:49.377533 containerd[1565]: time="2025-07-15T05:13:49.377500496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\" id:\"ba5d8ae4a9754c64b43f8ed1e2ae1faa52897e4cda2e4e11a23bfd51c57f2859\" pid:4287 exited_at:{seconds:1752556429 nanos:377240138}"
Jul 15 05:13:49.380665 containerd[1565]: time="2025-07-15T05:13:49.380624585Z" level=info msg="StopContainer for \"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\" with timeout 2 (s)"
Jul 15 05:13:49.380982 containerd[1565]: time="2025-07-15T05:13:49.380916764Z" level=info msg="Stop container \"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\" with signal terminated"
Jul 15 05:13:49.394733 systemd-networkd[1443]: lxc_health: Link DOWN
Jul 15 05:13:49.394746 systemd-networkd[1443]: lxc_health: Lost carrier
Jul 15 05:13:49.411982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f-rootfs.mount: Deactivated successfully.
Jul 15 05:13:49.420220 systemd[1]: cri-containerd-e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e.scope: Deactivated successfully.
Jul 15 05:13:49.420746 systemd[1]: cri-containerd-e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e.scope: Consumed 5.757s CPU time, 123M memory peak, 144K read from disk, 13.3M written to disk.
Jul 15 05:13:49.421854 containerd[1565]: time="2025-07-15T05:13:49.421203722Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\" id:\"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\" pid:3378 exited_at:{seconds:1752556429 nanos:420030066}"
Jul 15 05:13:49.421854 containerd[1565]: time="2025-07-15T05:13:49.421526460Z" level=info msg="received exit event container_id:\"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\" id:\"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\" pid:3378 exited_at:{seconds:1752556429 nanos:420030066}"
Jul 15 05:13:49.427429 containerd[1565]: time="2025-07-15T05:13:49.427409798Z" level=info msg="StopContainer for \"8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f\" returns successfully"
Jul 15 05:13:49.428366 containerd[1565]: time="2025-07-15T05:13:49.428072846Z" level=info msg="StopPodSandbox for \"1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df\""
Jul 15 05:13:49.428697 containerd[1565]: time="2025-07-15T05:13:49.428577363Z" level=info msg="Container to stop \"8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 05:13:49.439114 systemd[1]: cri-containerd-1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df.scope: Deactivated successfully.
Jul 15 05:13:49.443203 containerd[1565]: time="2025-07-15T05:13:49.442997820Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df\" id:\"1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df\" pid:3035 exit_status:137 exited_at:{seconds:1752556429 nanos:442724350}"
Jul 15 05:13:49.451915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e-rootfs.mount: Deactivated successfully.
Jul 15 05:13:49.461598 containerd[1565]: time="2025-07-15T05:13:49.461579029Z" level=info msg="StopContainer for \"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\" returns successfully"
Jul 15 05:13:49.462427 containerd[1565]: time="2025-07-15T05:13:49.462409126Z" level=info msg="StopPodSandbox for \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\""
Jul 15 05:13:49.462545 containerd[1565]: time="2025-07-15T05:13:49.462530085Z" level=info msg="Container to stop \"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 05:13:49.462927 containerd[1565]: time="2025-07-15T05:13:49.462762584Z" level=info msg="Container to stop \"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 05:13:49.462927 containerd[1565]: time="2025-07-15T05:13:49.462772514Z" level=info msg="Container to stop \"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 05:13:49.462927 containerd[1565]: time="2025-07-15T05:13:49.462779724Z" level=info msg="Container to stop \"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 05:13:49.462927 containerd[1565]: time="2025-07-15T05:13:49.462787364Z" level=info msg="Container to stop \"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 05:13:49.484834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df-rootfs.mount: Deactivated successfully.
Jul 15 05:13:49.485989 systemd[1]: cri-containerd-02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865.scope: Deactivated successfully.
Jul 15 05:13:49.493100 containerd[1565]: time="2025-07-15T05:13:49.493037740Z" level=info msg="shim disconnected" id=1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df namespace=k8s.io
Jul 15 05:13:49.493100 containerd[1565]: time="2025-07-15T05:13:49.493057970Z" level=warning msg="cleaning up after shim disconnected" id=1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df namespace=k8s.io
Jul 15 05:13:49.493393 containerd[1565]: time="2025-07-15T05:13:49.493066310Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 05:13:49.516577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865-rootfs.mount: Deactivated successfully.
Jul 15 05:13:49.519456 containerd[1565]: time="2025-07-15T05:13:49.519306791Z" level=info msg="shim disconnected" id=02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865 namespace=k8s.io
Jul 15 05:13:49.519456 containerd[1565]: time="2025-07-15T05:13:49.519344341Z" level=warning msg="cleaning up after shim disconnected" id=02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865 namespace=k8s.io
Jul 15 05:13:49.519456 containerd[1565]: time="2025-07-15T05:13:49.519352671Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 05:13:49.528780 containerd[1565]: time="2025-07-15T05:13:49.528700245Z" level=info msg="received exit event sandbox_id:\"1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df\" exit_status:137 exited_at:{seconds:1752556429 nanos:442724350}"
Jul 15 05:13:49.528833 containerd[1565]: time="2025-07-15T05:13:49.528804845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" id:\"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" pid:3091 exit_status:137 exited_at:{seconds:1752556429 nanos:485412849}"
Jul 15 05:13:49.529011 containerd[1565]: time="2025-07-15T05:13:49.528924655Z" level=info msg="received exit event sandbox_id:\"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" exit_status:137 exited_at:{seconds:1752556429 nanos:485412849}"
Jul 15 05:13:49.529660 containerd[1565]: time="2025-07-15T05:13:49.529352103Z" level=info msg="TearDown network for sandbox \"1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df\" successfully"
Jul 15 05:13:49.529734 containerd[1565]: time="2025-07-15T05:13:49.529720001Z" level=info msg="StopPodSandbox for \"1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df\" returns successfully"
Jul 15 05:13:49.532144 containerd[1565]: time="2025-07-15T05:13:49.531211676Z" level=info msg="TearDown network for sandbox \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" successfully"
Jul 15 05:13:49.532144 containerd[1565]: time="2025-07-15T05:13:49.531229566Z" level=info msg="StopPodSandbox for \"02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865\" returns successfully"
Jul 15 05:13:49.532408 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ad395467dfe405eb4c643f8e785241682b7e922cfb9a60a113476f86835c7df-shm.mount: Deactivated successfully.
Jul 15 05:13:49.624931 kubelet[2722]: I0715 05:13:49.624057 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-cni-path\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.624931 kubelet[2722]: I0715 05:13:49.624121 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-host-proc-sys-net\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.624931 kubelet[2722]: I0715 05:13:49.624147 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c758d1a-b4c4-401e-885b-20402cb59af2-clustermesh-secrets\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.624931 kubelet[2722]: I0715 05:13:49.624161 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-etc-cni-netd\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.624931 kubelet[2722]: I0715 05:13:49.624173 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-hostproc\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.624931 kubelet[2722]: I0715 05:13:49.624186 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-xtables-lock\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.625606 kubelet[2722]: I0715 05:13:49.624203 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c758d1a-b4c4-401e-885b-20402cb59af2-hubble-tls\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.625606 kubelet[2722]: I0715 05:13:49.624220 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2r9b\" (UniqueName: \"kubernetes.io/projected/2c758d1a-b4c4-401e-885b-20402cb59af2-kube-api-access-w2r9b\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.625606 kubelet[2722]: I0715 05:13:49.624234 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-cilium-run\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.625606 kubelet[2722]: I0715 05:13:49.624247 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-host-proc-sys-kernel\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.625606 kubelet[2722]: I0715 05:13:49.624260 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-bpf-maps\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.625606 kubelet[2722]: I0715 05:13:49.624272 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-cilium-cgroup\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.625741 kubelet[2722]: I0715 05:13:49.624290 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2332240e-130b-43bd-a034-c25d57510d95-cilium-config-path\") pod \"2332240e-130b-43bd-a034-c25d57510d95\" (UID: \"2332240e-130b-43bd-a034-c25d57510d95\") "
Jul 15 05:13:49.625741 kubelet[2722]: I0715 05:13:49.624303 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-lib-modules\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.625741 kubelet[2722]: I0715 05:13:49.624318 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c758d1a-b4c4-401e-885b-20402cb59af2-cilium-config-path\") pod \"2c758d1a-b4c4-401e-885b-20402cb59af2\" (UID: \"2c758d1a-b4c4-401e-885b-20402cb59af2\") "
Jul 15 05:13:49.625741 kubelet[2722]: I0715 05:13:49.624333 2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5khkh\" (UniqueName: \"kubernetes.io/projected/2332240e-130b-43bd-a034-c25d57510d95-kube-api-access-5khkh\") pod \"2332240e-130b-43bd-a034-c25d57510d95\" (UID: \"2332240e-130b-43bd-a034-c25d57510d95\") "
Jul 15 05:13:49.627435 kubelet[2722]: I0715 05:13:49.627282 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2332240e-130b-43bd-a034-c25d57510d95-kube-api-access-5khkh" (OuterVolumeSpecName: "kube-api-access-5khkh") pod "2332240e-130b-43bd-a034-c25d57510d95" (UID: "2332240e-130b-43bd-a034-c25d57510d95"). InnerVolumeSpecName "kube-api-access-5khkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 15 05:13:49.627435 kubelet[2722]: I0715 05:13:49.627352 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 05:13:49.627435 kubelet[2722]: I0715 05:13:49.627372 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 05:13:49.627435 kubelet[2722]: I0715 05:13:49.627387 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 05:13:49.627435 kubelet[2722]: I0715 05:13:49.627398 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 05:13:49.629144 kubelet[2722]: I0715 05:13:49.629071 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c758d1a-b4c4-401e-885b-20402cb59af2-kube-api-access-w2r9b" (OuterVolumeSpecName: "kube-api-access-w2r9b") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "kube-api-access-w2r9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 15 05:13:49.629245 kubelet[2722]: I0715 05:13:49.629229 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-cni-path" (OuterVolumeSpecName: "cni-path") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 05:13:49.629509 kubelet[2722]: I0715 05:13:49.629301 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 05:13:49.630298 kubelet[2722]: I0715 05:13:49.630265 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2332240e-130b-43bd-a034-c25d57510d95-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2332240e-130b-43bd-a034-c25d57510d95" (UID: "2332240e-130b-43bd-a034-c25d57510d95"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 15 05:13:49.630343 kubelet[2722]: I0715 05:13:49.630307 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 05:13:49.631739 kubelet[2722]: I0715 05:13:49.631719 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c758d1a-b4c4-401e-885b-20402cb59af2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 15 05:13:49.631823 kubelet[2722]: I0715 05:13:49.631809 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 05:13:49.631881 kubelet[2722]: I0715 05:13:49.631869 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-hostproc" (OuterVolumeSpecName: "hostproc") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 05:13:49.632167 kubelet[2722]: I0715 05:13:49.631934 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 05:13:49.633583 kubelet[2722]: I0715 05:13:49.633554 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c758d1a-b4c4-401e-885b-20402cb59af2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 15 05:13:49.634410 kubelet[2722]: I0715 05:13:49.634371 2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c758d1a-b4c4-401e-885b-20402cb59af2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2c758d1a-b4c4-401e-885b-20402cb59af2" (UID: "2c758d1a-b4c4-401e-885b-20402cb59af2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 15 05:13:49.724924 kubelet[2722]: I0715 05:13:49.724878 2722 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-hostproc\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.724924 kubelet[2722]: I0715 05:13:49.724902 2722 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-xtables-lock\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.724924 kubelet[2722]: I0715 05:13:49.724911 2722 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c758d1a-b4c4-401e-885b-20402cb59af2-hubble-tls\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.724924 kubelet[2722]: I0715 05:13:49.724919 2722 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c758d1a-b4c4-401e-885b-20402cb59af2-clustermesh-secrets\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.724924 kubelet[2722]: I0715 05:13:49.724929 2722 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-etc-cni-netd\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.724924 kubelet[2722]: I0715 05:13:49.724938 2722 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2r9b\" (UniqueName: \"kubernetes.io/projected/2c758d1a-b4c4-401e-885b-20402cb59af2-kube-api-access-w2r9b\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.724924 kubelet[2722]: I0715 05:13:49.724946 2722 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-bpf-maps\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.725219 kubelet[2722]: I0715 05:13:49.724954 2722 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-cilium-cgroup\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.725219 kubelet[2722]: I0715 05:13:49.724961 2722 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-cilium-run\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.725219 kubelet[2722]: I0715 05:13:49.724969 2722 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-host-proc-sys-kernel\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.725219 kubelet[2722]: I0715 05:13:49.724977 2722 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2332240e-130b-43bd-a034-c25d57510d95-cilium-config-path\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.725219 kubelet[2722]: I0715 05:13:49.724986 2722 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-lib-modules\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.725219 kubelet[2722]: I0715 05:13:49.724993 2722 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c758d1a-b4c4-401e-885b-20402cb59af2-cilium-config-path\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.725219 kubelet[2722]: I0715 05:13:49.725001 2722 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5khkh\" (UniqueName: \"kubernetes.io/projected/2332240e-130b-43bd-a034-c25d57510d95-kube-api-access-5khkh\") on node \"172-236-104-60\" DevicePath \"\""
Jul 15 05:13:49.725219 kubelet[2722]: I0715 05:13:49.725008 2722
reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-cni-path\") on node \"172-236-104-60\" DevicePath \"\"" Jul 15 05:13:49.725408 kubelet[2722]: I0715 05:13:49.725015 2722 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c758d1a-b4c4-401e-885b-20402cb59af2-host-proc-sys-net\") on node \"172-236-104-60\" DevicePath \"\"" Jul 15 05:13:50.118475 kubelet[2722]: E0715 05:13:50.118429 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:13:50.411864 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02f84331cc29599a06fb1b80711c452a85596fac64a19deb744e32e5d2f52865-shm.mount: Deactivated successfully. Jul 15 05:13:50.412321 systemd[1]: var-lib-kubelet-pods-2c758d1a\x2db4c4\x2d401e\x2d885b\x2d20402cb59af2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 05:13:50.412399 systemd[1]: var-lib-kubelet-pods-2c758d1a\x2db4c4\x2d401e\x2d885b\x2d20402cb59af2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 05:13:50.412477 systemd[1]: var-lib-kubelet-pods-2332240e\x2d130b\x2d43bd\x2da034\x2dc25d57510d95-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5khkh.mount: Deactivated successfully. Jul 15 05:13:50.412546 systemd[1]: var-lib-kubelet-pods-2c758d1a\x2db4c4\x2d401e\x2d885b\x2d20402cb59af2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw2r9b.mount: Deactivated successfully. 
Jul 15 05:13:50.452566 kubelet[2722]: I0715 05:13:50.450727 2722 scope.go:117] "RemoveContainer" containerID="8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f" Jul 15 05:13:50.453264 containerd[1565]: time="2025-07-15T05:13:50.453162629Z" level=info msg="RemoveContainer for \"8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f\"" Jul 15 05:13:50.457563 systemd[1]: Removed slice kubepods-besteffort-pod2332240e_130b_43bd_a034_c25d57510d95.slice - libcontainer container kubepods-besteffort-pod2332240e_130b_43bd_a034_c25d57510d95.slice. Jul 15 05:13:50.460824 containerd[1565]: time="2025-07-15T05:13:50.460748041Z" level=info msg="RemoveContainer for \"8611e8d4028fb7f72d18a54f78febf17605604aea7d1c4cb636e522e044b035f\" returns successfully" Jul 15 05:13:50.464894 kubelet[2722]: I0715 05:13:50.463888 2722 scope.go:117] "RemoveContainer" containerID="e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e" Jul 15 05:13:50.464327 systemd[1]: Removed slice kubepods-burstable-pod2c758d1a_b4c4_401e_885b_20402cb59af2.slice - libcontainer container kubepods-burstable-pod2c758d1a_b4c4_401e_885b_20402cb59af2.slice. Jul 15 05:13:50.464402 systemd[1]: kubepods-burstable-pod2c758d1a_b4c4_401e_885b_20402cb59af2.slice: Consumed 5.854s CPU time, 123.4M memory peak, 144K read from disk, 13.3M written to disk. 
Jul 15 05:13:50.465976 containerd[1565]: time="2025-07-15T05:13:50.465944821Z" level=info msg="RemoveContainer for \"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\"" Jul 15 05:13:50.471985 containerd[1565]: time="2025-07-15T05:13:50.471950059Z" level=info msg="RemoveContainer for \"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\" returns successfully" Jul 15 05:13:50.472107 kubelet[2722]: I0715 05:13:50.472057 2722 scope.go:117] "RemoveContainer" containerID="39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00" Jul 15 05:13:50.473897 containerd[1565]: time="2025-07-15T05:13:50.473876182Z" level=info msg="RemoveContainer for \"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\"" Jul 15 05:13:50.477933 containerd[1565]: time="2025-07-15T05:13:50.477905117Z" level=info msg="RemoveContainer for \"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\" returns successfully" Jul 15 05:13:50.478178 kubelet[2722]: I0715 05:13:50.478149 2722 scope.go:117] "RemoveContainer" containerID="8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca" Jul 15 05:13:50.481967 containerd[1565]: time="2025-07-15T05:13:50.481916692Z" level=info msg="RemoveContainer for \"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\"" Jul 15 05:13:50.492385 containerd[1565]: time="2025-07-15T05:13:50.492360033Z" level=info msg="RemoveContainer for \"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\" returns successfully" Jul 15 05:13:50.492578 kubelet[2722]: I0715 05:13:50.492544 2722 scope.go:117] "RemoveContainer" containerID="e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13" Jul 15 05:13:50.493839 containerd[1565]: time="2025-07-15T05:13:50.493750248Z" level=info msg="RemoveContainer for \"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\"" Jul 15 05:13:50.496591 containerd[1565]: time="2025-07-15T05:13:50.496547658Z" level=info msg="RemoveContainer 
for \"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\" returns successfully" Jul 15 05:13:50.496996 kubelet[2722]: I0715 05:13:50.496659 2722 scope.go:117] "RemoveContainer" containerID="d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6" Jul 15 05:13:50.497794 containerd[1565]: time="2025-07-15T05:13:50.497765173Z" level=info msg="RemoveContainer for \"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\"" Jul 15 05:13:50.500855 containerd[1565]: time="2025-07-15T05:13:50.500817671Z" level=info msg="RemoveContainer for \"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\" returns successfully" Jul 15 05:13:50.501116 kubelet[2722]: I0715 05:13:50.500973 2722 scope.go:117] "RemoveContainer" containerID="e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e" Jul 15 05:13:50.501305 containerd[1565]: time="2025-07-15T05:13:50.501272280Z" level=error msg="ContainerStatus for \"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\": not found" Jul 15 05:13:50.501578 kubelet[2722]: E0715 05:13:50.501405 2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\": not found" containerID="e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e" Jul 15 05:13:50.501578 kubelet[2722]: I0715 05:13:50.501426 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e"} err="failed to get container status \"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"e370c6cf19bf80145b3184a0779da81d9a09c11e2fa4613e6af67b83d6140b5e\": not found" Jul 15 05:13:50.501578 kubelet[2722]: I0715 05:13:50.501487 2722 scope.go:117] "RemoveContainer" containerID="39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00" Jul 15 05:13:50.501816 containerd[1565]: time="2025-07-15T05:13:50.501751708Z" level=error msg="ContainerStatus for \"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\": not found" Jul 15 05:13:50.501905 kubelet[2722]: E0715 05:13:50.501878 2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\": not found" containerID="39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00" Jul 15 05:13:50.501959 kubelet[2722]: I0715 05:13:50.501906 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00"} err="failed to get container status \"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\": rpc error: code = NotFound desc = an error occurred when try to find container \"39f7c9359f44d11691414898044ca0eaedde949e301cd5056328e32dac9b2e00\": not found" Jul 15 05:13:50.501959 kubelet[2722]: I0715 05:13:50.501927 2722 scope.go:117] "RemoveContainer" containerID="8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca" Jul 15 05:13:50.502136 containerd[1565]: time="2025-07-15T05:13:50.502107577Z" level=error msg="ContainerStatus for \"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\": not found" Jul 15 05:13:50.502259 kubelet[2722]: E0715 05:13:50.502237 2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\": not found" containerID="8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca" Jul 15 05:13:50.502348 kubelet[2722]: I0715 05:13:50.502326 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca"} err="failed to get container status \"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d2155b1fb726c0313f93673a53ac70c623af2ad7d6f87aa13d70d629df766ca\": not found" Jul 15 05:13:50.502348 kubelet[2722]: I0715 05:13:50.502345 2722 scope.go:117] "RemoveContainer" containerID="e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13" Jul 15 05:13:50.502547 containerd[1565]: time="2025-07-15T05:13:50.502487126Z" level=error msg="ContainerStatus for \"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\": not found" Jul 15 05:13:50.502649 kubelet[2722]: E0715 05:13:50.502626 2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\": not found" containerID="e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13" Jul 15 05:13:50.502713 kubelet[2722]: I0715 05:13:50.502648 2722 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13"} err="failed to get container status \"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\": rpc error: code = NotFound desc = an error occurred when try to find container \"e560b9107fa53814862cf95c2042d7fd026472df79350bbb9f49486ea74cda13\": not found" Jul 15 05:13:50.502713 kubelet[2722]: I0715 05:13:50.502660 2722 scope.go:117] "RemoveContainer" containerID="d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6" Jul 15 05:13:50.502856 containerd[1565]: time="2025-07-15T05:13:50.502785334Z" level=error msg="ContainerStatus for \"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\": not found" Jul 15 05:13:50.502972 kubelet[2722]: E0715 05:13:50.502954 2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\": not found" containerID="d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6" Jul 15 05:13:50.503065 kubelet[2722]: I0715 05:13:50.503037 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6"} err="failed to get container status \"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"d54d55a93531a290f82229e1fe804d7d1a2ff62a675be7e36f24251fe41c76e6\": not found" Jul 15 05:13:51.120560 kubelet[2722]: I0715 05:13:51.120511 2722 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2332240e-130b-43bd-a034-c25d57510d95" 
path="/var/lib/kubelet/pods/2332240e-130b-43bd-a034-c25d57510d95/volumes" Jul 15 05:13:51.121044 kubelet[2722]: I0715 05:13:51.121012 2722 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c758d1a-b4c4-401e-885b-20402cb59af2" path="/var/lib/kubelet/pods/2c758d1a-b4c4-401e-885b-20402cb59af2/volumes" Jul 15 05:13:51.355307 sshd[4269]: Connection closed by 139.178.68.195 port 39398 Jul 15 05:13:51.355875 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Jul 15 05:13:51.361528 systemd[1]: sshd@21-172.236.104.60:22-139.178.68.195:39398.service: Deactivated successfully. Jul 15 05:13:51.363670 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 05:13:51.364524 systemd-logind[1540]: Session 22 logged out. Waiting for processes to exit. Jul 15 05:13:51.366773 systemd-logind[1540]: Removed session 22. Jul 15 05:13:51.418353 systemd[1]: Started sshd@22-172.236.104.60:22-139.178.68.195:59712.service - OpenSSH per-connection server daemon (139.178.68.195:59712). Jul 15 05:13:51.770034 sshd[4417]: Accepted publickey for core from 139.178.68.195 port 59712 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:13:51.771515 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:13:51.776349 systemd-logind[1540]: New session 23 of user core. Jul 15 05:13:51.782266 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 15 05:13:52.117931 kubelet[2722]: E0715 05:13:52.117885 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:13:52.599773 kubelet[2722]: E0715 05:13:52.599712 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c758d1a-b4c4-401e-885b-20402cb59af2" containerName="mount-cgroup" Jul 15 05:13:52.600315 kubelet[2722]: E0715 05:13:52.599976 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c758d1a-b4c4-401e-885b-20402cb59af2" containerName="apply-sysctl-overwrites" Jul 15 05:13:52.600315 kubelet[2722]: E0715 05:13:52.599987 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c758d1a-b4c4-401e-885b-20402cb59af2" containerName="mount-bpf-fs" Jul 15 05:13:52.600315 kubelet[2722]: E0715 05:13:52.599993 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c758d1a-b4c4-401e-885b-20402cb59af2" containerName="cilium-agent" Jul 15 05:13:52.600315 kubelet[2722]: E0715 05:13:52.599999 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2332240e-130b-43bd-a034-c25d57510d95" containerName="cilium-operator" Jul 15 05:13:52.600315 kubelet[2722]: E0715 05:13:52.600026 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c758d1a-b4c4-401e-885b-20402cb59af2" containerName="clean-cilium-state" Jul 15 05:13:52.600315 kubelet[2722]: I0715 05:13:52.600050 2722 memory_manager.go:354] "RemoveStaleState removing state" podUID="2332240e-130b-43bd-a034-c25d57510d95" containerName="cilium-operator" Jul 15 05:13:52.600315 kubelet[2722]: I0715 05:13:52.600056 2722 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c758d1a-b4c4-401e-885b-20402cb59af2" containerName="cilium-agent" Jul 15 05:13:52.614248 systemd[1]: Created slice kubepods-burstable-pod68f813c2_fe5d_40e6_80ef_e74e5f99d0ac.slice - libcontainer 
container kubepods-burstable-pod68f813c2_fe5d_40e6_80ef_e74e5f99d0ac.slice. Jul 15 05:13:52.638112 sshd[4420]: Connection closed by 139.178.68.195 port 59712 Jul 15 05:13:52.638247 sshd-session[4417]: pam_unix(sshd:session): session closed for user core Jul 15 05:13:52.640053 kubelet[2722]: I0715 05:13:52.640008 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-xtables-lock\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.640053 kubelet[2722]: I0715 05:13:52.640047 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-host-proc-sys-kernel\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.641182 kubelet[2722]: I0715 05:13:52.640064 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g78vc\" (UniqueName: \"kubernetes.io/projected/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-kube-api-access-g78vc\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.641182 kubelet[2722]: I0715 05:13:52.640748 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-cilium-run\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.641182 kubelet[2722]: I0715 05:13:52.640855 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-cilium-cgroup\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.641182 kubelet[2722]: I0715 05:13:52.640872 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-etc-cni-netd\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.641182 kubelet[2722]: I0715 05:13:52.640940 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-hostproc\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.641182 kubelet[2722]: I0715 05:13:52.640965 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-cni-path\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.641367 kubelet[2722]: I0715 05:13:52.640980 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-cilium-ipsec-secrets\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.641367 kubelet[2722]: I0715 05:13:52.640992 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-lib-modules\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") 
" pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.641367 kubelet[2722]: I0715 05:13:52.641006 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-clustermesh-secrets\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.641367 kubelet[2722]: I0715 05:13:52.641019 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-cilium-config-path\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.641367 kubelet[2722]: I0715 05:13:52.641033 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-host-proc-sys-net\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.641468 kubelet[2722]: I0715 05:13:52.641046 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-hubble-tls\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.641468 kubelet[2722]: I0715 05:13:52.641059 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68f813c2-fe5d-40e6-80ef-e74e5f99d0ac-bpf-maps\") pod \"cilium-hs6z7\" (UID: \"68f813c2-fe5d-40e6-80ef-e74e5f99d0ac\") " pod="kube-system/cilium-hs6z7" Jul 15 05:13:52.643480 systemd-logind[1540]: Session 23 logged out. 
Waiting for processes to exit. Jul 15 05:13:52.645564 systemd[1]: sshd@22-172.236.104.60:22-139.178.68.195:59712.service: Deactivated successfully. Jul 15 05:13:52.647772 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 05:13:52.650342 systemd-logind[1540]: Removed session 23. Jul 15 05:13:52.703739 systemd[1]: Started sshd@23-172.236.104.60:22-139.178.68.195:59724.service - OpenSSH per-connection server daemon (139.178.68.195:59724). Jul 15 05:13:52.918176 kubelet[2722]: E0715 05:13:52.917376 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:13:52.918548 containerd[1565]: time="2025-07-15T05:13:52.918504825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hs6z7,Uid:68f813c2-fe5d-40e6-80ef-e74e5f99d0ac,Namespace:kube-system,Attempt:0,}" Jul 15 05:13:52.939107 containerd[1565]: time="2025-07-15T05:13:52.939054881Z" level=info msg="connecting to shim 38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832" address="unix:///run/containerd/s/8929875990a5b1ff9b7ef64be7dbccb496618e18d7b20db4676ccbfa5a08aa2c" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:13:52.965212 systemd[1]: Started cri-containerd-38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832.scope - libcontainer container 38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832. 
Jul 15 05:13:52.993923 containerd[1565]: time="2025-07-15T05:13:52.993894534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hs6z7,Uid:68f813c2-fe5d-40e6-80ef-e74e5f99d0ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832\"" Jul 15 05:13:52.995338 kubelet[2722]: E0715 05:13:52.995304 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jul 15 05:13:52.998411 containerd[1565]: time="2025-07-15T05:13:52.998352457Z" level=info msg="CreateContainer within sandbox \"38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 05:13:53.007150 containerd[1565]: time="2025-07-15T05:13:53.006320499Z" level=info msg="Container 296c1c65f74f507caef454d006f6ea0a2011c92b72f067b6cfef75624bd4cb66: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:13:53.011525 containerd[1565]: time="2025-07-15T05:13:53.011498730Z" level=info msg="CreateContainer within sandbox \"38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"296c1c65f74f507caef454d006f6ea0a2011c92b72f067b6cfef75624bd4cb66\"" Jul 15 05:13:53.012327 containerd[1565]: time="2025-07-15T05:13:53.012053148Z" level=info msg="StartContainer for \"296c1c65f74f507caef454d006f6ea0a2011c92b72f067b6cfef75624bd4cb66\"" Jul 15 05:13:53.012948 containerd[1565]: time="2025-07-15T05:13:53.012927745Z" level=info msg="connecting to shim 296c1c65f74f507caef454d006f6ea0a2011c92b72f067b6cfef75624bd4cb66" address="unix:///run/containerd/s/8929875990a5b1ff9b7ef64be7dbccb496618e18d7b20db4676ccbfa5a08aa2c" protocol=ttrpc version=3 Jul 15 05:13:53.033194 systemd[1]: Started cri-containerd-296c1c65f74f507caef454d006f6ea0a2011c92b72f067b6cfef75624bd4cb66.scope - 
libcontainer container 296c1c65f74f507caef454d006f6ea0a2011c92b72f067b6cfef75624bd4cb66. Jul 15 05:13:53.057435 sshd[4431]: Accepted publickey for core from 139.178.68.195 port 59724 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4 Jul 15 05:13:53.059503 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:13:53.066776 containerd[1565]: time="2025-07-15T05:13:53.066700394Z" level=info msg="StartContainer for \"296c1c65f74f507caef454d006f6ea0a2011c92b72f067b6cfef75624bd4cb66\" returns successfully" Jul 15 05:13:53.067822 systemd-logind[1540]: New session 24 of user core. Jul 15 05:13:53.074225 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 15 05:13:53.076159 systemd[1]: cri-containerd-296c1c65f74f507caef454d006f6ea0a2011c92b72f067b6cfef75624bd4cb66.scope: Deactivated successfully. Jul 15 05:13:53.081535 containerd[1565]: time="2025-07-15T05:13:53.081500891Z" level=info msg="received exit event container_id:\"296c1c65f74f507caef454d006f6ea0a2011c92b72f067b6cfef75624bd4cb66\" id:\"296c1c65f74f507caef454d006f6ea0a2011c92b72f067b6cfef75624bd4cb66\" pid:4498 exited_at:{seconds:1752556433 nanos:80802874}" Jul 15 05:13:53.082136 containerd[1565]: time="2025-07-15T05:13:53.081802941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"296c1c65f74f507caef454d006f6ea0a2011c92b72f067b6cfef75624bd4cb66\" id:\"296c1c65f74f507caef454d006f6ea0a2011c92b72f067b6cfef75624bd4cb66\" pid:4498 exited_at:{seconds:1752556433 nanos:80802874}" Jul 15 05:13:53.307044 sshd[4517]: Connection closed by 139.178.68.195 port 59724 Jul 15 05:13:53.307475 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Jul 15 05:13:53.312854 systemd-logind[1540]: Session 24 logged out. Waiting for processes to exit. Jul 15 05:13:53.313474 systemd[1]: sshd@23-172.236.104.60:22-139.178.68.195:59724.service: Deactivated successfully. 
Jul 15 05:13:53.315739 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 05:13:53.317735 systemd-logind[1540]: Removed session 24.
Jul 15 05:13:53.374572 systemd[1]: Started sshd@24-172.236.104.60:22-139.178.68.195:59740.service - OpenSSH per-connection server daemon (139.178.68.195:59740).
Jul 15 05:13:53.468026 kubelet[2722]: E0715 05:13:53.467436 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:13:53.471300 containerd[1565]: time="2025-07-15T05:13:53.471254845Z" level=info msg="CreateContainer within sandbox \"38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 15 05:13:53.478416 containerd[1565]: time="2025-07-15T05:13:53.478382829Z" level=info msg="Container 54fb291379dc35b6549bb078a3b26e779a0ef9361e16adf07bd9487f911ed89f: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:13:53.482567 containerd[1565]: time="2025-07-15T05:13:53.482524955Z" level=info msg="CreateContainer within sandbox \"38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"54fb291379dc35b6549bb078a3b26e779a0ef9361e16adf07bd9487f911ed89f\""
Jul 15 05:13:53.482994 containerd[1565]: time="2025-07-15T05:13:53.482953304Z" level=info msg="StartContainer for \"54fb291379dc35b6549bb078a3b26e779a0ef9361e16adf07bd9487f911ed89f\""
Jul 15 05:13:53.484218 containerd[1565]: time="2025-07-15T05:13:53.484133589Z" level=info msg="connecting to shim 54fb291379dc35b6549bb078a3b26e779a0ef9361e16adf07bd9487f911ed89f" address="unix:///run/containerd/s/8929875990a5b1ff9b7ef64be7dbccb496618e18d7b20db4676ccbfa5a08aa2c" protocol=ttrpc version=3
Jul 15 05:13:53.507608 systemd[1]: Started cri-containerd-54fb291379dc35b6549bb078a3b26e779a0ef9361e16adf07bd9487f911ed89f.scope - libcontainer container 54fb291379dc35b6549bb078a3b26e779a0ef9361e16adf07bd9487f911ed89f.
Jul 15 05:13:53.542143 systemd[1]: cri-containerd-54fb291379dc35b6549bb078a3b26e779a0ef9361e16adf07bd9487f911ed89f.scope: Deactivated successfully.
Jul 15 05:13:53.543396 containerd[1565]: time="2025-07-15T05:13:53.543331028Z" level=info msg="received exit event container_id:\"54fb291379dc35b6549bb078a3b26e779a0ef9361e16adf07bd9487f911ed89f\" id:\"54fb291379dc35b6549bb078a3b26e779a0ef9361e16adf07bd9487f911ed89f\" pid:4555 exited_at:{seconds:1752556433 nanos:542883521}"
Jul 15 05:13:53.543692 containerd[1565]: time="2025-07-15T05:13:53.543666967Z" level=info msg="TaskExit event in podsandbox handler container_id:\"54fb291379dc35b6549bb078a3b26e779a0ef9361e16adf07bd9487f911ed89f\" id:\"54fb291379dc35b6549bb078a3b26e779a0ef9361e16adf07bd9487f911ed89f\" pid:4555 exited_at:{seconds:1752556433 nanos:542883521}"
Jul 15 05:13:53.543800 containerd[1565]: time="2025-07-15T05:13:53.543785347Z" level=info msg="StartContainer for \"54fb291379dc35b6549bb078a3b26e779a0ef9361e16adf07bd9487f911ed89f\" returns successfully"
Jul 15 05:13:53.729068 sshd[4538]: Accepted publickey for core from 139.178.68.195 port 59740 ssh2: RSA SHA256:KZphXg08OlSAhlzOUZpbcA3GQTI5I2T29BxrRBRxxP4
Jul 15 05:13:53.730979 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:13:53.736931 systemd-logind[1540]: New session 25 of user core.
Jul 15 05:13:53.739199 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 05:13:54.199860 kubelet[2722]: E0715 05:13:54.199792 2722 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 15 05:13:54.471930 kubelet[2722]: E0715 05:13:54.471770 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:13:54.475221 containerd[1565]: time="2025-07-15T05:13:54.475180278Z" level=info msg="CreateContainer within sandbox \"38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 15 05:13:54.488534 containerd[1565]: time="2025-07-15T05:13:54.488495171Z" level=info msg="Container e7e70c6ddffdf86c0a7b409611d6176a4992e0285b3c18610f804e9b0e69e714: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:13:54.497961 containerd[1565]: time="2025-07-15T05:13:54.497926137Z" level=info msg="CreateContainer within sandbox \"38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e7e70c6ddffdf86c0a7b409611d6176a4992e0285b3c18610f804e9b0e69e714\""
Jul 15 05:13:54.498778 containerd[1565]: time="2025-07-15T05:13:54.498526176Z" level=info msg="StartContainer for \"e7e70c6ddffdf86c0a7b409611d6176a4992e0285b3c18610f804e9b0e69e714\""
Jul 15 05:13:54.500174 containerd[1565]: time="2025-07-15T05:13:54.500129670Z" level=info msg="connecting to shim e7e70c6ddffdf86c0a7b409611d6176a4992e0285b3c18610f804e9b0e69e714" address="unix:///run/containerd/s/8929875990a5b1ff9b7ef64be7dbccb496618e18d7b20db4676ccbfa5a08aa2c" protocol=ttrpc version=3
Jul 15 05:13:54.524219 systemd[1]: Started cri-containerd-e7e70c6ddffdf86c0a7b409611d6176a4992e0285b3c18610f804e9b0e69e714.scope - libcontainer container e7e70c6ddffdf86c0a7b409611d6176a4992e0285b3c18610f804e9b0e69e714.
Jul 15 05:13:54.568354 containerd[1565]: time="2025-07-15T05:13:54.568312151Z" level=info msg="StartContainer for \"e7e70c6ddffdf86c0a7b409611d6176a4992e0285b3c18610f804e9b0e69e714\" returns successfully"
Jul 15 05:13:54.569908 systemd[1]: cri-containerd-e7e70c6ddffdf86c0a7b409611d6176a4992e0285b3c18610f804e9b0e69e714.scope: Deactivated successfully.
Jul 15 05:13:54.571240 containerd[1565]: time="2025-07-15T05:13:54.571136701Z" level=info msg="received exit event container_id:\"e7e70c6ddffdf86c0a7b409611d6176a4992e0285b3c18610f804e9b0e69e714\" id:\"e7e70c6ddffdf86c0a7b409611d6176a4992e0285b3c18610f804e9b0e69e714\" pid:4605 exited_at:{seconds:1752556434 nanos:570682512}"
Jul 15 05:13:54.571425 containerd[1565]: time="2025-07-15T05:13:54.571187631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7e70c6ddffdf86c0a7b409611d6176a4992e0285b3c18610f804e9b0e69e714\" id:\"e7e70c6ddffdf86c0a7b409611d6176a4992e0285b3c18610f804e9b0e69e714\" pid:4605 exited_at:{seconds:1752556434 nanos:570682512}"
Jul 15 05:13:54.593992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7e70c6ddffdf86c0a7b409611d6176a4992e0285b3c18610f804e9b0e69e714-rootfs.mount: Deactivated successfully.
Jul 15 05:13:55.476444 kubelet[2722]: E0715 05:13:55.476377 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:13:55.480487 containerd[1565]: time="2025-07-15T05:13:55.479796099Z" level=info msg="CreateContainer within sandbox \"38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 05:13:55.496542 containerd[1565]: time="2025-07-15T05:13:55.493663741Z" level=info msg="Container 3ed7c2dfabf5d1b691bc447f94602dd3aab8f078e7b8c9274918464c0f3a60ad: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:13:55.502555 containerd[1565]: time="2025-07-15T05:13:55.502523321Z" level=info msg="CreateContainer within sandbox \"38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3ed7c2dfabf5d1b691bc447f94602dd3aab8f078e7b8c9274918464c0f3a60ad\""
Jul 15 05:13:55.503154 containerd[1565]: time="2025-07-15T05:13:55.503116448Z" level=info msg="StartContainer for \"3ed7c2dfabf5d1b691bc447f94602dd3aab8f078e7b8c9274918464c0f3a60ad\""
Jul 15 05:13:55.504201 containerd[1565]: time="2025-07-15T05:13:55.504175874Z" level=info msg="connecting to shim 3ed7c2dfabf5d1b691bc447f94602dd3aab8f078e7b8c9274918464c0f3a60ad" address="unix:///run/containerd/s/8929875990a5b1ff9b7ef64be7dbccb496618e18d7b20db4676ccbfa5a08aa2c" protocol=ttrpc version=3
Jul 15 05:13:55.526212 systemd[1]: Started cri-containerd-3ed7c2dfabf5d1b691bc447f94602dd3aab8f078e7b8c9274918464c0f3a60ad.scope - libcontainer container 3ed7c2dfabf5d1b691bc447f94602dd3aab8f078e7b8c9274918464c0f3a60ad.
Jul 15 05:13:55.553983 systemd[1]: cri-containerd-3ed7c2dfabf5d1b691bc447f94602dd3aab8f078e7b8c9274918464c0f3a60ad.scope: Deactivated successfully.
Jul 15 05:13:55.555454 containerd[1565]: time="2025-07-15T05:13:55.555419418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ed7c2dfabf5d1b691bc447f94602dd3aab8f078e7b8c9274918464c0f3a60ad\" id:\"3ed7c2dfabf5d1b691bc447f94602dd3aab8f078e7b8c9274918464c0f3a60ad\" pid:4648 exited_at:{seconds:1752556435 nanos:555236659}"
Jul 15 05:13:55.555607 containerd[1565]: time="2025-07-15T05:13:55.555567717Z" level=info msg="received exit event container_id:\"3ed7c2dfabf5d1b691bc447f94602dd3aab8f078e7b8c9274918464c0f3a60ad\" id:\"3ed7c2dfabf5d1b691bc447f94602dd3aab8f078e7b8c9274918464c0f3a60ad\" pid:4648 exited_at:{seconds:1752556435 nanos:555236659}"
Jul 15 05:13:55.563104 containerd[1565]: time="2025-07-15T05:13:55.563056981Z" level=info msg="StartContainer for \"3ed7c2dfabf5d1b691bc447f94602dd3aab8f078e7b8c9274918464c0f3a60ad\" returns successfully"
Jul 15 05:13:55.582783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ed7c2dfabf5d1b691bc447f94602dd3aab8f078e7b8c9274918464c0f3a60ad-rootfs.mount: Deactivated successfully.
Jul 15 05:13:56.485500 kubelet[2722]: E0715 05:13:56.485221 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:13:56.488405 containerd[1565]: time="2025-07-15T05:13:56.488332926Z" level=info msg="CreateContainer within sandbox \"38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 05:13:56.501006 containerd[1565]: time="2025-07-15T05:13:56.500969303Z" level=info msg="Container 7735ebcaa4d7f34f0e86ae436f900a1a82e53efcb1daf180e8a9a872e3e624f9: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:13:56.503935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938630553.mount: Deactivated successfully.
Jul 15 05:13:56.513016 containerd[1565]: time="2025-07-15T05:13:56.512966752Z" level=info msg="CreateContainer within sandbox \"38f63e2e23619c90d4bfe252dff008d154193cf76bb8be5db67e8e4cd8eec832\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7735ebcaa4d7f34f0e86ae436f900a1a82e53efcb1daf180e8a9a872e3e624f9\""
Jul 15 05:13:56.514144 containerd[1565]: time="2025-07-15T05:13:56.514108158Z" level=info msg="StartContainer for \"7735ebcaa4d7f34f0e86ae436f900a1a82e53efcb1daf180e8a9a872e3e624f9\""
Jul 15 05:13:56.515328 containerd[1565]: time="2025-07-15T05:13:56.515293234Z" level=info msg="connecting to shim 7735ebcaa4d7f34f0e86ae436f900a1a82e53efcb1daf180e8a9a872e3e624f9" address="unix:///run/containerd/s/8929875990a5b1ff9b7ef64be7dbccb496618e18d7b20db4676ccbfa5a08aa2c" protocol=ttrpc version=3
Jul 15 05:13:56.533200 systemd[1]: Started cri-containerd-7735ebcaa4d7f34f0e86ae436f900a1a82e53efcb1daf180e8a9a872e3e624f9.scope - libcontainer container 7735ebcaa4d7f34f0e86ae436f900a1a82e53efcb1daf180e8a9a872e3e624f9.
Jul 15 05:13:56.570654 containerd[1565]: time="2025-07-15T05:13:56.570614836Z" level=info msg="StartContainer for \"7735ebcaa4d7f34f0e86ae436f900a1a82e53efcb1daf180e8a9a872e3e624f9\" returns successfully"
Jul 15 05:13:56.644011 containerd[1565]: time="2025-07-15T05:13:56.643969316Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7735ebcaa4d7f34f0e86ae436f900a1a82e53efcb1daf180e8a9a872e3e624f9\" id:\"8c65a329d56e0b0b9e33dfee8967659e991652849d6e6486b4b4f93082b2b92b\" pid:4718 exited_at:{seconds:1752556436 nanos:643554887}"
Jul 15 05:13:57.030141 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 15 05:13:57.489755 kubelet[2722]: E0715 05:13:57.489631 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:13:57.502122 kubelet[2722]: I0715 05:13:57.501279 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hs6z7" podStartSLOduration=5.501266206 podStartE2EDuration="5.501266206s" podCreationTimestamp="2025-07-15 05:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:13:57.501155267 +0000 UTC m=+158.452598981" watchObservedRunningTime="2025-07-15 05:13:57.501266206 +0000 UTC m=+158.452709920"
Jul 15 05:13:58.103473 containerd[1565]: time="2025-07-15T05:13:58.103383575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7735ebcaa4d7f34f0e86ae436f900a1a82e53efcb1daf180e8a9a872e3e624f9\" id:\"47f95c92c362619409bf05af75938a4e8ed53afe013407753ef6c58803e2c594\" pid:4793 exit_status:1 exited_at:{seconds:1752556438 nanos:103115466}"
Jul 15 05:13:58.918312 kubelet[2722]: E0715 05:13:58.918182 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:13:59.695584 systemd-networkd[1443]: lxc_health: Link UP
Jul 15 05:13:59.702836 systemd-networkd[1443]: lxc_health: Gained carrier
Jul 15 05:14:00.244320 containerd[1565]: time="2025-07-15T05:14:00.244280127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7735ebcaa4d7f34f0e86ae436f900a1a82e53efcb1daf180e8a9a872e3e624f9\" id:\"f0d679e84bc3714242644c5ba38329b1d93b92a6a9769a703ad6740f779d1336\" pid:5225 exited_at:{seconds:1752556440 nanos:243638108}"
Jul 15 05:14:00.919677 kubelet[2722]: E0715 05:14:00.919632 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:14:01.499683 kubelet[2722]: E0715 05:14:01.499525 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:14:01.766168 systemd-networkd[1443]: lxc_health: Gained IPv6LL
Jul 15 05:14:02.403652 containerd[1565]: time="2025-07-15T05:14:02.403601799Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7735ebcaa4d7f34f0e86ae436f900a1a82e53efcb1daf180e8a9a872e3e624f9\" id:\"759d5a4b180c82613ccd89fde4291d74ee79537e6de162c63eb1e20b4062430b\" pid:5256 exited_at:{seconds:1752556442 nanos:403163550}"
Jul 15 05:14:02.501143 kubelet[2722]: E0715 05:14:02.500958 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jul 15 05:14:04.496767 containerd[1565]: time="2025-07-15T05:14:04.496697536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7735ebcaa4d7f34f0e86ae436f900a1a82e53efcb1daf180e8a9a872e3e624f9\" id:\"5322748b1c72eacc1c38c5d78b7afbe4fd1dcec46ec11a940c20e2eefec96db2\" pid:5299 exited_at:{seconds:1752556444 nanos:496381727}"
Jul 15 05:14:06.596791 containerd[1565]: time="2025-07-15T05:14:06.596734781Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7735ebcaa4d7f34f0e86ae436f900a1a82e53efcb1daf180e8a9a872e3e624f9\" id:\"93c37540e4431588891143ca089cb20664d4600477788a9aa1668ff017679d2b\" pid:5321 exited_at:{seconds:1752556446 nanos:595984083}"
Jul 15 05:14:06.654163 sshd[4586]: Connection closed by 139.178.68.195 port 59740
Jul 15 05:14:06.654801 sshd-session[4538]: pam_unix(sshd:session): session closed for user core
Jul 15 05:14:06.660253 systemd[1]: sshd@24-172.236.104.60:22-139.178.68.195:59740.service: Deactivated successfully.
Jul 15 05:14:06.663044 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 05:14:06.664737 systemd-logind[1540]: Session 25 logged out. Waiting for processes to exit.
Jul 15 05:14:06.666296 systemd-logind[1540]: Removed session 25.
Jul 15 05:14:10.118259 kubelet[2722]: E0715 05:14:10.118210 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"