Jun 21 05:44:37.853781 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 23:59:04 -00 2025
Jun 21 05:44:37.853802 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1
Jun 21 05:44:37.853810 kernel: BIOS-provided physical RAM map:
Jun 21 05:44:37.853819 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jun 21 05:44:37.853825 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jun 21 05:44:37.853830 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 21 05:44:37.853836 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jun 21 05:44:37.853842 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jun 21 05:44:37.853848 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jun 21 05:44:37.853853 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jun 21 05:44:37.853859 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 21 05:44:37.853865 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 21 05:44:37.853872 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jun 21 05:44:37.853878 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 21 05:44:37.853885 kernel: NX (Execute Disable) protection: active
Jun 21 05:44:37.853891 kernel: APIC: Static calls initialized
Jun 21 05:44:37.853897 kernel: SMBIOS 2.8 present.
Jun 21 05:44:37.853905 kernel: DMI: Linode Compute Instance, BIOS Not Specified
Jun 21 05:44:37.853910 kernel: DMI: Memory slots populated: 1/1
Jun 21 05:44:37.853916 kernel: Hypervisor detected: KVM
Jun 21 05:44:37.853922 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 21 05:44:37.853928 kernel: kvm-clock: using sched offset of 5575836305 cycles
Jun 21 05:44:37.853934 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 21 05:44:37.853941 kernel: tsc: Detected 2000.000 MHz processor
Jun 21 05:44:37.853947 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 21 05:44:37.853954 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 21 05:44:37.853960 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jun 21 05:44:37.853968 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 21 05:44:37.853974 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 21 05:44:37.853980 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jun 21 05:44:37.853986 kernel: Using GB pages for direct mapping
Jun 21 05:44:37.853992 kernel: ACPI: Early table checksum verification disabled
Jun 21 05:44:37.853999 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
Jun 21 05:44:37.854005 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 05:44:37.854011 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 05:44:37.854017 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 05:44:37.854025 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jun 21 05:44:37.854031 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 05:44:37.854037 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 05:44:37.854044 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 05:44:37.854053 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 05:44:37.854059 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jun 21 05:44:37.854067 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jun 21 05:44:37.854074 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jun 21 05:44:37.854080 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jun 21 05:44:37.854086 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jun 21 05:44:37.854093 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jun 21 05:44:37.854099 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jun 21 05:44:37.854105 kernel: No NUMA configuration found
Jun 21 05:44:37.854112 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jun 21 05:44:37.854120 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Jun 21 05:44:37.854126 kernel: Zone ranges:
Jun 21 05:44:37.854133 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 21 05:44:37.854139 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jun 21 05:44:37.854145 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jun 21 05:44:37.854152 kernel: Device empty
Jun 21 05:44:37.854158 kernel: Movable zone start for each node
Jun 21 05:44:37.854164 kernel: Early memory node ranges
Jun 21 05:44:37.854171 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 21 05:44:37.854177 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jun 21 05:44:37.854185 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jun 21 05:44:37.854191 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jun 21 05:44:37.854198 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 21 05:44:37.854204 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 21 05:44:37.854210 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jun 21 05:44:37.854217 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 21 05:44:37.854223 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 21 05:44:37.854229 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 21 05:44:37.854236 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 21 05:44:37.854244 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 21 05:44:37.854250 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 21 05:44:37.854256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 21 05:44:37.854263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 21 05:44:37.854269 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 21 05:44:37.854275 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 21 05:44:37.854282 kernel: TSC deadline timer available
Jun 21 05:44:37.854288 kernel: CPU topo: Max. logical packages: 1
Jun 21 05:44:37.854294 kernel: CPU topo: Max. logical dies: 1
Jun 21 05:44:37.854302 kernel: CPU topo: Max. dies per package: 1
Jun 21 05:44:37.854308 kernel: CPU topo: Max. threads per core: 1
Jun 21 05:44:37.854315 kernel: CPU topo: Num. cores per package: 2
Jun 21 05:44:37.854321 kernel: CPU topo: Num. threads per package: 2
Jun 21 05:44:37.854327 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jun 21 05:44:37.854333 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 21 05:44:37.854340 kernel: kvm-guest: KVM setup pv remote TLB flush
Jun 21 05:44:37.854346 kernel: kvm-guest: setup PV sched yield
Jun 21 05:44:37.854352 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jun 21 05:44:37.854358 kernel: Booting paravirtualized kernel on KVM
Jun 21 05:44:37.854367 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 21 05:44:37.854373 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 21 05:44:37.854380 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jun 21 05:44:37.854386 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jun 21 05:44:37.854392 kernel: pcpu-alloc: [0] 0 1
Jun 21 05:44:37.854398 kernel: kvm-guest: PV spinlocks enabled
Jun 21 05:44:37.854405 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 21 05:44:37.854412 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1
Jun 21 05:44:37.854421 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 21 05:44:37.854427 kernel: random: crng init done
Jun 21 05:44:37.854433 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 21 05:44:37.854440 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 21 05:44:37.854446 kernel: Fallback order for Node 0: 0
Jun 21 05:44:37.854452 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jun 21 05:44:37.854459 kernel: Policy zone: Normal
Jun 21 05:44:37.854465 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 21 05:44:37.854471 kernel: software IO TLB: area num 2.
Jun 21 05:44:37.854480 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 21 05:44:37.854486 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 21 05:44:37.854492 kernel: ftrace: allocated 157 pages with 5 groups
Jun 21 05:44:37.854499 kernel: Dynamic Preempt: voluntary
Jun 21 05:44:37.854505 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 21 05:44:37.854512 kernel: rcu: RCU event tracing is enabled.
Jun 21 05:44:37.854519 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 21 05:44:37.854525 kernel: Trampoline variant of Tasks RCU enabled.
Jun 21 05:44:37.854532 kernel: Rude variant of Tasks RCU enabled.
Jun 21 05:44:37.854540 kernel: Tracing variant of Tasks RCU enabled.
Jun 21 05:44:37.854547 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 21 05:44:37.854553 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 21 05:44:37.854559 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 21 05:44:37.854572 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 21 05:44:37.854580 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 21 05:44:37.854587 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jun 21 05:44:37.854593 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 21 05:44:37.854600 kernel: Console: colour VGA+ 80x25
Jun 21 05:44:37.854607 kernel: printk: legacy console [tty0] enabled
Jun 21 05:44:37.854613 kernel: printk: legacy console [ttyS0] enabled
Jun 21 05:44:37.854620 kernel: ACPI: Core revision 20240827
Jun 21 05:44:37.854628 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 21 05:44:37.854635 kernel: APIC: Switch to symmetric I/O mode setup
Jun 21 05:44:37.854642 kernel: x2apic enabled
Jun 21 05:44:37.854648 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 21 05:44:37.854657 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jun 21 05:44:37.854691 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jun 21 05:44:37.854700 kernel: kvm-guest: setup PV IPIs
Jun 21 05:44:37.854706 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 21 05:44:37.854713 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jun 21 05:44:37.854720 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Jun 21 05:44:37.854727 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 21 05:44:37.854733 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jun 21 05:44:37.854740 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jun 21 05:44:37.854749 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 21 05:44:37.854756 kernel: Spectre V2 : Mitigation: Retpolines
Jun 21 05:44:37.854762 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 21 05:44:37.854769 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jun 21 05:44:37.854776 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 21 05:44:37.854782 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 21 05:44:37.854789 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jun 21 05:44:37.854796 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jun 21 05:44:37.854803 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jun 21 05:44:37.854812 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 21 05:44:37.854818 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 21 05:44:37.854825 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 21 05:44:37.854832 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jun 21 05:44:37.854838 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 21 05:44:37.854845 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jun 21 05:44:37.854852 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jun 21 05:44:37.854858 kernel: Freeing SMP alternatives memory: 32K
Jun 21 05:44:37.854867 kernel: pid_max: default: 32768 minimum: 301
Jun 21 05:44:37.854874 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 21 05:44:37.854880 kernel: landlock: Up and running.
Jun 21 05:44:37.854887 kernel: SELinux: Initializing.
Jun 21 05:44:37.854893 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 21 05:44:37.854900 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 21 05:44:37.854907 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jun 21 05:44:37.854914 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jun 21 05:44:37.854920 kernel: ... version: 0
Jun 21 05:44:37.854928 kernel: ... bit width: 48
Jun 21 05:44:37.854935 kernel: ... generic registers: 6
Jun 21 05:44:37.854942 kernel: ... value mask: 0000ffffffffffff
Jun 21 05:44:37.854948 kernel: ... max period: 00007fffffffffff
Jun 21 05:44:37.854955 kernel: ... fixed-purpose events: 0
Jun 21 05:44:37.854961 kernel: ... event mask: 000000000000003f
Jun 21 05:44:37.854968 kernel: signal: max sigframe size: 3376
Jun 21 05:44:37.854975 kernel: rcu: Hierarchical SRCU implementation.
Jun 21 05:44:37.854981 kernel: rcu: Max phase no-delay instances is 400.
Jun 21 05:44:37.854988 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 21 05:44:37.854996 kernel: smp: Bringing up secondary CPUs ...
Jun 21 05:44:37.855003 kernel: smpboot: x86: Booting SMP configuration:
Jun 21 05:44:37.855009 kernel: .... node #0, CPUs: #1
Jun 21 05:44:37.855016 kernel: smp: Brought up 1 node, 2 CPUs
Jun 21 05:44:37.855022 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Jun 21 05:44:37.855029 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 227296K reserved, 0K cma-reserved)
Jun 21 05:44:37.855036 kernel: devtmpfs: initialized
Jun 21 05:44:37.855043 kernel: x86/mm: Memory block size: 128MB
Jun 21 05:44:37.855049 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 21 05:44:37.855058 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 21 05:44:37.855065 kernel: pinctrl core: initialized pinctrl subsystem
Jun 21 05:44:37.855073 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 21 05:44:37.855080 kernel: audit: initializing netlink subsys (disabled)
Jun 21 05:44:37.855086 kernel: audit: type=2000 audit(1750484675.616:1): state=initialized audit_enabled=0 res=1
Jun 21 05:44:37.855093 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 21 05:44:37.855100 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 21 05:44:37.855106 kernel: cpuidle: using governor menu
Jun 21 05:44:37.855114 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 21 05:44:37.855121 kernel: dca service started, version 1.12.1
Jun 21 05:44:37.855128 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jun 21 05:44:37.855134 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jun 21 05:44:37.855141 kernel: PCI: Using configuration type 1 for base access
Jun 21 05:44:37.855148 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 21 05:44:37.855154 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 21 05:44:37.855161 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 21 05:44:37.855168 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 21 05:44:37.855176 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 21 05:44:37.855183 kernel: ACPI: Added _OSI(Module Device)
Jun 21 05:44:37.855189 kernel: ACPI: Added _OSI(Processor Device)
Jun 21 05:44:37.855196 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 21 05:44:37.855203 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 21 05:44:37.855209 kernel: ACPI: Interpreter enabled
Jun 21 05:44:37.855216 kernel: ACPI: PM: (supports S0 S3 S5)
Jun 21 05:44:37.855222 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 21 05:44:37.855229 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 21 05:44:37.855238 kernel: PCI: Using E820 reservations for host bridge windows
Jun 21 05:44:37.855245 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jun 21 05:44:37.855252 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 21 05:44:37.855418 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 21 05:44:37.855532 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jun 21 05:44:37.855640 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jun 21 05:44:37.855650 kernel: PCI host bridge to bus 0000:00
Jun 21 05:44:37.855800 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 21 05:44:37.855907 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 21 05:44:37.856005 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 21 05:44:37.856100 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jun 21 05:44:37.856203 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jun 21 05:44:37.856319 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jun 21 05:44:37.856417 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 21 05:44:37.857783 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jun 21 05:44:37.857919 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jun 21 05:44:37.858050 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jun 21 05:44:37.858175 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jun 21 05:44:37.858283 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jun 21 05:44:37.858388 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 21 05:44:37.858509 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jun 21 05:44:37.858657 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jun 21 05:44:37.859908 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jun 21 05:44:37.860023 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jun 21 05:44:37.860141 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jun 21 05:44:37.860248 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jun 21 05:44:37.860354 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jun 21 05:44:37.860498 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jun 21 05:44:37.860797 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jun 21 05:44:37.860929 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jun 21 05:44:37.861605 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jun 21 05:44:37.861917 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jun 21 05:44:37.862034 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jun 21 05:44:37.862147 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jun 21 05:44:37.862272 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jun 21 05:44:37.862385 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jun 21 05:44:37.862395 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 21 05:44:37.862402 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 21 05:44:37.862408 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 21 05:44:37.862415 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 21 05:44:37.862422 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jun 21 05:44:37.862428 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jun 21 05:44:37.862438 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jun 21 05:44:37.862444 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jun 21 05:44:37.862451 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jun 21 05:44:37.862458 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jun 21 05:44:37.862464 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jun 21 05:44:37.862471 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jun 21 05:44:37.862477 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jun 21 05:44:37.862484 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jun 21 05:44:37.862490 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jun 21 05:44:37.862499 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jun 21 05:44:37.862505 kernel: iommu: Default domain type: Translated
Jun 21 05:44:37.862512 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 21 05:44:37.862519 kernel: PCI: Using ACPI for IRQ routing
Jun 21 05:44:37.862525 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 21 05:44:37.862532 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jun 21 05:44:37.862539 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jun 21 05:44:37.863819 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jun 21 05:44:37.863946 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jun 21 05:44:37.864994 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 21 05:44:37.865030 kernel: vgaarb: loaded
Jun 21 05:44:37.865039 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 21 05:44:37.865047 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 21 05:44:37.865054 kernel: clocksource: Switched to clocksource kvm-clock
Jun 21 05:44:37.865061 kernel: VFS: Disk quotas dquot_6.6.0
Jun 21 05:44:37.865069 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 21 05:44:37.865076 kernel: pnp: PnP ACPI init
Jun 21 05:44:37.865219 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jun 21 05:44:37.865232 kernel: pnp: PnP ACPI: found 5 devices
Jun 21 05:44:37.865240 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 21 05:44:37.865247 kernel: NET: Registered PF_INET protocol family
Jun 21 05:44:37.865254 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 21 05:44:37.865261 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 21 05:44:37.865268 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 21 05:44:37.865275 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 21 05:44:37.865285 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 21 05:44:37.865292 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 21 05:44:37.865299 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 21 05:44:37.865306 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 21 05:44:37.865313 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 21 05:44:37.865321 kernel: NET: Registered PF_XDP protocol family
Jun 21 05:44:37.865438 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 21 05:44:37.865544 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 21 05:44:37.865647 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 21 05:44:37.865784 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jun 21 05:44:37.865890 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jun 21 05:44:37.865994 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jun 21 05:44:37.866003 kernel: PCI: CLS 0 bytes, default 64
Jun 21 05:44:37.866010 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 21 05:44:37.866018 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jun 21 05:44:37.866025 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jun 21 05:44:37.866032 kernel: Initialise system trusted keyrings
Jun 21 05:44:37.866043 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 21 05:44:37.866050 kernel: Key type asymmetric registered
Jun 21 05:44:37.866057 kernel: Asymmetric key parser 'x509' registered
Jun 21 05:44:37.866064 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 21 05:44:37.866071 kernel: io scheduler mq-deadline registered
Jun 21 05:44:37.866078 kernel: io scheduler kyber registered
Jun 21 05:44:37.866084 kernel: io scheduler bfq registered
Jun 21 05:44:37.866092 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 21 05:44:37.866099 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jun 21 05:44:37.866107 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jun 21 05:44:37.866117 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 21 05:44:37.866124 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 21 05:44:37.866131 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 21 05:44:37.866138 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 21 05:44:37.866145 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 21 05:44:37.866152 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 21 05:44:37.866269 kernel: rtc_cmos 00:03: RTC can wake from S4
Jun 21 05:44:37.866529 kernel: rtc_cmos 00:03: registered as rtc0
Jun 21 05:44:37.866639 kernel: rtc_cmos 00:03: setting system clock to 2025-06-21T05:44:37 UTC (1750484677)
Jun 21 05:44:37.873896 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jun 21 05:44:37.873913 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 21 05:44:37.873921 kernel: NET: Registered PF_INET6 protocol family
Jun 21 05:44:37.873929 kernel: Segment Routing with IPv6
Jun 21 05:44:37.873936 kernel: In-situ OAM (IOAM) with IPv6
Jun 21 05:44:37.873943 kernel: NET: Registered PF_PACKET protocol family
Jun 21 05:44:37.873950 kernel: Key type dns_resolver registered
Jun 21 05:44:37.873961 kernel: IPI shorthand broadcast: enabled
Jun 21 05:44:37.873969 kernel: sched_clock: Marking stable (2724004623, 218965006)->(2978003711, -35034082)
Jun 21 05:44:37.873976 kernel: registered taskstats version 1
Jun 21 05:44:37.873983 kernel: Loading compiled-in X.509 certificates
Jun 21 05:44:37.873990 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: ec4617d162e00e1890f71f252cdf44036a7b66f7'
Jun 21 05:44:37.873997 kernel: Demotion targets for Node 0: null
Jun 21 05:44:37.874003 kernel: Key type .fscrypt registered
Jun 21 05:44:37.874010 kernel: Key type fscrypt-provisioning registered
Jun 21 05:44:37.874017 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 21 05:44:37.874027 kernel: ima: Allocated hash algorithm: sha1
Jun 21 05:44:37.874034 kernel: ima: No architecture policies found
Jun 21 05:44:37.874041 kernel: clk: Disabling unused clocks
Jun 21 05:44:37.874048 kernel: Warning: unable to open an initial console.
Jun 21 05:44:37.874056 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jun 21 05:44:37.874063 kernel: Write protecting the kernel read-only data: 24576k
Jun 21 05:44:37.874070 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jun 21 05:44:37.874076 kernel: Run /init as init process
Jun 21 05:44:37.874083 kernel: with arguments:
Jun 21 05:44:37.874094 kernel: /init
Jun 21 05:44:37.874100 kernel: with environment:
Jun 21 05:44:37.874107 kernel: HOME=/
Jun 21 05:44:37.874114 kernel: TERM=linux
Jun 21 05:44:37.874121 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 21 05:44:37.874146 systemd[1]: Successfully made /usr/ read-only.
Jun 21 05:44:37.874160 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 21 05:44:37.874169 systemd[1]: Detected virtualization kvm.
Jun 21 05:44:37.874179 systemd[1]: Detected architecture x86-64.
Jun 21 05:44:37.874186 systemd[1]: Running in initrd.
Jun 21 05:44:37.874194 systemd[1]: No hostname configured, using default hostname.
Jun 21 05:44:37.874202 systemd[1]: Hostname set to .
Jun 21 05:44:37.874209 systemd[1]: Initializing machine ID from random generator.
Jun 21 05:44:37.874217 systemd[1]: Queued start job for default target initrd.target.
Jun 21 05:44:37.874226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 05:44:37.874233 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 05:44:37.874244 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 21 05:44:37.874252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 21 05:44:37.874260 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 21 05:44:37.874268 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 21 05:44:37.874277 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 21 05:44:37.874285 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 21 05:44:37.874296 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 21 05:44:37.874303 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 21 05:44:37.874311 systemd[1]: Reached target paths.target - Path Units.
Jun 21 05:44:37.874318 systemd[1]: Reached target slices.target - Slice Units.
Jun 21 05:44:37.874326 systemd[1]: Reached target swap.target - Swaps.
Jun 21 05:44:37.874334 systemd[1]: Reached target timers.target - Timer Units.
Jun 21 05:44:37.874342 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 21 05:44:37.874350 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 21 05:44:37.874358 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 21 05:44:37.874368 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 21 05:44:37.874375 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 21 05:44:37.874383 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 05:44:37.874393 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 05:44:37.874401 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 05:44:37.874409 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 21 05:44:37.874419 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 05:44:37.874427 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 21 05:44:37.874435 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 21 05:44:37.874443 systemd[1]: Starting systemd-fsck-usr.service... Jun 21 05:44:37.874451 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 05:44:37.874458 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 05:44:37.874466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:44:37.874474 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 21 05:44:37.874508 systemd-journald[207]: Collecting audit messages is disabled. Jun 21 05:44:37.874533 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 05:44:37.874542 systemd[1]: Finished systemd-fsck-usr.service. Jun 21 05:44:37.874550 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 05:44:37.874558 systemd-journald[207]: Journal started Jun 21 05:44:37.874576 systemd-journald[207]: Runtime Journal (/run/log/journal/d362d00f758846ef8b50736fd0875ac0) is 8M, max 78.5M, 70.5M free. 
Jun 21 05:44:37.862884 systemd-modules-load[208]: Inserted module 'overlay' Jun 21 05:44:37.894699 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 21 05:44:37.895903 systemd-modules-load[208]: Inserted module 'br_netfilter' Jun 21 05:44:37.935014 kernel: Bridge firewalling registered Jun 21 05:44:37.941928 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 05:44:37.960021 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 05:44:37.961617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:44:37.962379 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 05:44:37.966969 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 21 05:44:37.969782 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 05:44:37.980123 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 05:44:37.982990 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 05:44:37.993320 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:44:38.000455 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 05:44:38.001260 systemd-tmpfiles[229]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 21 05:44:38.004780 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 21 05:44:38.007422 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 05:44:38.009030 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jun 21 05:44:38.013784 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 05:44:38.024117 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 05:44:38.058386 systemd-resolved[246]: Positive Trust Anchors: Jun 21 05:44:38.059083 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 05:44:38.059113 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 05:44:38.064193 systemd-resolved[246]: Defaulting to hostname 'linux'. Jun 21 05:44:38.065164 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 05:44:38.065987 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 05:44:38.108703 kernel: SCSI subsystem initialized Jun 21 05:44:38.117731 kernel: Loading iSCSI transport class v2.0-870. 
Jun 21 05:44:38.127702 kernel: iscsi: registered transport (tcp) Jun 21 05:44:38.147978 kernel: iscsi: registered transport (qla4xxx) Jun 21 05:44:38.148015 kernel: QLogic iSCSI HBA Driver Jun 21 05:44:38.166369 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 05:44:38.179297 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 05:44:38.182027 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 05:44:38.230641 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 21 05:44:38.232479 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 21 05:44:38.281690 kernel: raid6: avx2x4 gen() 33011 MB/s Jun 21 05:44:38.299689 kernel: raid6: avx2x2 gen() 29780 MB/s Jun 21 05:44:38.318280 kernel: raid6: avx2x1 gen() 21877 MB/s Jun 21 05:44:38.318300 kernel: raid6: using algorithm avx2x4 gen() 33011 MB/s Jun 21 05:44:38.337287 kernel: raid6: .... xor() 5039 MB/s, rmw enabled Jun 21 05:44:38.337317 kernel: raid6: using avx2x2 recovery algorithm Jun 21 05:44:38.355693 kernel: xor: automatically using best checksumming function avx Jun 21 05:44:38.488871 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 05:44:38.497098 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 21 05:44:38.499232 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 05:44:38.520656 systemd-udevd[455]: Using default interface naming scheme 'v255'. Jun 21 05:44:38.525893 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 05:44:38.528623 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 05:44:38.547095 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jun 21 05:44:38.571975 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jun 21 05:44:38.573631 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 05:44:38.637150 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 05:44:38.640790 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 21 05:44:38.827700 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Jun 21 05:44:38.833741 kernel: scsi host0: Virtio SCSI HBA Jun 21 05:44:38.836126 kernel: cryptd: max_cpu_qlen set to 1000 Jun 21 05:44:38.846707 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jun 21 05:44:38.867026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 05:44:38.867142 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:44:38.869345 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:44:38.872649 kernel: libata version 3.00 loaded. Jun 21 05:44:38.870590 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:44:38.875770 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jun 21 05:44:38.885053 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jun 21 05:44:38.886989 kernel: AES CTR mode by8 optimization enabled Jun 21 05:44:38.899684 kernel: ahci 0000:00:1f.2: version 3.0 Jun 21 05:44:38.899871 kernel: sd 0:0:0:0: Power-on or device reset occurred Jun 21 05:44:38.900034 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Jun 21 05:44:38.909344 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 21 05:44:38.909558 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jun 21 05:44:38.909572 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jun 21 05:44:38.909743 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jun 21 05:44:38.909893 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jun 21 05:44:38.910030 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jun 21 05:44:38.913689 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jun 21 05:44:38.922306 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 21 05:44:38.922330 kernel: GPT:9289727 != 167739391 Jun 21 05:44:38.922341 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 21 05:44:38.922351 kernel: GPT:9289727 != 167739391 Jun 21 05:44:38.922360 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jun 21 05:44:38.922374 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 21 05:44:38.922383 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 21 05:44:38.936686 kernel: scsi host1: ahci Jun 21 05:44:38.938689 kernel: scsi host2: ahci Jun 21 05:44:38.939682 kernel: scsi host3: ahci Jun 21 05:44:38.940681 kernel: scsi host4: ahci Jun 21 05:44:38.944756 kernel: scsi host5: ahci Jun 21 05:44:38.944917 kernel: scsi host6: ahci Jun 21 05:44:38.945050 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0 Jun 21 05:44:38.945061 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0 Jun 21 05:44:38.945071 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0 Jun 21 05:44:38.945081 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0 Jun 21 05:44:38.945090 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0 Jun 21 05:44:38.945099 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0 Jun 21 05:44:39.016115 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jun 21 05:44:39.017049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:44:39.049337 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jun 21 05:44:39.057511 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jun 21 05:44:39.064142 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jun 21 05:44:39.064755 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jun 21 05:44:39.067783 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 21 05:44:39.081588 disk-uuid[624]: Primary Header is updated. 
Jun 21 05:44:39.081588 disk-uuid[624]: Secondary Entries is updated. Jun 21 05:44:39.081588 disk-uuid[624]: Secondary Header is updated. Jun 21 05:44:39.097691 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 21 05:44:39.109759 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 21 05:44:39.258292 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jun 21 05:44:39.258338 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jun 21 05:44:39.258352 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jun 21 05:44:39.258361 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jun 21 05:44:39.258370 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jun 21 05:44:39.258378 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jun 21 05:44:39.270842 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 21 05:44:39.271971 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 05:44:39.272806 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 05:44:39.274080 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 05:44:39.276765 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 05:44:39.291521 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 21 05:44:40.114824 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 21 05:44:40.114868 disk-uuid[625]: The operation has completed successfully. Jun 21 05:44:40.162533 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 05:44:40.162652 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 21 05:44:40.185441 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 05:44:40.205528 sh[652]: Success Jun 21 05:44:40.223252 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jun 21 05:44:40.223285 kernel: device-mapper: uevent: version 1.0.3 Jun 21 05:44:40.223903 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 05:44:40.234768 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jun 21 05:44:40.275411 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 05:44:40.278728 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 21 05:44:40.295623 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 21 05:44:40.306153 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 05:44:40.306176 kernel: BTRFS: device fsid bfb8168c-5be0-428c-83e7-820ccaf1f8e9 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (664) Jun 21 05:44:40.312904 kernel: BTRFS info (device dm-0): first mount of filesystem bfb8168c-5be0-428c-83e7-820ccaf1f8e9 Jun 21 05:44:40.312927 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:44:40.312939 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 05:44:40.320522 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 05:44:40.321376 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 21 05:44:40.322257 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 21 05:44:40.322887 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 21 05:44:40.325774 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jun 21 05:44:40.355693 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (697) Jun 21 05:44:40.362404 kernel: BTRFS info (device sda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:44:40.362427 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:44:40.362439 kernel: BTRFS info (device sda6): using free-space-tree Jun 21 05:44:40.370690 kernel: BTRFS info (device sda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:44:40.371255 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 05:44:40.373802 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 21 05:44:40.436230 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 05:44:40.439783 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 05:44:40.479138 systemd-networkd[833]: lo: Link UP Jun 21 05:44:40.480062 systemd-networkd[833]: lo: Gained carrier Jun 21 05:44:40.482694 systemd-networkd[833]: Enumeration completed Jun 21 05:44:40.483620 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 05:44:40.482848 ignition[757]: Ignition 2.21.0 Jun 21 05:44:40.483624 systemd-networkd[833]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 21 05:44:40.482855 ignition[757]: Stage: fetch-offline Jun 21 05:44:40.485799 systemd-networkd[833]: eth0: Link UP Jun 21 05:44:40.482896 ignition[757]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:44:40.485803 systemd-networkd[833]: eth0: Gained carrier Jun 21 05:44:40.482905 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jun 21 05:44:40.485811 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 05:44:40.482969 ignition[757]: parsed url from cmdline: "" Jun 21 05:44:40.486112 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 05:44:40.482973 ignition[757]: no config URL provided Jun 21 05:44:40.487245 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 05:44:40.482978 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 05:44:40.489454 systemd[1]: Reached target network.target - Network. Jun 21 05:44:40.482985 ignition[757]: no config at "/usr/lib/ignition/user.ign" Jun 21 05:44:40.491779 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 21 05:44:40.482990 ignition[757]: failed to fetch config: resource requires networking Jun 21 05:44:40.483110 ignition[757]: Ignition finished successfully Jun 21 05:44:40.510474 ignition[842]: Ignition 2.21.0 Jun 21 05:44:40.510489 ignition[842]: Stage: fetch Jun 21 05:44:40.510607 ignition[842]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:44:40.510617 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jun 21 05:44:40.510704 ignition[842]: parsed url from cmdline: "" Jun 21 05:44:40.510708 ignition[842]: no config URL provided Jun 21 05:44:40.510712 ignition[842]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 05:44:40.510721 ignition[842]: no config at "/usr/lib/ignition/user.ign" Jun 21 05:44:40.510756 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #1 Jun 21 05:44:40.510947 ignition[842]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Jun 21 05:44:40.711490 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #2 Jun 21 05:44:40.711652 ignition[842]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Jun 21 05:44:40.941734 systemd-networkd[833]: eth0: DHCPv4 address 172.233.208.28/24, gateway 172.233.208.1 acquired from 23.205.167.145 Jun 21 05:44:41.111831 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #3 Jun 21 05:44:41.203623 ignition[842]: PUT result: OK Jun 21 05:44:41.203752 ignition[842]: GET http://169.254.169.254/v1/user-data: attempt #1 Jun 21 05:44:41.316648 ignition[842]: GET result: OK Jun 21 05:44:41.316974 ignition[842]: parsing config with SHA512: c9293b6bc5d287813bc282e920b9ae4474653564c4e81b232bd5ebb5a21849bf761c48daf96ba9d2fe72aecdb184f5ce89154716480fd94d5cd4cf8605572d4f Jun 21 05:44:41.324227 unknown[842]: fetched base config from "system" Jun 21 05:44:41.324241 unknown[842]: fetched base config from "system" Jun 21 05:44:41.324490 
ignition[842]: fetch: fetch complete Jun 21 05:44:41.324246 unknown[842]: fetched user config from "akamai" Jun 21 05:44:41.324495 ignition[842]: fetch: fetch passed Jun 21 05:44:41.324530 ignition[842]: Ignition finished successfully Jun 21 05:44:41.328023 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 21 05:44:41.329534 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 21 05:44:41.374843 ignition[850]: Ignition 2.21.0 Jun 21 05:44:41.374853 ignition[850]: Stage: kargs Jun 21 05:44:41.374965 ignition[850]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:44:41.374975 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jun 21 05:44:41.375517 ignition[850]: kargs: kargs passed Jun 21 05:44:41.376919 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 21 05:44:41.375557 ignition[850]: Ignition finished successfully Jun 21 05:44:41.379157 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 21 05:44:41.399386 ignition[856]: Ignition 2.21.0 Jun 21 05:44:41.399396 ignition[856]: Stage: disks Jun 21 05:44:41.399509 ignition[856]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:44:41.402084 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 05:44:41.399518 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jun 21 05:44:41.403079 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 05:44:41.400404 ignition[856]: disks: disks passed Jun 21 05:44:41.403968 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 05:44:41.400442 ignition[856]: Ignition finished successfully Jun 21 05:44:41.405185 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 05:44:41.406352 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 05:44:41.407310 systemd[1]: Reached target basic.target - Basic System. 
Jun 21 05:44:41.409224 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 21 05:44:41.434682 systemd-fsck[865]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 21 05:44:41.437264 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 21 05:44:41.438834 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 21 05:44:41.542693 kernel: EXT4-fs (sda9): mounted filesystem 6d18c974-0fd6-4e4a-98cf-62524fcf9e99 r/w with ordered data mode. Quota mode: none. Jun 21 05:44:41.543537 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 05:44:41.544434 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 05:44:41.546268 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 05:44:41.548745 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 05:44:41.550019 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 21 05:44:41.550059 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 05:44:41.550082 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 05:44:41.555917 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 05:44:41.558108 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jun 21 05:44:41.566927 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (873) Jun 21 05:44:41.566957 kernel: BTRFS info (device sda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:44:41.570985 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:44:41.571007 kernel: BTRFS info (device sda6): using free-space-tree Jun 21 05:44:41.576795 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 05:44:41.587795 systemd-networkd[833]: eth0: Gained IPv6LL Jun 21 05:44:41.605142 initrd-setup-root[897]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 05:44:41.609544 initrd-setup-root[904]: cut: /sysroot/etc/group: No such file or directory Jun 21 05:44:41.613568 initrd-setup-root[911]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 05:44:41.617204 initrd-setup-root[918]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 05:44:41.696049 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 21 05:44:41.698448 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 21 05:44:41.700165 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 21 05:44:41.711493 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 21 05:44:41.714005 kernel: BTRFS info (device sda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:44:41.727032 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 21 05:44:41.737061 ignition[986]: INFO : Ignition 2.21.0 Jun 21 05:44:41.737061 ignition[986]: INFO : Stage: mount Jun 21 05:44:41.738221 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 05:44:41.738221 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jun 21 05:44:41.738221 ignition[986]: INFO : mount: mount passed Jun 21 05:44:41.738221 ignition[986]: INFO : Ignition finished successfully Jun 21 05:44:41.739955 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 21 05:44:41.741390 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 05:44:42.545067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 05:44:42.568698 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (999) Jun 21 05:44:42.571807 kernel: BTRFS info (device sda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:44:42.571869 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:44:42.574569 kernel: BTRFS info (device sda6): using free-space-tree Jun 21 05:44:42.579171 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 21 05:44:42.610331 ignition[1015]: INFO : Ignition 2.21.0 Jun 21 05:44:42.610331 ignition[1015]: INFO : Stage: files Jun 21 05:44:42.611682 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 05:44:42.611682 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jun 21 05:44:42.611682 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping Jun 21 05:44:42.614027 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 05:44:42.614027 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 05:44:42.614027 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 05:44:42.616403 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 05:44:42.616403 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 05:44:42.616403 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 21 05:44:42.616403 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 21 05:44:42.614924 unknown[1015]: wrote ssh authorized keys file for user: core Jun 21 05:44:42.814437 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 21 05:44:42.968988 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 21 05:44:42.968988 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 21 05:44:42.971017 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 21 05:44:43.223203 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 21 05:44:43.277031 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 21 05:44:43.277914 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 21 05:44:43.277914 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 21 05:44:43.277914 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 21 05:44:43.277914 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 21 05:44:43.277914 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 05:44:43.284179 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 05:44:43.284179 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 05:44:43.284179 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 05:44:43.284179 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 05:44:43.284179 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 05:44:43.284179 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 21 05:44:43.284179 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 21 05:44:43.284179 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 21 05:44:43.284179 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jun 21 05:44:43.655118 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 21 05:44:44.030020 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 21 05:44:44.030020 ignition[1015]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 21 05:44:44.032265 ignition[1015]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 21 05:44:44.034867 ignition[1015]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 21 05:44:44.034867 ignition[1015]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 21 05:44:44.034867 ignition[1015]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jun 21 05:44:44.034867 ignition[1015]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jun 21 05:44:44.034867 ignition[1015]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jun 21 05:44:44.034867 ignition[1015]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jun 21 05:44:44.034867 ignition[1015]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jun 21 05:44:44.034867 ignition[1015]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jun 21 05:44:44.034867 ignition[1015]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 21 05:44:44.034867 ignition[1015]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 21 05:44:44.034867 ignition[1015]: INFO : files: files passed
Jun 21 05:44:44.034867 ignition[1015]: INFO : Ignition finished successfully
Jun 21 05:44:44.036389 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 21 05:44:44.038246 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 21 05:44:44.043098 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 21 05:44:44.051400 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 21 05:44:44.052112 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 21 05:44:44.061002 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 21 05:44:44.062063 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 21 05:44:44.063042 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 21 05:44:44.064033 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 21 05:44:44.065272 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 21 05:44:44.067065 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 21 05:44:44.130733 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 21 05:44:44.130869 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 21 05:44:44.132483 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 21 05:44:44.133388 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 21 05:44:44.134696 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 21 05:44:44.135407 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 21 05:44:44.158143 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 21 05:44:44.160157 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 21 05:44:44.173848 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 21 05:44:44.175338 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 21 05:44:44.176154 systemd[1]: Stopped target timers.target - Timer Units.
Jun 21 05:44:44.177772 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 21 05:44:44.177930 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 21 05:44:44.179088 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 21 05:44:44.179892 systemd[1]: Stopped target basic.target - Basic System.
Jun 21 05:44:44.181113 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 21 05:44:44.182210 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 21 05:44:44.183314 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 21 05:44:44.184057 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 21 05:44:44.185242 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 21 05:44:44.186523 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 21 05:44:44.187865 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 21 05:44:44.189094 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 21 05:44:44.190346 systemd[1]: Stopped target swap.target - Swaps.
Jun 21 05:44:44.191572 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 21 05:44:44.191738 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 21 05:44:44.193220 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 21 05:44:44.194044 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 21 05:44:44.195135 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 21 05:44:44.195327 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 05:44:44.196324 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 21 05:44:44.196453 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 21 05:44:44.198239 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 21 05:44:44.198352 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 21 05:44:44.199133 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 21 05:44:44.199263 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 21 05:44:44.202752 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 21 05:44:44.204414 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 21 05:44:44.206804 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 21 05:44:44.206935 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 21 05:44:44.208980 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 21 05:44:44.209097 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 21 05:44:44.217756 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 21 05:44:44.217877 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 21 05:44:44.230677 ignition[1070]: INFO : Ignition 2.21.0
Jun 21 05:44:44.230677 ignition[1070]: INFO : Stage: umount
Jun 21 05:44:44.233280 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 21 05:44:44.233280 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jun 21 05:44:44.233280 ignition[1070]: INFO : umount: umount passed
Jun 21 05:44:44.233280 ignition[1070]: INFO : Ignition finished successfully
Jun 21 05:44:44.233136 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 21 05:44:44.233256 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 21 05:44:44.234313 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 21 05:44:44.234390 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 21 05:44:44.256693 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 21 05:44:44.256753 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 21 05:44:44.257985 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 21 05:44:44.258033 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 21 05:44:44.259052 systemd[1]: Stopped target network.target - Network.
Jun 21 05:44:44.260080 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 21 05:44:44.260132 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 21 05:44:44.261233 systemd[1]: Stopped target paths.target - Path Units.
Jun 21 05:44:44.262292 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 21 05:44:44.266713 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 05:44:44.267478 systemd[1]: Stopped target slices.target - Slice Units.
Jun 21 05:44:44.268724 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 21 05:44:44.269836 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 21 05:44:44.269882 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 21 05:44:44.271008 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 21 05:44:44.271045 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 21 05:44:44.272337 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 21 05:44:44.272389 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 21 05:44:44.273415 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 21 05:44:44.273464 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 21 05:44:44.274700 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 21 05:44:44.275829 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 21 05:44:44.278127 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 21 05:44:44.278621 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 21 05:44:44.280315 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 21 05:44:44.281512 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 21 05:44:44.281623 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 21 05:44:44.285741 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 21 05:44:44.286470 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 21 05:44:44.286552 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 21 05:44:44.288120 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 21 05:44:44.288174 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 21 05:44:44.291584 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 21 05:44:44.291906 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 21 05:44:44.292051 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 21 05:44:44.293597 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 21 05:44:44.294177 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 21 05:44:44.295443 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 21 05:44:44.295487 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 21 05:44:44.297508 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 21 05:44:44.299021 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 21 05:44:44.299076 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 21 05:44:44.301073 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 21 05:44:44.301131 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 21 05:44:44.303192 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 21 05:44:44.303247 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 21 05:44:44.304339 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 21 05:44:44.308885 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 21 05:44:44.322438 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 21 05:44:44.323174 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 21 05:44:44.325352 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 21 05:44:44.325540 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 21 05:44:44.326882 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 21 05:44:44.326932 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 21 05:44:44.328089 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 21 05:44:44.328128 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 21 05:44:44.329295 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 21 05:44:44.329344 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 21 05:44:44.331105 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 21 05:44:44.331152 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 21 05:44:44.332445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 21 05:44:44.332496 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 21 05:44:44.335768 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 21 05:44:44.336920 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 21 05:44:44.336974 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 21 05:44:44.339798 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 21 05:44:44.339852 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 21 05:44:44.341547 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 21 05:44:44.341594 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 21 05:44:44.343369 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 21 05:44:44.343415 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 21 05:44:44.344998 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 21 05:44:44.345047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 21 05:44:44.352859 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 21 05:44:44.352974 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 21 05:44:44.354464 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 21 05:44:44.356829 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 21 05:44:44.377896 systemd[1]: Switching root.
Jun 21 05:44:44.411051 systemd-journald[207]: Journal stopped
Jun 21 05:44:45.544999 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Jun 21 05:44:45.545028 kernel: SELinux: policy capability network_peer_controls=1
Jun 21 05:44:45.545041 kernel: SELinux: policy capability open_perms=1
Jun 21 05:44:45.545053 kernel: SELinux: policy capability extended_socket_class=1
Jun 21 05:44:45.545062 kernel: SELinux: policy capability always_check_network=0
Jun 21 05:44:45.545072 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 21 05:44:45.545082 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 21 05:44:45.545091 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 21 05:44:45.545100 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 21 05:44:45.545110 kernel: SELinux: policy capability userspace_initial_context=0
Jun 21 05:44:45.545121 kernel: audit: type=1403 audit(1750484684.600:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 21 05:44:45.545131 systemd[1]: Successfully loaded SELinux policy in 81.031ms.
Jun 21 05:44:45.545142 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.793ms.
Jun 21 05:44:45.545154 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 21 05:44:45.545165 systemd[1]: Detected virtualization kvm.
Jun 21 05:44:45.545177 systemd[1]: Detected architecture x86-64.
Jun 21 05:44:45.545187 systemd[1]: Detected first boot.
Jun 21 05:44:45.545198 systemd[1]: Initializing machine ID from random generator.
Jun 21 05:44:45.545208 zram_generator::config[1114]: No configuration found.
Jun 21 05:44:45.545219 kernel: Guest personality initialized and is inactive
Jun 21 05:44:45.545228 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 21 05:44:45.545238 kernel: Initialized host personality
Jun 21 05:44:45.545250 kernel: NET: Registered PF_VSOCK protocol family
Jun 21 05:44:45.545260 systemd[1]: Populated /etc with preset unit settings.
Jun 21 05:44:45.545271 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 21 05:44:45.545282 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 21 05:44:45.545293 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 21 05:44:45.545303 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 21 05:44:45.545314 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 21 05:44:45.545326 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 21 05:44:45.545336 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 21 05:44:45.545347 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 21 05:44:45.545358 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 21 05:44:45.545368 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 21 05:44:45.545379 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 21 05:44:45.545389 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 21 05:44:45.545401 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 05:44:45.545411 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 05:44:45.545421 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 21 05:44:45.545431 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 21 05:44:45.545443 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 21 05:44:45.545454 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 21 05:44:45.545464 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 21 05:44:45.545474 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 21 05:44:45.545486 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 21 05:44:45.545496 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 21 05:44:45.545506 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 21 05:44:45.545516 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 21 05:44:45.545526 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 21 05:44:45.545536 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 21 05:44:45.545548 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 21 05:44:45.545558 systemd[1]: Reached target slices.target - Slice Units.
Jun 21 05:44:45.545570 systemd[1]: Reached target swap.target - Swaps.
Jun 21 05:44:45.545580 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 21 05:44:45.545590 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 21 05:44:45.545600 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 21 05:44:45.545610 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 21 05:44:45.545623 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 21 05:44:45.545633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 21 05:44:45.545643 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 21 05:44:45.545653 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 21 05:44:45.547447 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 21 05:44:45.547477 systemd[1]: Mounting media.mount - External Media Directory...
Jun 21 05:44:45.547490 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 21 05:44:45.547501 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 21 05:44:45.547515 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 21 05:44:45.547525 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 21 05:44:45.547536 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 21 05:44:45.547547 systemd[1]: Reached target machines.target - Containers.
Jun 21 05:44:45.547557 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 21 05:44:45.547568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 21 05:44:45.547578 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 21 05:44:45.547589 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 21 05:44:45.547602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 21 05:44:45.547613 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 21 05:44:45.547623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 21 05:44:45.547633 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 21 05:44:45.547644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 21 05:44:45.547654 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 21 05:44:45.547694 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 21 05:44:45.547705 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 21 05:44:45.547716 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 21 05:44:45.547728 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 21 05:44:45.547740 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 21 05:44:45.547751 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 21 05:44:45.547761 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 21 05:44:45.547773 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 21 05:44:45.547783 kernel: loop: module loaded
Jun 21 05:44:45.547793 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 21 05:44:45.547804 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 21 05:44:45.547816 kernel: fuse: init (API version 7.41)
Jun 21 05:44:45.547826 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 21 05:44:45.547836 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 21 05:44:45.547847 systemd[1]: Stopped verity-setup.service.
Jun 21 05:44:45.547858 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 21 05:44:45.547869 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 21 05:44:45.547879 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 21 05:44:45.547889 kernel: ACPI: bus type drm_connector registered
Jun 21 05:44:45.547901 systemd[1]: Mounted media.mount - External Media Directory.
Jun 21 05:44:45.547911 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 21 05:44:45.547922 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 21 05:44:45.547932 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 21 05:44:45.547942 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 21 05:44:45.547952 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 21 05:44:45.547963 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 21 05:44:45.547973 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 21 05:44:45.548013 systemd-journald[1205]: Collecting audit messages is disabled.
Jun 21 05:44:45.548036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 21 05:44:45.548047 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 21 05:44:45.548057 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 21 05:44:45.548067 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 21 05:44:45.548079 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 21 05:44:45.548090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 21 05:44:45.548100 systemd-journald[1205]: Journal started
Jun 21 05:44:45.548118 systemd-journald[1205]: Runtime Journal (/run/log/journal/a9fc4d7207d14972882067791ff6ba83) is 8M, max 78.5M, 70.5M free.
Jun 21 05:44:45.187258 systemd[1]: Queued start job for default target multi-user.target.
Jun 21 05:44:45.200906 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jun 21 05:44:45.201488 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 21 05:44:45.552186 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 21 05:44:45.553446 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 21 05:44:45.553824 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 21 05:44:45.554750 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 21 05:44:45.555069 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 21 05:44:45.556104 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 21 05:44:45.557120 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 21 05:44:45.558164 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 21 05:44:45.559259 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 21 05:44:45.576371 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 21 05:44:45.578788 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 21 05:44:45.581533 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 21 05:44:45.582188 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 21 05:44:45.582261 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 21 05:44:45.584628 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 21 05:44:45.588879 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 21 05:44:45.589897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 21 05:44:45.596957 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 21 05:44:45.606785 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 21 05:44:45.607392 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 21 05:44:45.611188 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 21 05:44:45.612735 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 21 05:44:45.614941 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 21 05:44:45.618687 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 21 05:44:45.620835 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 21 05:44:45.623571 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 21 05:44:45.624858 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 21 05:44:45.653691 kernel: loop0: detected capacity change from 0 to 221472
Jun 21 05:44:45.654190 systemd-journald[1205]: Time spent on flushing to /var/log/journal/a9fc4d7207d14972882067791ff6ba83 is 41.775ms for 1000 entries.
Jun 21 05:44:45.654190 systemd-journald[1205]: System Journal (/var/log/journal/a9fc4d7207d14972882067791ff6ba83) is 8M, max 195.6M, 187.6M free.
Jun 21 05:44:45.708787 systemd-journald[1205]: Received client request to flush runtime journal.
Jun 21 05:44:45.708822 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 21 05:44:45.666153 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 21 05:44:45.667640 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 21 05:44:45.672426 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 21 05:44:45.676207 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 21 05:44:45.704600 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 21 05:44:45.712282 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 21 05:44:45.717160 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jun 21 05:44:45.717181 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jun 21 05:44:45.729477 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 21 05:44:45.734771 kernel: loop1: detected capacity change from 0 to 8
Jun 21 05:44:45.736788 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 21 05:44:45.739986 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 21 05:44:45.761021 kernel: loop2: detected capacity change from 0 to 113872
Jun 21 05:44:45.796993 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 21 05:44:45.798354 kernel: loop3: detected capacity change from 0 to 146240
Jun 21 05:44:45.804254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 21 05:44:45.852385 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jun 21 05:44:45.852897 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jun 21 05:44:45.854696 kernel: loop4: detected capacity change from 0 to 221472
Jun 21 05:44:45.859447 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 21 05:44:45.881342 kernel: loop5: detected capacity change from 0 to 8
Jun 21 05:44:45.885692 kernel: loop6: detected capacity change from 0 to 113872
Jun 21 05:44:45.903744 kernel: loop7: detected capacity change from 0 to 146240
Jun 21 05:44:45.921354 (sd-merge)[1264]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Jun 21 05:44:45.922050 (sd-merge)[1264]: Merged extensions into '/usr'.
Jun 21 05:44:45.930826 systemd[1]: Reload requested from client PID 1239 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 21 05:44:45.930915 systemd[1]: Reloading...
Jun 21 05:44:46.023699 zram_generator::config[1293]: No configuration found.
Jun 21 05:44:46.142730 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 21 05:44:46.192499 ldconfig[1234]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 21 05:44:46.218304 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 21 05:44:46.218445 systemd[1]: Reloading finished in 286 ms. Jun 21 05:44:46.235288 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 05:44:46.236421 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 05:44:46.248800 systemd[1]: Starting ensure-sysext.service... Jun 21 05:44:46.250739 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 05:44:46.278739 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)... Jun 21 05:44:46.278824 systemd[1]: Reloading... Jun 21 05:44:46.300313 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 05:44:46.300356 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 21 05:44:46.300609 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 21 05:44:46.300875 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 05:44:46.301619 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 05:44:46.301890 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Jun 21 05:44:46.301963 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Jun 21 05:44:46.309954 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 05:44:46.309986 systemd-tmpfiles[1335]: Skipping /boot Jun 21 05:44:46.331857 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot. 
Jun 21 05:44:46.331877 systemd-tmpfiles[1335]: Skipping /boot Jun 21 05:44:46.400692 zram_generator::config[1365]: No configuration found. Jun 21 05:44:46.482685 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:44:46.543862 systemd[1]: Reloading finished in 264 ms. Jun 21 05:44:46.559390 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 21 05:44:46.560373 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 05:44:46.580962 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 05:44:46.584962 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 05:44:46.589875 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 05:44:46.592472 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 05:44:46.596541 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 05:44:46.600176 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 05:44:46.602595 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:44:46.603178 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:44:46.605828 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 05:44:46.609438 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 05:44:46.613006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jun 21 05:44:46.613650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:44:46.613741 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:44:46.613806 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:44:46.616211 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 05:44:46.621882 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:44:46.622044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:44:46.622193 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:44:46.622261 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:44:46.622334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:44:46.624818 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:44:46.624985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:44:46.629165 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jun 21 05:44:46.630795 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:44:46.630867 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:44:46.630963 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:44:46.636308 systemd[1]: Finished ensure-sysext.service. Jun 21 05:44:46.642967 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 21 05:44:46.654060 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 05:44:46.675304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 05:44:46.675498 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 05:44:46.676623 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 05:44:46.677064 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 05:44:46.678272 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 05:44:46.678465 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 05:44:46.682951 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 05:44:46.687974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 05:44:46.688246 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 05:44:46.689500 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jun 21 05:44:46.703642 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 05:44:46.707809 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 05:44:46.710460 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 05:44:46.712222 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 05:44:46.724710 systemd-udevd[1411]: Using default interface naming scheme 'v255'. Jun 21 05:44:46.733870 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 05:44:46.734939 augenrules[1447]: No rules Jun 21 05:44:46.736111 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 05:44:46.740046 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 05:44:46.740334 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 05:44:46.765884 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 05:44:46.770971 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 05:44:46.882371 systemd-resolved[1410]: Positive Trust Anchors: Jun 21 05:44:46.882760 systemd-resolved[1410]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 05:44:46.882792 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 05:44:46.889227 systemd-resolved[1410]: Defaulting to hostname 'linux'. Jun 21 05:44:46.893304 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 05:44:46.894077 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 05:44:46.894901 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 21 05:44:46.895878 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 05:44:46.896852 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 21 05:44:46.898940 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 21 05:44:46.899983 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 21 05:44:46.900771 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 21 05:44:46.901846 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 05:44:46.901940 systemd[1]: Reached target paths.target - Path Units. Jun 21 05:44:46.902714 systemd[1]: Reached target time-set.target - System Time Set. 
Jun 21 05:44:46.903984 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 05:44:46.904699 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 21 05:44:46.905600 systemd[1]: Reached target timers.target - Timer Units. Jun 21 05:44:46.907351 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 05:44:46.912309 systemd-networkd[1461]: lo: Link UP Jun 21 05:44:46.912327 systemd-networkd[1461]: lo: Gained carrier Jun 21 05:44:46.912818 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 21 05:44:46.915130 systemd-networkd[1461]: Enumeration completed Jun 21 05:44:46.915692 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 05:44:46.917107 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 05:44:46.917974 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 05:44:46.918973 systemd-timesyncd[1425]: No network connectivity, watching for changes. Jun 21 05:44:46.929417 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 05:44:46.930385 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 05:44:46.932759 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 05:44:46.933507 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 05:44:46.935286 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 21 05:44:46.935314 systemd[1]: Reached target network.target - Network. Jun 21 05:44:46.936802 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 05:44:46.937359 systemd[1]: Reached target basic.target - Basic System. Jun 21 05:44:46.938826 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jun 21 05:44:46.938854 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 21 05:44:46.939897 systemd[1]: Starting containerd.service - containerd container runtime... Jun 21 05:44:46.943860 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 21 05:44:46.976656 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 21 05:44:46.983282 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 05:44:46.985693 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 05:44:46.989277 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 21 05:44:46.989882 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 05:44:46.994853 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 21 05:44:47.010418 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 05:44:47.013260 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 05:44:47.017795 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 21 05:44:47.023636 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 05:44:47.036835 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 21 05:44:47.042630 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Refreshing passwd entry cache Jun 21 05:44:47.042635 oslogin_cache_refresh[1504]: Refreshing passwd entry cache Jun 21 05:44:47.044349 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 05:44:47.047484 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jun 21 05:44:47.049473 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 21 05:44:47.050484 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 21 05:44:47.051817 jq[1502]: false Jun 21 05:44:47.054460 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 05:44:47.066835 extend-filesystems[1503]: Found /dev/sda6 Jun 21 05:44:47.077174 extend-filesystems[1503]: Found /dev/sda9 Jun 21 05:44:47.077174 extend-filesystems[1503]: Checking size of /dev/sda9 Jun 21 05:44:47.069986 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 21 05:44:47.079764 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Failure getting users, quitting Jun 21 05:44:47.079764 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 05:44:47.079764 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Refreshing group entry cache Jun 21 05:44:47.079764 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Failure getting groups, quitting Jun 21 05:44:47.079764 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 05:44:47.068438 oslogin_cache_refresh[1504]: Failure getting users, quitting Jun 21 05:44:47.068450 oslogin_cache_refresh[1504]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 05:44:47.068484 oslogin_cache_refresh[1504]: Refreshing group entry cache Jun 21 05:44:47.068895 oslogin_cache_refresh[1504]: Failure getting groups, quitting Jun 21 05:44:47.068902 oslogin_cache_refresh[1504]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jun 21 05:44:47.084160 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 05:44:47.085442 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 21 05:44:47.086891 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 05:44:47.087186 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 21 05:44:47.087397 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 21 05:44:47.088227 systemd[1]: motdgen.service: Deactivated successfully. Jun 21 05:44:47.088435 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 21 05:44:47.088775 extend-filesystems[1503]: Resized partition /dev/sda9 Jun 21 05:44:47.091135 extend-filesystems[1535]: resize2fs 1.47.2 (1-Jan-2025) Jun 21 05:44:47.093962 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 05:44:47.095717 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Jun 21 05:44:47.096147 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 21 05:44:47.118763 update_engine[1521]: I20250621 05:44:47.113462 1521 main.cc:92] Flatcar Update Engine starting Jun 21 05:44:47.107579 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 05:44:47.107584 systemd-networkd[1461]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 05:44:47.117266 systemd-networkd[1461]: eth0: Link UP Jun 21 05:44:47.117402 systemd-networkd[1461]: eth0: Gained carrier Jun 21 05:44:47.117414 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 21 05:44:47.129811 jq[1524]: true Jun 21 05:44:47.162878 jq[1547]: true Jun 21 05:44:47.165232 (ntainerd)[1551]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 05:44:47.193375 dbus-daemon[1499]: [system] SELinux support is enabled Jun 21 05:44:47.197262 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 05:44:47.201841 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 21 05:44:47.202730 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 21 05:44:47.203931 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 21 05:44:47.203951 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 05:44:47.208441 tar[1536]: linux-amd64/helm Jun 21 05:44:47.219939 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 05:44:47.233084 systemd[1]: Started update-engine.service - Update Engine. Jun 21 05:44:47.234931 update_engine[1521]: I20250621 05:44:47.233836 1521 update_check_scheduler.cc:74] Next update check in 6m5s Jun 21 05:44:47.237459 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 21 05:44:47.248697 coreos-metadata[1497]: Jun 21 05:44:47.248 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jun 21 05:44:47.340681 systemd-logind[1512]: New seat seat0. Jun 21 05:44:47.342839 systemd[1]: Started systemd-logind.service - User Login Management. 
Jun 21 05:44:47.352531 bash[1576]: Updated "/home/core/.ssh/authorized_keys" Jun 21 05:44:47.351723 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 05:44:47.361849 systemd[1]: Starting sshkeys.service... Jun 21 05:44:47.385207 kernel: mousedev: PS/2 mouse device common for all mice Jun 21 05:44:47.403809 containerd[1551]: time="2025-06-21T05:44:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 05:44:47.403809 containerd[1551]: time="2025-06-21T05:44:47.403480507Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 05:44:47.437475 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 21 05:44:47.446163 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jun 21 05:44:47.455741 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Jun 21 05:44:47.477393 containerd[1551]: time="2025-06-21T05:44:47.463769117Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.5µs" Jun 21 05:44:47.477393 containerd[1551]: time="2025-06-21T05:44:47.463806867Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 05:44:47.477393 containerd[1551]: time="2025-06-21T05:44:47.463833116Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 05:44:47.479045 containerd[1551]: time="2025-06-21T05:44:47.479020869Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 05:44:47.479414 containerd[1551]: time="2025-06-21T05:44:47.479371609Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 05:44:47.479906 extend-filesystems[1535]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jun 21 05:44:47.479906 extend-filesystems[1535]: old_desc_blocks = 1, new_desc_blocks = 10 Jun 21 05:44:47.479906 extend-filesystems[1535]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Jun 21 05:44:47.481598 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jun 21 05:44:47.497647 containerd[1551]: time="2025-06-21T05:44:47.482748207Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 05:44:47.497647 containerd[1551]: time="2025-06-21T05:44:47.482938597Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 05:44:47.497647 containerd[1551]: time="2025-06-21T05:44:47.482958497Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 05:44:47.497647 containerd[1551]: time="2025-06-21T05:44:47.483215587Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 05:44:47.497647 containerd[1551]: time="2025-06-21T05:44:47.483230847Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 05:44:47.497647 containerd[1551]: time="2025-06-21T05:44:47.483242817Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 05:44:47.497647 containerd[1551]: time="2025-06-21T05:44:47.483251637Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 21 05:44:47.497647 containerd[1551]: time="2025-06-21T05:44:47.483348817Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 21 05:44:47.497647 containerd[1551]: time="2025-06-21T05:44:47.492933362Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 05:44:47.497647 containerd[1551]: 
time="2025-06-21T05:44:47.492974342Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 05:44:47.497647 containerd[1551]: time="2025-06-21T05:44:47.493008462Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 21 05:44:47.501889 extend-filesystems[1503]: Resized filesystem in /dev/sda9 Jun 21 05:44:47.481880 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 21 05:44:47.510314 containerd[1551]: time="2025-06-21T05:44:47.493049462Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 21 05:44:47.510314 containerd[1551]: time="2025-06-21T05:44:47.498240729Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 21 05:44:47.510314 containerd[1551]: time="2025-06-21T05:44:47.498341649Z" level=info msg="metadata content store policy set" policy=shared Jun 21 05:44:47.507713 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jun 21 05:44:47.512787 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jun 21 05:44:47.519018 containerd[1551]: time="2025-06-21T05:44:47.518727309Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 21 05:44:47.519018 containerd[1551]: time="2025-06-21T05:44:47.518798439Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 21 05:44:47.519018 containerd[1551]: time="2025-06-21T05:44:47.518816789Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 21 05:44:47.519018 containerd[1551]: time="2025-06-21T05:44:47.518830189Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 21 05:44:47.519018 containerd[1551]: time="2025-06-21T05:44:47.518887099Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 21 05:44:47.519018 containerd[1551]: time="2025-06-21T05:44:47.518899699Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 21 05:44:47.519018 containerd[1551]: time="2025-06-21T05:44:47.518912089Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 21 05:44:47.519018 containerd[1551]: time="2025-06-21T05:44:47.518951449Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 21 05:44:47.519018 containerd[1551]: time="2025-06-21T05:44:47.518963659Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 21 05:44:47.519018 containerd[1551]: time="2025-06-21T05:44:47.518974849Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 21 05:44:47.519018 containerd[1551]: time="2025-06-21T05:44:47.518983929Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 21 
05:44:47.519018 containerd[1551]: time="2025-06-21T05:44:47.518995889Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 21 05:44:47.519397 containerd[1551]: time="2025-06-21T05:44:47.519367169Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 21 05:44:47.523117 containerd[1551]: time="2025-06-21T05:44:47.521721118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 21 05:44:47.523117 containerd[1551]: time="2025-06-21T05:44:47.521744058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 21 05:44:47.523117 containerd[1551]: time="2025-06-21T05:44:47.521960157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 21 05:44:47.523117 containerd[1551]: time="2025-06-21T05:44:47.521973037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 21 05:44:47.523117 containerd[1551]: time="2025-06-21T05:44:47.521987047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 21 05:44:47.523117 containerd[1551]: time="2025-06-21T05:44:47.521998187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 21 05:44:47.523117 containerd[1551]: time="2025-06-21T05:44:47.522007577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 21 05:44:47.523117 containerd[1551]: time="2025-06-21T05:44:47.522036227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 21 05:44:47.523117 containerd[1551]: time="2025-06-21T05:44:47.522048737Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 21 05:44:47.523117 containerd[1551]: time="2025-06-21T05:44:47.522058417Z" 
level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 21 05:44:47.523117 containerd[1551]: time="2025-06-21T05:44:47.523072877Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 21 05:44:47.523117 containerd[1551]: time="2025-06-21T05:44:47.523090887Z" level=info msg="Start snapshots syncer" Jun 21 05:44:47.531944 containerd[1551]: time="2025-06-21T05:44:47.531393053Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 21 05:44:47.532254 containerd[1551]: time="2025-06-21T05:44:47.532221932Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\"
:true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 21 05:44:47.534184 containerd[1551]: time="2025-06-21T05:44:47.533724972Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 21 05:44:47.542186 containerd[1551]: time="2025-06-21T05:44:47.537782680Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 21 05:44:47.542186 containerd[1551]: time="2025-06-21T05:44:47.537966109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 21 05:44:47.542186 containerd[1551]: time="2025-06-21T05:44:47.537989839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 21 05:44:47.542186 containerd[1551]: time="2025-06-21T05:44:47.540692848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 21 05:44:47.542186 containerd[1551]: time="2025-06-21T05:44:47.540708658Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 21 05:44:47.542186 containerd[1551]: time="2025-06-21T05:44:47.540723098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 21 05:44:47.542186 containerd[1551]: time="2025-06-21T05:44:47.540734728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 21 05:44:47.542186 containerd[1551]: time="2025-06-21T05:44:47.540745318Z" level=info msg="loading plugin" 
id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 21 05:44:47.542186 containerd[1551]: time="2025-06-21T05:44:47.540810818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 21 05:44:47.542186 containerd[1551]: time="2025-06-21T05:44:47.540825988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 21 05:44:47.542186 containerd[1551]: time="2025-06-21T05:44:47.540857368Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 21 05:44:47.545594 containerd[1551]: time="2025-06-21T05:44:47.545496086Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 05:44:47.545594 containerd[1551]: time="2025-06-21T05:44:47.545551336Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 05:44:47.545938 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 21 05:44:47.545967 containerd[1551]: time="2025-06-21T05:44:47.545575396Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 05:44:47.545967 containerd[1551]: time="2025-06-21T05:44:47.545701406Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 05:44:47.545967 containerd[1551]: time="2025-06-21T05:44:47.545712076Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 21 05:44:47.545967 containerd[1551]: time="2025-06-21T05:44:47.545722566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 21 05:44:47.545967 containerd[1551]: time="2025-06-21T05:44:47.545734256Z" 
level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 21 05:44:47.545967 containerd[1551]: time="2025-06-21T05:44:47.545772626Z" level=info msg="runtime interface created" Jun 21 05:44:47.545967 containerd[1551]: time="2025-06-21T05:44:47.545779316Z" level=info msg="created NRI interface" Jun 21 05:44:47.545967 containerd[1551]: time="2025-06-21T05:44:47.545788106Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 21 05:44:47.545967 containerd[1551]: time="2025-06-21T05:44:47.545801596Z" level=info msg="Connect containerd service" Jun 21 05:44:47.545967 containerd[1551]: time="2025-06-21T05:44:47.545850795Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 21 05:44:47.551694 kernel: ACPI: button: Power Button [PWRF] Jun 21 05:44:47.561370 containerd[1551]: time="2025-06-21T05:44:47.561124028Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 05:44:47.567896 coreos-metadata[1581]: Jun 21 05:44:47.567 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jun 21 05:44:47.581984 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 05:44:47.594725 systemd-networkd[1461]: eth0: DHCPv4 address 172.233.208.28/24, gateway 172.233.208.1 acquired from 23.205.167.145 Jun 21 05:44:47.596341 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. 
Jun 21 05:44:47.599094 dbus-daemon[1499]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1461 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 21 05:44:47.610691 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 21 05:44:47.669724 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jun 21 05:44:47.669979 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jun 21 05:44:48.615382 systemd-resolved[1410]: Clock change detected. Flushing caches. Jun 21 05:44:48.616284 systemd-timesyncd[1425]: Contacted time server 15.204.87.223:123 (0.flatcar.pool.ntp.org). Jun 21 05:44:48.616714 systemd-timesyncd[1425]: Initial clock synchronization to Sat 2025-06-21 05:44:48.614706 UTC. Jun 21 05:44:48.629670 kernel: EDAC MC: Ver: 3.0.0 Jun 21 05:44:48.661825 locksmithd[1557]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 05:44:48.682545 sshd_keygen[1548]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 21 05:44:48.767234 containerd[1551]: time="2025-06-21T05:44:48.767183002Z" level=info msg="Start subscribing containerd event" Jun 21 05:44:48.767379 containerd[1551]: time="2025-06-21T05:44:48.767337912Z" level=info msg="Start recovering state" Jun 21 05:44:48.768371 containerd[1551]: time="2025-06-21T05:44:48.768338701Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 21 05:44:48.768439 containerd[1551]: time="2025-06-21T05:44:48.768409871Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jun 21 05:44:48.769839 containerd[1551]: time="2025-06-21T05:44:48.769808030Z" level=info msg="Start event monitor" Jun 21 05:44:48.769839 containerd[1551]: time="2025-06-21T05:44:48.769839750Z" level=info msg="Start cni network conf syncer for default" Jun 21 05:44:48.769892 containerd[1551]: time="2025-06-21T05:44:48.769849470Z" level=info msg="Start streaming server" Jun 21 05:44:48.769892 containerd[1551]: time="2025-06-21T05:44:48.769865040Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 21 05:44:48.769892 containerd[1551]: time="2025-06-21T05:44:48.769872170Z" level=info msg="runtime interface starting up..." Jun 21 05:44:48.769892 containerd[1551]: time="2025-06-21T05:44:48.769878410Z" level=info msg="starting plugins..." Jun 21 05:44:48.769892 containerd[1551]: time="2025-06-21T05:44:48.769893300Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 21 05:44:48.770060 containerd[1551]: time="2025-06-21T05:44:48.770028010Z" level=info msg="containerd successfully booted in 0.451565s" Jun 21 05:44:48.770276 systemd[1]: Started containerd.service - containerd container runtime. Jun 21 05:44:48.777369 systemd-logind[1512]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 21 05:44:48.783161 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 21 05:44:48.795448 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 21 05:44:48.825980 systemd[1]: issuegen.service: Deactivated successfully. Jun 21 05:44:48.826985 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 21 05:44:48.829208 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 21 05:44:48.833570 systemd-logind[1512]: Watching system buttons on /dev/input/event2 (Power Button) Jun 21 05:44:48.834833 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jun 21 05:44:48.844509 dbus-daemon[1499]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 21 05:44:48.849499 dbus-daemon[1499]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1600 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 21 05:44:48.857490 systemd[1]: Starting polkit.service - Authorization Manager... Jun 21 05:44:48.870376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:44:48.879624 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 21 05:44:48.884162 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 21 05:44:48.887738 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 21 05:44:48.889067 systemd[1]: Reached target getty.target - Login Prompts. Jun 21 05:44:49.003958 polkitd[1637]: Started polkitd version 126 Jun 21 05:44:49.007241 polkitd[1637]: Loading rules from directory /etc/polkit-1/rules.d Jun 21 05:44:49.007761 polkitd[1637]: Loading rules from directory /run/polkit-1/rules.d Jun 21 05:44:49.007831 polkitd[1637]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jun 21 05:44:49.008046 polkitd[1637]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jun 21 05:44:49.008094 polkitd[1637]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jun 21 05:44:49.008167 polkitd[1637]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 21 05:44:49.009713 polkitd[1637]: Finished loading, compiling and executing 2 rules Jun 21 05:44:49.010753 systemd[1]: Started polkit.service - Authorization Manager. 
Jun 21 05:44:49.012088 dbus-daemon[1499]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 21 05:44:49.012885 polkitd[1637]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 21 05:44:49.047176 systemd-hostnamed[1600]: Hostname set to <172-233-208-28> (transient) Jun 21 05:44:49.047315 systemd-resolved[1410]: System hostname changed to '172-233-208-28'. Jun 21 05:44:49.127735 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:44:49.173874 tar[1536]: linux-amd64/LICENSE Jun 21 05:44:49.174371 tar[1536]: linux-amd64/README.md Jun 21 05:44:49.176801 coreos-metadata[1497]: Jun 21 05:44:49.176 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jun 21 05:44:49.197533 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 21 05:44:49.225876 systemd-networkd[1461]: eth0: Gained IPv6LL Jun 21 05:44:49.229520 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 21 05:44:49.230937 systemd[1]: Reached target network-online.target - Network is Online. Jun 21 05:44:49.234386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:44:49.237865 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 21 05:44:49.260694 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jun 21 05:44:49.273432 coreos-metadata[1497]: Jun 21 05:44:49.273 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jun 21 05:44:49.456449 coreos-metadata[1497]: Jun 21 05:44:49.456 INFO Fetch successful Jun 21 05:44:49.456449 coreos-metadata[1497]: Jun 21 05:44:49.456 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jun 21 05:44:49.485696 coreos-metadata[1581]: Jun 21 05:44:49.485 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jun 21 05:44:49.574725 coreos-metadata[1581]: Jun 21 05:44:49.574 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jun 21 05:44:49.756228 coreos-metadata[1581]: Jun 21 05:44:49.756 INFO Fetch successful Jun 21 05:44:49.777699 update-ssh-keys[1674]: Updated "/home/core/.ssh/authorized_keys" Jun 21 05:44:49.779394 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 21 05:44:49.783827 systemd[1]: Finished sshkeys.service. Jun 21 05:44:49.833676 coreos-metadata[1497]: Jun 21 05:44:49.833 INFO Fetch successful Jun 21 05:44:49.929078 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 21 05:44:49.931283 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 21 05:44:50.137020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:44:50.138755 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 21 05:44:50.140204 systemd[1]: Startup finished in 2.811s (kernel) + 6.918s (initrd) + 4.701s (userspace) = 14.430s. 
Jun 21 05:44:50.143933 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 05:44:50.664113 kubelet[1702]: E0621 05:44:50.664041 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 05:44:50.668426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 05:44:50.668637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 05:44:50.669360 systemd[1]: kubelet.service: Consumed 847ms CPU time, 264.4M memory peak. Jun 21 05:44:52.930035 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 21 05:44:52.931580 systemd[1]: Started sshd@0-172.233.208.28:22-147.75.109.163:51252.service - OpenSSH per-connection server daemon (147.75.109.163:51252). Jun 21 05:44:53.275557 sshd[1714]: Accepted publickey for core from 147.75.109.163 port 51252 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw Jun 21 05:44:53.277616 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:44:53.284460 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 21 05:44:53.285919 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 21 05:44:53.294972 systemd-logind[1512]: New session 1 of user core. Jun 21 05:44:53.306332 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 21 05:44:53.310012 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jun 21 05:44:53.324327 (systemd)[1718]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 21 05:44:53.326980 systemd-logind[1512]: New session c1 of user core. Jun 21 05:44:53.468145 systemd[1718]: Queued start job for default target default.target. Jun 21 05:44:53.479841 systemd[1718]: Created slice app.slice - User Application Slice. Jun 21 05:44:53.479869 systemd[1718]: Reached target paths.target - Paths. Jun 21 05:44:53.479997 systemd[1718]: Reached target timers.target - Timers. Jun 21 05:44:53.481446 systemd[1718]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 21 05:44:53.492407 systemd[1718]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 21 05:44:53.492543 systemd[1718]: Reached target sockets.target - Sockets. Jun 21 05:44:53.492807 systemd[1718]: Reached target basic.target - Basic System. Jun 21 05:44:53.492909 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 21 05:44:53.494077 systemd[1718]: Reached target default.target - Main User Target. Jun 21 05:44:53.494108 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 21 05:44:53.494128 systemd[1718]: Startup finished in 160ms. Jun 21 05:44:53.755369 systemd[1]: Started sshd@1-172.233.208.28:22-147.75.109.163:51262.service - OpenSSH per-connection server daemon (147.75.109.163:51262). Jun 21 05:44:54.107171 sshd[1729]: Accepted publickey for core from 147.75.109.163 port 51262 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw Jun 21 05:44:54.108921 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:44:54.114630 systemd-logind[1512]: New session 2 of user core. Jun 21 05:44:54.124829 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 21 05:44:54.363867 sshd[1731]: Connection closed by 147.75.109.163 port 51262 Jun 21 05:44:54.364723 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jun 21 05:44:54.369338 systemd-logind[1512]: Session 2 logged out. Waiting for processes to exit. Jun 21 05:44:54.370601 systemd[1]: sshd@1-172.233.208.28:22-147.75.109.163:51262.service: Deactivated successfully. Jun 21 05:44:54.372763 systemd[1]: session-2.scope: Deactivated successfully. Jun 21 05:44:54.374570 systemd-logind[1512]: Removed session 2. Jun 21 05:44:54.428279 systemd[1]: Started sshd@2-172.233.208.28:22-147.75.109.163:51274.service - OpenSSH per-connection server daemon (147.75.109.163:51274). Jun 21 05:44:54.785235 sshd[1737]: Accepted publickey for core from 147.75.109.163 port 51274 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw Jun 21 05:44:54.787202 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:44:54.792575 systemd-logind[1512]: New session 3 of user core. Jun 21 05:44:54.799794 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 21 05:44:55.034359 sshd[1739]: Connection closed by 147.75.109.163 port 51274 Jun 21 05:44:55.034989 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Jun 21 05:44:55.039572 systemd[1]: sshd@2-172.233.208.28:22-147.75.109.163:51274.service: Deactivated successfully. Jun 21 05:44:55.041832 systemd[1]: session-3.scope: Deactivated successfully. Jun 21 05:44:55.043355 systemd-logind[1512]: Session 3 logged out. Waiting for processes to exit. Jun 21 05:44:55.045000 systemd-logind[1512]: Removed session 3. Jun 21 05:44:55.095597 systemd[1]: Started sshd@3-172.233.208.28:22-147.75.109.163:51282.service - OpenSSH per-connection server daemon (147.75.109.163:51282). 
Jun 21 05:44:55.451415 sshd[1745]: Accepted publickey for core from 147.75.109.163 port 51282 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw Jun 21 05:44:55.452903 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:44:55.458360 systemd-logind[1512]: New session 4 of user core. Jun 21 05:44:55.460767 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 21 05:44:55.697606 sshd[1747]: Connection closed by 147.75.109.163 port 51282 Jun 21 05:44:55.698450 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Jun 21 05:44:55.703172 systemd-logind[1512]: Session 4 logged out. Waiting for processes to exit. Jun 21 05:44:55.704160 systemd[1]: sshd@3-172.233.208.28:22-147.75.109.163:51282.service: Deactivated successfully. Jun 21 05:44:55.706305 systemd[1]: session-4.scope: Deactivated successfully. Jun 21 05:44:55.708148 systemd-logind[1512]: Removed session 4. Jun 21 05:44:55.765275 systemd[1]: Started sshd@4-172.233.208.28:22-147.75.109.163:51288.service - OpenSSH per-connection server daemon (147.75.109.163:51288). Jun 21 05:44:56.111386 sshd[1753]: Accepted publickey for core from 147.75.109.163 port 51288 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw Jun 21 05:44:56.113282 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:44:56.118726 systemd-logind[1512]: New session 5 of user core. Jun 21 05:44:56.124762 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jun 21 05:44:56.320859 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 21 05:44:56.321181 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:44:56.343943 sudo[1756]: pam_unix(sudo:session): session closed for user root Jun 21 05:44:56.396405 sshd[1755]: Connection closed by 147.75.109.163 port 51288 Jun 21 05:44:56.397132 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Jun 21 05:44:56.402001 systemd[1]: sshd@4-172.233.208.28:22-147.75.109.163:51288.service: Deactivated successfully. Jun 21 05:44:56.404339 systemd[1]: session-5.scope: Deactivated successfully. Jun 21 05:44:56.405218 systemd-logind[1512]: Session 5 logged out. Waiting for processes to exit. Jun 21 05:44:56.406667 systemd-logind[1512]: Removed session 5. Jun 21 05:44:56.457806 systemd[1]: Started sshd@5-172.233.208.28:22-147.75.109.163:42622.service - OpenSSH per-connection server daemon (147.75.109.163:42622). Jun 21 05:44:56.792820 sshd[1762]: Accepted publickey for core from 147.75.109.163 port 42622 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw Jun 21 05:44:56.794427 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:44:56.798696 systemd-logind[1512]: New session 6 of user core. Jun 21 05:44:56.806949 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jun 21 05:44:56.989376 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 21 05:44:56.989707 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:44:56.994747 sudo[1766]: pam_unix(sudo:session): session closed for user root Jun 21 05:44:57.003892 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 21 05:44:57.004182 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:44:57.014673 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 05:44:57.052980 augenrules[1788]: No rules Jun 21 05:44:57.054432 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 05:44:57.054731 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 05:44:57.056091 sudo[1765]: pam_unix(sudo:session): session closed for user root Jun 21 05:44:57.106664 sshd[1764]: Connection closed by 147.75.109.163 port 42622 Jun 21 05:44:57.107146 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Jun 21 05:44:57.111742 systemd[1]: sshd@5-172.233.208.28:22-147.75.109.163:42622.service: Deactivated successfully. Jun 21 05:44:57.113875 systemd[1]: session-6.scope: Deactivated successfully. Jun 21 05:44:57.114750 systemd-logind[1512]: Session 6 logged out. Waiting for processes to exit. Jun 21 05:44:57.116477 systemd-logind[1512]: Removed session 6. Jun 21 05:44:57.174056 systemd[1]: Started sshd@6-172.233.208.28:22-147.75.109.163:42628.service - OpenSSH per-connection server daemon (147.75.109.163:42628). 
Jun 21 05:44:57.523116 sshd[1797]: Accepted publickey for core from 147.75.109.163 port 42628 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw Jun 21 05:44:57.524720 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:44:57.529821 systemd-logind[1512]: New session 7 of user core. Jun 21 05:44:57.539778 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 21 05:44:57.724907 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 21 05:44:57.725213 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:44:58.015451 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 21 05:44:58.025953 (dockerd)[1818]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 21 05:44:58.204349 dockerd[1818]: time="2025-06-21T05:44:58.204283542Z" level=info msg="Starting up" Jun 21 05:44:58.205592 dockerd[1818]: time="2025-06-21T05:44:58.205568751Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 21 05:44:58.251824 dockerd[1818]: time="2025-06-21T05:44:58.251790338Z" level=info msg="Loading containers: start." Jun 21 05:44:58.261686 kernel: Initializing XFRM netlink socket Jun 21 05:44:58.493519 systemd-networkd[1461]: docker0: Link UP Jun 21 05:44:58.498757 dockerd[1818]: time="2025-06-21T05:44:58.498713515Z" level=info msg="Loading containers: done." Jun 21 05:44:58.518008 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2873435562-merged.mount: Deactivated successfully. 
Jun 21 05:44:58.519204 dockerd[1818]: time="2025-06-21T05:44:58.519160655Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 21 05:44:58.519293 dockerd[1818]: time="2025-06-21T05:44:58.519235055Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 21 05:44:58.519380 dockerd[1818]: time="2025-06-21T05:44:58.519348305Z" level=info msg="Initializing buildkit" Jun 21 05:44:58.542094 dockerd[1818]: time="2025-06-21T05:44:58.541886863Z" level=info msg="Completed buildkit initialization" Jun 21 05:44:58.545715 dockerd[1818]: time="2025-06-21T05:44:58.545687541Z" level=info msg="Daemon has completed initialization" Jun 21 05:44:58.546008 dockerd[1818]: time="2025-06-21T05:44:58.545839121Z" level=info msg="API listen on /run/docker.sock" Jun 21 05:44:58.545885 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 21 05:44:59.368174 containerd[1551]: time="2025-06-21T05:44:59.368112150Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jun 21 05:45:00.092507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount314965159.mount: Deactivated successfully. Jun 21 05:45:00.918951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 21 05:45:00.922340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:45:01.121935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 21 05:45:01.130994 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 05:45:01.173314 kubelet[2081]: E0621 05:45:01.173178 2081 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 05:45:01.178215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 05:45:01.178448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 05:45:01.179454 systemd[1]: kubelet.service: Consumed 188ms CPU time, 110.6M memory peak. Jun 21 05:45:01.386194 containerd[1551]: time="2025-06-21T05:45:01.386142411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:01.387145 containerd[1551]: time="2025-06-21T05:45:01.386916570Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077750" Jun 21 05:45:01.387710 containerd[1551]: time="2025-06-21T05:45:01.387681280Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:01.390118 containerd[1551]: time="2025-06-21T05:45:01.390085439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:01.390937 containerd[1551]: time="2025-06-21T05:45:01.390915898Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id 
\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.022764608s" Jun 21 05:45:01.391023 containerd[1551]: time="2025-06-21T05:45:01.391006608Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jun 21 05:45:01.391875 containerd[1551]: time="2025-06-21T05:45:01.391845348Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jun 21 05:45:03.093147 containerd[1551]: time="2025-06-21T05:45:03.093095117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:03.094320 containerd[1551]: time="2025-06-21T05:45:03.094101687Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713300" Jun 21 05:45:03.095068 containerd[1551]: time="2025-06-21T05:45:03.095036256Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:03.097346 containerd[1551]: time="2025-06-21T05:45:03.097312195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:03.098286 containerd[1551]: time="2025-06-21T05:45:03.098245035Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo 
digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.706351307s" Jun 21 05:45:03.098883 containerd[1551]: time="2025-06-21T05:45:03.098346484Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jun 21 05:45:03.099583 containerd[1551]: time="2025-06-21T05:45:03.099420804Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jun 21 05:45:04.466771 containerd[1551]: time="2025-06-21T05:45:04.466000170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:04.466771 containerd[1551]: time="2025-06-21T05:45:04.466741040Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783677" Jun 21 05:45:04.467210 containerd[1551]: time="2025-06-21T05:45:04.467161510Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:04.468723 containerd[1551]: time="2025-06-21T05:45:04.468702429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:04.469485 containerd[1551]: time="2025-06-21T05:45:04.469457879Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.370011025s" Jun 21 05:45:04.469519 
containerd[1551]: time="2025-06-21T05:45:04.469486629Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jun 21 05:45:04.470086 containerd[1551]: time="2025-06-21T05:45:04.470062398Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jun 21 05:45:05.560922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149032783.mount: Deactivated successfully. Jun 21 05:45:06.048458 containerd[1551]: time="2025-06-21T05:45:06.048105549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:06.051113 containerd[1551]: time="2025-06-21T05:45:06.049290509Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383949" Jun 21 05:45:06.055157 containerd[1551]: time="2025-06-21T05:45:06.055116106Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:06.057164 containerd[1551]: time="2025-06-21T05:45:06.056998105Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.586906427s" Jun 21 05:45:06.057164 containerd[1551]: time="2025-06-21T05:45:06.057041255Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jun 21 05:45:06.057164 containerd[1551]: time="2025-06-21T05:45:06.057051645Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:06.058362 containerd[1551]: time="2025-06-21T05:45:06.058330614Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 21 05:45:06.648788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311335998.mount: Deactivated successfully. Jun 21 05:45:07.345923 containerd[1551]: time="2025-06-21T05:45:07.345877920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:07.346826 containerd[1551]: time="2025-06-21T05:45:07.346681320Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565247" Jun 21 05:45:07.347287 containerd[1551]: time="2025-06-21T05:45:07.347259449Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:07.349551 containerd[1551]: time="2025-06-21T05:45:07.349519028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:07.350722 containerd[1551]: time="2025-06-21T05:45:07.350691578Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.292232154s" Jun 21 05:45:07.350796 containerd[1551]: time="2025-06-21T05:45:07.350782978Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 21 05:45:07.351545 containerd[1551]: time="2025-06-21T05:45:07.351522257Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 21 05:45:07.838430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2402501205.mount: Deactivated successfully. Jun 21 05:45:07.844714 containerd[1551]: time="2025-06-21T05:45:07.844663891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:45:07.845280 containerd[1551]: time="2025-06-21T05:45:07.845252960Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Jun 21 05:45:07.845788 containerd[1551]: time="2025-06-21T05:45:07.845736020Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:45:07.847157 containerd[1551]: time="2025-06-21T05:45:07.847117060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:45:07.847926 containerd[1551]: time="2025-06-21T05:45:07.847714899Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 496.165252ms" Jun 21 05:45:07.847926 containerd[1551]: time="2025-06-21T05:45:07.847743939Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 21 05:45:07.848514 containerd[1551]: time="2025-06-21T05:45:07.848482869Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jun 21 05:45:08.347545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3885804286.mount: Deactivated successfully. Jun 21 05:45:09.730159 containerd[1551]: time="2025-06-21T05:45:09.730045408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:09.731704 containerd[1551]: time="2025-06-21T05:45:09.731386047Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780019" Jun 21 05:45:09.732342 containerd[1551]: time="2025-06-21T05:45:09.732301887Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:09.735107 containerd[1551]: time="2025-06-21T05:45:09.734616046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:45:09.735616 containerd[1551]: time="2025-06-21T05:45:09.735578645Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.887069186s" Jun 21 05:45:09.735682 containerd[1551]: time="2025-06-21T05:45:09.735616305Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference 
\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jun 21 05:45:11.344902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 21 05:45:11.348792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:45:11.384808 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 21 05:45:11.384937 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 21 05:45:11.385383 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:45:11.388285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:45:11.420412 systemd[1]: Reload requested from client PID 2245 ('systemctl') (unit session-7.scope)... Jun 21 05:45:11.420432 systemd[1]: Reloading... Jun 21 05:45:11.558226 zram_generator::config[2292]: No configuration found. Jun 21 05:45:11.668044 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:45:11.774813 systemd[1]: Reloading finished in 353 ms. Jun 21 05:45:11.841076 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 21 05:45:11.841200 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 21 05:45:11.841580 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:45:11.841658 systemd[1]: kubelet.service: Consumed 140ms CPU time, 98.3M memory peak. Jun 21 05:45:11.843397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:45:12.030437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 21 05:45:12.040207 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 05:45:12.083554 kubelet[2343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:45:12.083554 kubelet[2343]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 21 05:45:12.083554 kubelet[2343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:45:12.083975 kubelet[2343]: I0621 05:45:12.083613 2343 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 05:45:12.286057 kubelet[2343]: I0621 05:45:12.285916 2343 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 21 05:45:12.287404 kubelet[2343]: I0621 05:45:12.286209 2343 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 05:45:12.287404 kubelet[2343]: I0621 05:45:12.286516 2343 server.go:934] "Client rotation is on, will bootstrap in background" Jun 21 05:45:12.307503 kubelet[2343]: E0621 05:45:12.307478 2343 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.233.208.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.233.208.28:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:45:12.309055 kubelet[2343]: 
I0621 05:45:12.309019 2343 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 05:45:12.316428 kubelet[2343]: I0621 05:45:12.316408 2343 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 05:45:12.321951 kubelet[2343]: I0621 05:45:12.321920 2343 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 21 05:45:12.323218 kubelet[2343]: I0621 05:45:12.322027 2343 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 21 05:45:12.323218 kubelet[2343]: I0621 05:45:12.322140 2343 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 05:45:12.323218 kubelet[2343]: I0621 05:45:12.322173 2343 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-208-28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessTha
n","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 05:45:12.323218 kubelet[2343]: I0621 05:45:12.322373 2343 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 05:45:12.323399 kubelet[2343]: I0621 05:45:12.322382 2343 container_manager_linux.go:300] "Creating device plugin manager" Jun 21 05:45:12.323399 kubelet[2343]: I0621 05:45:12.322466 2343 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:45:12.327506 kubelet[2343]: I0621 05:45:12.327481 2343 kubelet.go:408] "Attempting to sync node with API server" Jun 21 05:45:12.327606 kubelet[2343]: I0621 05:45:12.327594 2343 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 05:45:12.327778 kubelet[2343]: I0621 05:45:12.327738 2343 kubelet.go:314] "Adding apiserver pod source" Jun 21 05:45:12.327778 kubelet[2343]: I0621 05:45:12.327770 2343 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 05:45:12.330442 kubelet[2343]: W0621 05:45:12.329634 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.233.208.28:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-208-28&limit=500&resourceVersion=0": dial tcp 172.233.208.28:6443: connect: connection refused Jun 21 05:45:12.330442 kubelet[2343]: E0621 05:45:12.329725 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.233.208.28:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-208-28&limit=500&resourceVersion=0\": dial tcp 172.233.208.28:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:45:12.330442 kubelet[2343]: W0621 05:45:12.330279 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.233.208.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.233.208.28:6443: connect: connection refused Jun 21 05:45:12.330442 kubelet[2343]: E0621 05:45:12.330341 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.233.208.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.233.208.28:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:45:12.330868 kubelet[2343]: I0621 05:45:12.330851 2343 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 05:45:12.331244 kubelet[2343]: I0621 05:45:12.331230 2343 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 05:45:12.331341 kubelet[2343]: W0621 05:45:12.331330 2343 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 21 05:45:12.334196 kubelet[2343]: I0621 05:45:12.334183 2343 server.go:1274] "Started kubelet" Jun 21 05:45:12.335062 kubelet[2343]: I0621 05:45:12.334973 2343 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 05:45:12.335939 kubelet[2343]: I0621 05:45:12.335907 2343 server.go:449] "Adding debug handlers to kubelet server" Jun 21 05:45:12.339887 kubelet[2343]: I0621 05:45:12.339352 2343 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 05:45:12.339887 kubelet[2343]: I0621 05:45:12.339616 2343 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 05:45:12.342756 kubelet[2343]: I0621 05:45:12.342726 2343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 05:45:12.343024 kubelet[2343]: E0621 05:45:12.339796 2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.233.208.28:6443/api/v1/namespaces/default/events\": dial tcp 172.233.208.28:6443: connect: connection refused" event="&Event{ObjectMeta:{172-233-208-28.184af892476dfcab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-233-208-28,UID:172-233-208-28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-233-208-28,},FirstTimestamp:2025-06-21 05:45:12.334163115 +0000 UTC m=+0.288410556,LastTimestamp:2025-06-21 05:45:12.334163115 +0000 UTC m=+0.288410556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-233-208-28,}" Jun 21 05:45:12.346306 kubelet[2343]: I0621 05:45:12.346290 2343 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 05:45:12.348728 kubelet[2343]: I0621 
05:45:12.348697 2343 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 21 05:45:12.348930 kubelet[2343]: E0621 05:45:12.348900 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:12.349379 kubelet[2343]: I0621 05:45:12.349358 2343 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 21 05:45:12.349464 kubelet[2343]: I0621 05:45:12.349435 2343 reconciler.go:26] "Reconciler: start to sync state" Jun 21 05:45:12.349804 kubelet[2343]: E0621 05:45:12.349765 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.208.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-208-28?timeout=10s\": dial tcp 172.233.208.28:6443: connect: connection refused" interval="200ms" Jun 21 05:45:12.350313 kubelet[2343]: W0621 05:45:12.350244 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.233.208.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.233.208.28:6443: connect: connection refused Jun 21 05:45:12.350379 kubelet[2343]: E0621 05:45:12.350290 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.233.208.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.233.208.28:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:45:12.350538 kubelet[2343]: E0621 05:45:12.350511 2343 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 05:45:12.351292 kubelet[2343]: I0621 05:45:12.351262 2343 factory.go:221] Registration of the systemd container factory successfully Jun 21 05:45:12.351481 kubelet[2343]: I0621 05:45:12.351447 2343 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 05:45:12.353271 kubelet[2343]: I0621 05:45:12.353254 2343 factory.go:221] Registration of the containerd container factory successfully Jun 21 05:45:12.370835 kubelet[2343]: I0621 05:45:12.370808 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 05:45:12.372811 kubelet[2343]: I0621 05:45:12.372767 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 05:45:12.372811 kubelet[2343]: I0621 05:45:12.372801 2343 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 21 05:45:12.372858 kubelet[2343]: I0621 05:45:12.372827 2343 kubelet.go:2321] "Starting kubelet main sync loop" Jun 21 05:45:12.372890 kubelet[2343]: E0621 05:45:12.372866 2343 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 05:45:12.379852 kubelet[2343]: W0621 05:45:12.379440 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.233.208.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.233.208.28:6443: connect: connection refused Jun 21 05:45:12.379852 kubelet[2343]: E0621 05:45:12.379493 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://172.233.208.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.233.208.28:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:45:12.380358 kubelet[2343]: I0621 05:45:12.380344 2343 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 21 05:45:12.380409 kubelet[2343]: I0621 05:45:12.380400 2343 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 21 05:45:12.380458 kubelet[2343]: I0621 05:45:12.380450 2343 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:45:12.382394 kubelet[2343]: I0621 05:45:12.382361 2343 policy_none.go:49] "None policy: Start" Jun 21 05:45:12.383159 kubelet[2343]: I0621 05:45:12.382975 2343 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 21 05:45:12.383159 kubelet[2343]: I0621 05:45:12.382993 2343 state_mem.go:35] "Initializing new in-memory state store" Jun 21 05:45:12.389041 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 21 05:45:12.404087 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 21 05:45:12.407433 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 21 05:45:12.417673 kubelet[2343]: I0621 05:45:12.417588 2343 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 05:45:12.418295 kubelet[2343]: I0621 05:45:12.418274 2343 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 05:45:12.418950 kubelet[2343]: I0621 05:45:12.418292 2343 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 05:45:12.419341 kubelet[2343]: I0621 05:45:12.419314 2343 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 05:45:12.421453 kubelet[2343]: E0621 05:45:12.421428 2343 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-233-208-28\" not found" Jun 21 05:45:12.483231 systemd[1]: Created slice kubepods-burstable-pod8bd879010b305ae94c4cb24d4079f1ef.slice - libcontainer container kubepods-burstable-pod8bd879010b305ae94c4cb24d4079f1ef.slice. Jun 21 05:45:12.492459 systemd[1]: Created slice kubepods-burstable-poddad76bed08043afb1e94f6a99dc2fcf6.slice - libcontainer container kubepods-burstable-poddad76bed08043afb1e94f6a99dc2fcf6.slice. Jun 21 05:45:12.496881 systemd[1]: Created slice kubepods-burstable-podf8d6e321566021293b8f2e012906028a.slice - libcontainer container kubepods-burstable-podf8d6e321566021293b8f2e012906028a.slice. 
Jun 21 05:45:12.521166 kubelet[2343]: I0621 05:45:12.521145 2343 kubelet_node_status.go:72] "Attempting to register node" node="172-233-208-28" Jun 21 05:45:12.521444 kubelet[2343]: E0621 05:45:12.521420 2343 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.233.208.28:6443/api/v1/nodes\": dial tcp 172.233.208.28:6443: connect: connection refused" node="172-233-208-28" Jun 21 05:45:12.550943 kubelet[2343]: I0621 05:45:12.550799 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8d6e321566021293b8f2e012906028a-k8s-certs\") pod \"kube-controller-manager-172-233-208-28\" (UID: \"f8d6e321566021293b8f2e012906028a\") " pod="kube-system/kube-controller-manager-172-233-208-28" Jun 21 05:45:12.551053 kubelet[2343]: I0621 05:45:12.550936 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dad76bed08043afb1e94f6a99dc2fcf6-kubeconfig\") pod \"kube-scheduler-172-233-208-28\" (UID: \"dad76bed08043afb1e94f6a99dc2fcf6\") " pod="kube-system/kube-scheduler-172-233-208-28" Jun 21 05:45:12.551053 kubelet[2343]: I0621 05:45:12.550965 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bd879010b305ae94c4cb24d4079f1ef-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-208-28\" (UID: \"8bd879010b305ae94c4cb24d4079f1ef\") " pod="kube-system/kube-apiserver-172-233-208-28" Jun 21 05:45:12.551053 kubelet[2343]: I0621 05:45:12.550986 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8d6e321566021293b8f2e012906028a-flexvolume-dir\") pod \"kube-controller-manager-172-233-208-28\" (UID: \"f8d6e321566021293b8f2e012906028a\") 
" pod="kube-system/kube-controller-manager-172-233-208-28" Jun 21 05:45:12.551053 kubelet[2343]: I0621 05:45:12.551003 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8d6e321566021293b8f2e012906028a-ca-certs\") pod \"kube-controller-manager-172-233-208-28\" (UID: \"f8d6e321566021293b8f2e012906028a\") " pod="kube-system/kube-controller-manager-172-233-208-28" Jun 21 05:45:12.551053 kubelet[2343]: I0621 05:45:12.551018 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8d6e321566021293b8f2e012906028a-kubeconfig\") pod \"kube-controller-manager-172-233-208-28\" (UID: \"f8d6e321566021293b8f2e012906028a\") " pod="kube-system/kube-controller-manager-172-233-208-28" Jun 21 05:45:12.551183 kubelet[2343]: I0621 05:45:12.551033 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8d6e321566021293b8f2e012906028a-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-208-28\" (UID: \"f8d6e321566021293b8f2e012906028a\") " pod="kube-system/kube-controller-manager-172-233-208-28" Jun 21 05:45:12.551183 kubelet[2343]: I0621 05:45:12.551049 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8bd879010b305ae94c4cb24d4079f1ef-ca-certs\") pod \"kube-apiserver-172-233-208-28\" (UID: \"8bd879010b305ae94c4cb24d4079f1ef\") " pod="kube-system/kube-apiserver-172-233-208-28" Jun 21 05:45:12.551183 kubelet[2343]: I0621 05:45:12.551065 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8bd879010b305ae94c4cb24d4079f1ef-k8s-certs\") pod 
\"kube-apiserver-172-233-208-28\" (UID: \"8bd879010b305ae94c4cb24d4079f1ef\") " pod="kube-system/kube-apiserver-172-233-208-28" Jun 21 05:45:12.551757 kubelet[2343]: E0621 05:45:12.551675 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.208.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-208-28?timeout=10s\": dial tcp 172.233.208.28:6443: connect: connection refused" interval="400ms" Jun 21 05:45:12.704549 kubelet[2343]: E0621 05:45:12.704433 2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.233.208.28:6443/api/v1/namespaces/default/events\": dial tcp 172.233.208.28:6443: connect: connection refused" event="&Event{ObjectMeta:{172-233-208-28.184af892476dfcab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-233-208-28,UID:172-233-208-28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-233-208-28,},FirstTimestamp:2025-06-21 05:45:12.334163115 +0000 UTC m=+0.288410556,LastTimestamp:2025-06-21 05:45:12.334163115 +0000 UTC m=+0.288410556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-233-208-28,}" Jun 21 05:45:12.724360 kubelet[2343]: I0621 05:45:12.724307 2343 kubelet_node_status.go:72] "Attempting to register node" node="172-233-208-28" Jun 21 05:45:12.724609 kubelet[2343]: E0621 05:45:12.724569 2343 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.233.208.28:6443/api/v1/nodes\": dial tcp 172.233.208.28:6443: connect: connection refused" node="172-233-208-28" Jun 21 05:45:12.789436 kubelet[2343]: E0621 05:45:12.789373 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:12.790296 containerd[1551]: time="2025-06-21T05:45:12.790248417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-208-28,Uid:8bd879010b305ae94c4cb24d4079f1ef,Namespace:kube-system,Attempt:0,}" Jun 21 05:45:12.796054 kubelet[2343]: E0621 05:45:12.795958 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:12.796550 containerd[1551]: time="2025-06-21T05:45:12.796418684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-208-28,Uid:dad76bed08043afb1e94f6a99dc2fcf6,Namespace:kube-system,Attempt:0,}" Jun 21 05:45:12.799003 kubelet[2343]: E0621 05:45:12.798980 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:12.799505 containerd[1551]: time="2025-06-21T05:45:12.799479013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-208-28,Uid:f8d6e321566021293b8f2e012906028a,Namespace:kube-system,Attempt:0,}" Jun 21 05:45:12.816303 containerd[1551]: time="2025-06-21T05:45:12.816191374Z" level=info msg="connecting to shim 6daac8aaaa213808d248bf6fde4c897a1931e324418151bfe963f08729a43f2e" address="unix:///run/containerd/s/820c585cdd1b7e0aa3a023364eac9d2d5cc4700538526c71f1c01c6e9515fcc4" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:45:12.843765 containerd[1551]: time="2025-06-21T05:45:12.843703551Z" level=info msg="connecting to shim 7d403c41e57617e35ebee9d2d870f3ebe3c96a99254b9f50562bad288f414242" address="unix:///run/containerd/s/450c393fb75139401e3272b2f2a78cabf8645dc1e51c5a1e5a43bbe677c5290c" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:45:12.858819 containerd[1551]: time="2025-06-21T05:45:12.858768063Z" 
level=info msg="connecting to shim 2c827959fe4914417976adc63b1cd4fd93b849825c3c4d420d68223d3ff88c55" address="unix:///run/containerd/s/10f257614ff13f6b7a54e0328eac9cdd18e0e103a76cac1e4db0aa6468274363" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:45:12.877954 systemd[1]: Started cri-containerd-6daac8aaaa213808d248bf6fde4c897a1931e324418151bfe963f08729a43f2e.scope - libcontainer container 6daac8aaaa213808d248bf6fde4c897a1931e324418151bfe963f08729a43f2e. Jun 21 05:45:12.888189 systemd[1]: Started cri-containerd-7d403c41e57617e35ebee9d2d870f3ebe3c96a99254b9f50562bad288f414242.scope - libcontainer container 7d403c41e57617e35ebee9d2d870f3ebe3c96a99254b9f50562bad288f414242. Jun 21 05:45:12.905777 systemd[1]: Started cri-containerd-2c827959fe4914417976adc63b1cd4fd93b849825c3c4d420d68223d3ff88c55.scope - libcontainer container 2c827959fe4914417976adc63b1cd4fd93b849825c3c4d420d68223d3ff88c55. Jun 21 05:45:12.955140 kubelet[2343]: E0621 05:45:12.954929 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.208.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-208-28?timeout=10s\": dial tcp 172.233.208.28:6443: connect: connection refused" interval="800ms" Jun 21 05:45:12.967757 containerd[1551]: time="2025-06-21T05:45:12.967521839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-208-28,Uid:8bd879010b305ae94c4cb24d4079f1ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"6daac8aaaa213808d248bf6fde4c897a1931e324418151bfe963f08729a43f2e\"" Jun 21 05:45:12.970130 kubelet[2343]: E0621 05:45:12.970085 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:12.976840 containerd[1551]: time="2025-06-21T05:45:12.976740454Z" level=info msg="CreateContainer within sandbox 
\"6daac8aaaa213808d248bf6fde4c897a1931e324418151bfe963f08729a43f2e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 05:45:12.978225 containerd[1551]: time="2025-06-21T05:45:12.978190123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-208-28,Uid:f8d6e321566021293b8f2e012906028a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d403c41e57617e35ebee9d2d870f3ebe3c96a99254b9f50562bad288f414242\"" Jun 21 05:45:12.979341 kubelet[2343]: E0621 05:45:12.979156 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:12.981086 containerd[1551]: time="2025-06-21T05:45:12.981056272Z" level=info msg="CreateContainer within sandbox \"7d403c41e57617e35ebee9d2d870f3ebe3c96a99254b9f50562bad288f414242\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 05:45:12.989265 containerd[1551]: time="2025-06-21T05:45:12.989237988Z" level=info msg="Container 29eb34daf81cc32f6552bb766128fdd6f0490e9237ceca1ceba6bb254b0f62ad: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:45:12.996026 containerd[1551]: time="2025-06-21T05:45:12.995956654Z" level=info msg="Container 44621f2e8a409e5362fd9154b21542746ef74979cda63169923d0b1ccdb4281b: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:45:12.997369 containerd[1551]: time="2025-06-21T05:45:12.997205224Z" level=info msg="CreateContainer within sandbox \"6daac8aaaa213808d248bf6fde4c897a1931e324418151bfe963f08729a43f2e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"29eb34daf81cc32f6552bb766128fdd6f0490e9237ceca1ceba6bb254b0f62ad\"" Jun 21 05:45:12.999304 containerd[1551]: time="2025-06-21T05:45:12.999166583Z" level=info msg="StartContainer for \"29eb34daf81cc32f6552bb766128fdd6f0490e9237ceca1ceba6bb254b0f62ad\"" Jun 21 05:45:13.001054 containerd[1551]: 
time="2025-06-21T05:45:13.001001192Z" level=info msg="connecting to shim 29eb34daf81cc32f6552bb766128fdd6f0490e9237ceca1ceba6bb254b0f62ad" address="unix:///run/containerd/s/820c585cdd1b7e0aa3a023364eac9d2d5cc4700538526c71f1c01c6e9515fcc4" protocol=ttrpc version=3 Jun 21 05:45:13.006962 containerd[1551]: time="2025-06-21T05:45:13.006897229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-208-28,Uid:dad76bed08043afb1e94f6a99dc2fcf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c827959fe4914417976adc63b1cd4fd93b849825c3c4d420d68223d3ff88c55\"" Jun 21 05:45:13.007852 kubelet[2343]: E0621 05:45:13.007812 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:13.010901 containerd[1551]: time="2025-06-21T05:45:13.010506287Z" level=info msg="CreateContainer within sandbox \"2c827959fe4914417976adc63b1cd4fd93b849825c3c4d420d68223d3ff88c55\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 05:45:13.010901 containerd[1551]: time="2025-06-21T05:45:13.010779047Z" level=info msg="CreateContainer within sandbox \"7d403c41e57617e35ebee9d2d870f3ebe3c96a99254b9f50562bad288f414242\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"44621f2e8a409e5362fd9154b21542746ef74979cda63169923d0b1ccdb4281b\"" Jun 21 05:45:13.011243 containerd[1551]: time="2025-06-21T05:45:13.011197517Z" level=info msg="StartContainer for \"44621f2e8a409e5362fd9154b21542746ef74979cda63169923d0b1ccdb4281b\"" Jun 21 05:45:13.012446 containerd[1551]: time="2025-06-21T05:45:13.012406366Z" level=info msg="connecting to shim 44621f2e8a409e5362fd9154b21542746ef74979cda63169923d0b1ccdb4281b" address="unix:///run/containerd/s/450c393fb75139401e3272b2f2a78cabf8645dc1e51c5a1e5a43bbe677c5290c" protocol=ttrpc version=3 Jun 21 05:45:13.018756 containerd[1551]: 
time="2025-06-21T05:45:13.018724503Z" level=info msg="Container dd843dcd7384debcb4b9130609c4ec24950d28c312e800fb7e6c983c68e6d3c3: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:45:13.024254 containerd[1551]: time="2025-06-21T05:45:13.024230500Z" level=info msg="CreateContainer within sandbox \"2c827959fe4914417976adc63b1cd4fd93b849825c3c4d420d68223d3ff88c55\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dd843dcd7384debcb4b9130609c4ec24950d28c312e800fb7e6c983c68e6d3c3\"" Jun 21 05:45:13.026399 containerd[1551]: time="2025-06-21T05:45:13.025083530Z" level=info msg="StartContainer for \"dd843dcd7384debcb4b9130609c4ec24950d28c312e800fb7e6c983c68e6d3c3\"" Jun 21 05:45:13.026399 containerd[1551]: time="2025-06-21T05:45:13.026079949Z" level=info msg="connecting to shim dd843dcd7384debcb4b9130609c4ec24950d28c312e800fb7e6c983c68e6d3c3" address="unix:///run/containerd/s/10f257614ff13f6b7a54e0328eac9cdd18e0e103a76cac1e4db0aa6468274363" protocol=ttrpc version=3 Jun 21 05:45:13.034975 systemd[1]: Started cri-containerd-29eb34daf81cc32f6552bb766128fdd6f0490e9237ceca1ceba6bb254b0f62ad.scope - libcontainer container 29eb34daf81cc32f6552bb766128fdd6f0490e9237ceca1ceba6bb254b0f62ad. Jun 21 05:45:13.055806 systemd[1]: Started cri-containerd-44621f2e8a409e5362fd9154b21542746ef74979cda63169923d0b1ccdb4281b.scope - libcontainer container 44621f2e8a409e5362fd9154b21542746ef74979cda63169923d0b1ccdb4281b. Jun 21 05:45:13.065212 systemd[1]: Started cri-containerd-dd843dcd7384debcb4b9130609c4ec24950d28c312e800fb7e6c983c68e6d3c3.scope - libcontainer container dd843dcd7384debcb4b9130609c4ec24950d28c312e800fb7e6c983c68e6d3c3. 
Jun 21 05:45:13.130270 kubelet[2343]: I0621 05:45:13.129811 2343 kubelet_node_status.go:72] "Attempting to register node" node="172-233-208-28" Jun 21 05:45:13.130724 kubelet[2343]: E0621 05:45:13.130679 2343 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.233.208.28:6443/api/v1/nodes\": dial tcp 172.233.208.28:6443: connect: connection refused" node="172-233-208-28" Jun 21 05:45:13.134104 containerd[1551]: time="2025-06-21T05:45:13.134077715Z" level=info msg="StartContainer for \"29eb34daf81cc32f6552bb766128fdd6f0490e9237ceca1ceba6bb254b0f62ad\" returns successfully" Jun 21 05:45:13.150479 kubelet[2343]: W0621 05:45:13.150140 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.233.208.28:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-208-28&limit=500&resourceVersion=0": dial tcp 172.233.208.28:6443: connect: connection refused Jun 21 05:45:13.151802 kubelet[2343]: E0621 05:45:13.151773 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.233.208.28:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-208-28&limit=500&resourceVersion=0\": dial tcp 172.233.208.28:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:45:13.157433 kubelet[2343]: W0621 05:45:13.157270 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.233.208.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.233.208.28:6443: connect: connection refused Jun 21 05:45:13.158778 kubelet[2343]: E0621 05:45:13.158710 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.233.208.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
172.233.208.28:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:45:13.173528 containerd[1551]: time="2025-06-21T05:45:13.173505776Z" level=info msg="StartContainer for \"44621f2e8a409e5362fd9154b21542746ef74979cda63169923d0b1ccdb4281b\" returns successfully" Jun 21 05:45:13.197109 containerd[1551]: time="2025-06-21T05:45:13.197052534Z" level=info msg="StartContainer for \"dd843dcd7384debcb4b9130609c4ec24950d28c312e800fb7e6c983c68e6d3c3\" returns successfully" Jun 21 05:45:13.390991 kubelet[2343]: E0621 05:45:13.390570 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:13.392567 kubelet[2343]: E0621 05:45:13.392546 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:13.407664 kubelet[2343]: E0621 05:45:13.405391 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:13.940850 kubelet[2343]: I0621 05:45:13.940793 2343 kubelet_node_status.go:72] "Attempting to register node" node="172-233-208-28" Jun 21 05:45:14.406222 kubelet[2343]: E0621 05:45:14.406169 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:14.410274 kubelet[2343]: E0621 05:45:14.410236 2343 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-233-208-28\" not found" node="172-233-208-28" Jun 21 05:45:14.571966 kubelet[2343]: I0621 05:45:14.571909 2343 kubelet_node_status.go:75] "Successfully registered node" node="172-233-208-28" Jun 
21 05:45:14.571966 kubelet[2343]: E0621 05:45:14.571948 2343 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172-233-208-28\": node \"172-233-208-28\" not found" Jun 21 05:45:14.591103 kubelet[2343]: E0621 05:45:14.591064 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:14.692342 kubelet[2343]: E0621 05:45:14.692196 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:14.792962 kubelet[2343]: E0621 05:45:14.792936 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:14.894141 kubelet[2343]: E0621 05:45:14.894097 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:14.958222 kubelet[2343]: E0621 05:45:14.958150 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:14.995163 kubelet[2343]: E0621 05:45:14.995101 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:15.095943 kubelet[2343]: E0621 05:45:15.095894 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:15.196707 kubelet[2343]: E0621 05:45:15.196619 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:15.297442 kubelet[2343]: E0621 05:45:15.297328 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:15.398548 kubelet[2343]: E0621 05:45:15.398489 2343 kubelet_node_status.go:453] "Error getting the 
current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:15.499098 kubelet[2343]: E0621 05:45:15.499057 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:15.599997 kubelet[2343]: E0621 05:45:15.599947 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:15.700073 kubelet[2343]: E0621 05:45:15.700032 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:16.333993 kubelet[2343]: I0621 05:45:16.333787 2343 apiserver.go:52] "Watching apiserver" Jun 21 05:45:16.349919 kubelet[2343]: I0621 05:45:16.349885 2343 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 21 05:45:16.489346 systemd[1]: Reload requested from client PID 2614 ('systemctl') (unit session-7.scope)... Jun 21 05:45:16.489370 systemd[1]: Reloading... Jun 21 05:45:16.605695 zram_generator::config[2658]: No configuration found. Jun 21 05:45:16.710186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:45:16.823424 systemd[1]: Reloading finished in 333 ms. Jun 21 05:45:16.852126 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:45:16.879047 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 05:45:16.879368 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:45:16.879424 systemd[1]: kubelet.service: Consumed 694ms CPU time, 127.6M memory peak. Jun 21 05:45:16.881987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:45:17.071661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 21 05:45:17.077051 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 05:45:17.116905 kubelet[2709]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:45:17.116905 kubelet[2709]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 21 05:45:17.116905 kubelet[2709]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:45:17.117264 kubelet[2709]: I0621 05:45:17.116997 2709 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 05:45:17.125077 kubelet[2709]: I0621 05:45:17.124957 2709 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 21 05:45:17.125077 kubelet[2709]: I0621 05:45:17.124978 2709 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 05:45:17.125151 kubelet[2709]: I0621 05:45:17.125101 2709 server.go:934] "Client rotation is on, will bootstrap in background" Jun 21 05:45:17.126056 kubelet[2709]: I0621 05:45:17.126027 2709 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 21 05:45:17.127544 kubelet[2709]: I0621 05:45:17.127512 2709 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 05:45:17.133030 kubelet[2709]: I0621 05:45:17.132367 2709 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 05:45:17.136342 kubelet[2709]: I0621 05:45:17.136323 2709 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 21 05:45:17.136441 kubelet[2709]: I0621 05:45:17.136425 2709 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 21 05:45:17.136572 kubelet[2709]: I0621 05:45:17.136545 2709 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 05:45:17.136714 kubelet[2709]: I0621 05:45:17.136567 2709 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-208-28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"
nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 05:45:17.136804 kubelet[2709]: I0621 05:45:17.136719 2709 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 05:45:17.136804 kubelet[2709]: I0621 05:45:17.136728 2709 container_manager_linux.go:300] "Creating device plugin manager" Jun 21 05:45:17.136804 kubelet[2709]: I0621 05:45:17.136749 2709 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:45:17.137028 kubelet[2709]: I0621 05:45:17.137008 2709 kubelet.go:408] "Attempting to sync node with API server" Jun 21 05:45:17.137028 kubelet[2709]: I0621 05:45:17.137024 2709 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 05:45:17.137264 kubelet[2709]: I0621 05:45:17.137048 2709 kubelet.go:314] "Adding apiserver pod source" Jun 21 05:45:17.137264 kubelet[2709]: I0621 05:45:17.137254 2709 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 05:45:17.141976 kubelet[2709]: I0621 05:45:17.141942 2709 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 05:45:17.142876 kubelet[2709]: I0621 05:45:17.142864 2709 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 05:45:17.145158 kubelet[2709]: I0621 05:45:17.145073 2709 server.go:1274] "Started kubelet" Jun 21 05:45:17.149961 kubelet[2709]: I0621 05:45:17.149948 2709 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 
05:45:17.150531 kubelet[2709]: I0621 05:45:17.150397 2709 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 05:45:17.153530 kubelet[2709]: I0621 05:45:17.153022 2709 server.go:449] "Adding debug handlers to kubelet server" Jun 21 05:45:17.155407 kubelet[2709]: I0621 05:45:17.155367 2709 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 05:45:17.155969 kubelet[2709]: I0621 05:45:17.155939 2709 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 05:45:17.157728 kubelet[2709]: I0621 05:45:17.157711 2709 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 05:45:17.159968 kubelet[2709]: I0621 05:45:17.159945 2709 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 21 05:45:17.160245 kubelet[2709]: E0621 05:45:17.160230 2709 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-208-28\" not found" Jun 21 05:45:17.162076 kubelet[2709]: I0621 05:45:17.162049 2709 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 21 05:45:17.163613 kubelet[2709]: I0621 05:45:17.163594 2709 reconciler.go:26] "Reconciler: start to sync state" Jun 21 05:45:17.163937 kubelet[2709]: I0621 05:45:17.163898 2709 factory.go:221] Registration of the systemd container factory successfully Jun 21 05:45:17.164005 kubelet[2709]: I0621 05:45:17.163973 2709 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 05:45:17.166302 kubelet[2709]: I0621 05:45:17.166240 2709 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 21 05:45:17.166820 kubelet[2709]: E0621 05:45:17.166788 2709 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 05:45:17.169396 kubelet[2709]: I0621 05:45:17.169363 2709 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 05:45:17.169396 kubelet[2709]: I0621 05:45:17.169388 2709 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 21 05:45:17.169477 kubelet[2709]: I0621 05:45:17.169404 2709 kubelet.go:2321] "Starting kubelet main sync loop" Jun 21 05:45:17.169477 kubelet[2709]: E0621 05:45:17.169449 2709 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 05:45:17.171342 kubelet[2709]: I0621 05:45:17.171210 2709 factory.go:221] Registration of the containerd container factory successfully Jun 21 05:45:17.226983 kubelet[2709]: I0621 05:45:17.226956 2709 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 21 05:45:17.227521 kubelet[2709]: I0621 05:45:17.227240 2709 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 21 05:45:17.227521 kubelet[2709]: I0621 05:45:17.227269 2709 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:45:17.227521 kubelet[2709]: I0621 05:45:17.227420 2709 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 05:45:17.227521 kubelet[2709]: I0621 05:45:17.227431 2709 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 05:45:17.227521 kubelet[2709]: I0621 05:45:17.227454 2709 policy_none.go:49] "None policy: Start" Jun 21 05:45:17.228189 kubelet[2709]: I0621 05:45:17.228176 2709 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 21 05:45:17.228335 kubelet[2709]: I0621 05:45:17.228324 2709 state_mem.go:35] "Initializing new in-memory state store" Jun 21 05:45:17.228561 kubelet[2709]: 
I0621 05:45:17.228549 2709 state_mem.go:75] "Updated machine memory state" Jun 21 05:45:17.234483 kubelet[2709]: I0621 05:45:17.234466 2709 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 05:45:17.235229 kubelet[2709]: I0621 05:45:17.235216 2709 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 05:45:17.235534 kubelet[2709]: I0621 05:45:17.235273 2709 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 05:45:17.236040 kubelet[2709]: I0621 05:45:17.236027 2709 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 05:45:17.339916 kubelet[2709]: I0621 05:45:17.339883 2709 kubelet_node_status.go:72] "Attempting to register node" node="172-233-208-28" Jun 21 05:45:17.346094 kubelet[2709]: I0621 05:45:17.345924 2709 kubelet_node_status.go:111] "Node was previously registered" node="172-233-208-28" Jun 21 05:45:17.346094 kubelet[2709]: I0621 05:45:17.346004 2709 kubelet_node_status.go:75] "Successfully registered node" node="172-233-208-28" Jun 21 05:45:17.365128 kubelet[2709]: I0621 05:45:17.365091 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8d6e321566021293b8f2e012906028a-ca-certs\") pod \"kube-controller-manager-172-233-208-28\" (UID: \"f8d6e321566021293b8f2e012906028a\") " pod="kube-system/kube-controller-manager-172-233-208-28" Jun 21 05:45:17.365128 kubelet[2709]: I0621 05:45:17.365120 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8d6e321566021293b8f2e012906028a-k8s-certs\") pod \"kube-controller-manager-172-233-208-28\" (UID: \"f8d6e321566021293b8f2e012906028a\") " pod="kube-system/kube-controller-manager-172-233-208-28" Jun 21 05:45:17.365128 kubelet[2709]: I0621 
05:45:17.365144 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dad76bed08043afb1e94f6a99dc2fcf6-kubeconfig\") pod \"kube-scheduler-172-233-208-28\" (UID: \"dad76bed08043afb1e94f6a99dc2fcf6\") " pod="kube-system/kube-scheduler-172-233-208-28" Jun 21 05:45:17.365128 kubelet[2709]: I0621 05:45:17.365162 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8bd879010b305ae94c4cb24d4079f1ef-ca-certs\") pod \"kube-apiserver-172-233-208-28\" (UID: \"8bd879010b305ae94c4cb24d4079f1ef\") " pod="kube-system/kube-apiserver-172-233-208-28" Jun 21 05:45:17.365503 kubelet[2709]: I0621 05:45:17.365182 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8bd879010b305ae94c4cb24d4079f1ef-k8s-certs\") pod \"kube-apiserver-172-233-208-28\" (UID: \"8bd879010b305ae94c4cb24d4079f1ef\") " pod="kube-system/kube-apiserver-172-233-208-28" Jun 21 05:45:17.365503 kubelet[2709]: I0621 05:45:17.365206 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bd879010b305ae94c4cb24d4079f1ef-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-208-28\" (UID: \"8bd879010b305ae94c4cb24d4079f1ef\") " pod="kube-system/kube-apiserver-172-233-208-28" Jun 21 05:45:17.365503 kubelet[2709]: I0621 05:45:17.365227 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8d6e321566021293b8f2e012906028a-flexvolume-dir\") pod \"kube-controller-manager-172-233-208-28\" (UID: \"f8d6e321566021293b8f2e012906028a\") " pod="kube-system/kube-controller-manager-172-233-208-28" Jun 21 05:45:17.365503 
kubelet[2709]: I0621 05:45:17.365244 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8d6e321566021293b8f2e012906028a-kubeconfig\") pod \"kube-controller-manager-172-233-208-28\" (UID: \"f8d6e321566021293b8f2e012906028a\") " pod="kube-system/kube-controller-manager-172-233-208-28"
Jun 21 05:45:17.365503 kubelet[2709]: I0621 05:45:17.365266 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8d6e321566021293b8f2e012906028a-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-208-28\" (UID: \"f8d6e321566021293b8f2e012906028a\") " pod="kube-system/kube-controller-manager-172-233-208-28"
Jun 21 05:45:17.489937 sudo[2743]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jun 21 05:45:17.490958 sudo[2743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jun 21 05:45:17.579916 kubelet[2709]: E0621 05:45:17.579580 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:17.581960 kubelet[2709]: E0621 05:45:17.581443 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:17.582273 kubelet[2709]: E0621 05:45:17.582260 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:17.991707 sudo[2743]: pam_unix(sudo:session): session closed for user root
Jun 21 05:45:18.139080 kubelet[2709]: I0621 05:45:18.138695 2709 apiserver.go:52] "Watching apiserver"
Jun 21 05:45:18.163067 kubelet[2709]: I0621 05:45:18.162992 2709 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jun 21 05:45:18.200744 kubelet[2709]: E0621 05:45:18.200572 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:18.203081 kubelet[2709]: E0621 05:45:18.201203 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:18.222819 kubelet[2709]: E0621 05:45:18.222754 2709 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-172-233-208-28\" already exists" pod="kube-system/kube-controller-manager-172-233-208-28"
Jun 21 05:45:18.223080 kubelet[2709]: E0621 05:45:18.223039 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:18.328400 kubelet[2709]: I0621 05:45:18.328218 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-233-208-28" podStartSLOduration=1.328190678 podStartE2EDuration="1.328190678s" podCreationTimestamp="2025-06-21 05:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:45:18.315238314 +0000 UTC m=+1.234317744" watchObservedRunningTime="2025-06-21 05:45:18.328190678 +0000 UTC m=+1.247270108"
Jun 21 05:45:18.339666 kubelet[2709]: I0621 05:45:18.338692 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-233-208-28" podStartSLOduration=1.338679712 podStartE2EDuration="1.338679712s" podCreationTimestamp="2025-06-21 05:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:45:18.329473477 +0000 UTC m=+1.248552917" watchObservedRunningTime="2025-06-21 05:45:18.338679712 +0000 UTC m=+1.257759152"
Jun 21 05:45:18.339666 kubelet[2709]: I0621 05:45:18.338752 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-233-208-28" podStartSLOduration=1.3387482419999999 podStartE2EDuration="1.338748242s" podCreationTimestamp="2025-06-21 05:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:45:18.338615242 +0000 UTC m=+1.257694672" watchObservedRunningTime="2025-06-21 05:45:18.338748242 +0000 UTC m=+1.257827672"
Jun 21 05:45:19.086672 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jun 21 05:45:19.204374 kubelet[2709]: E0621 05:45:19.204329 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:19.204374 kubelet[2709]: E0621 05:45:19.204970 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:19.204374 kubelet[2709]: E0621 05:45:19.205228 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:19.291623 sudo[1800]: pam_unix(sudo:session): session closed for user root
Jun 21 05:45:19.343866 sshd[1799]: Connection closed by 147.75.109.163 port 42628
Jun 21 05:45:19.344488 sshd-session[1797]: pam_unix(sshd:session): session closed for user core
Jun 21 05:45:19.349621 systemd[1]: sshd@6-172.233.208.28:22-147.75.109.163:42628.service: Deactivated successfully.
Jun 21 05:45:19.352618 systemd[1]: session-7.scope: Deactivated successfully.
Jun 21 05:45:19.353014 systemd[1]: session-7.scope: Consumed 3.541s CPU time, 265.4M memory peak.
Jun 21 05:45:19.355924 systemd-logind[1512]: Session 7 logged out. Waiting for processes to exit.
Jun 21 05:45:19.358826 systemd-logind[1512]: Removed session 7.
Jun 21 05:45:20.206530 kubelet[2709]: E0621 05:45:20.206197 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:22.721683 kubelet[2709]: I0621 05:45:22.721624 2709 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 21 05:45:22.722211 containerd[1551]: time="2025-06-21T05:45:22.722036903Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 21 05:45:22.722443 kubelet[2709]: I0621 05:45:22.722207 2709 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 21 05:45:23.826371 systemd[1]: Created slice kubepods-besteffort-podfa74cb8f_a808_4c37_a58a_333bfb084655.slice - libcontainer container kubepods-besteffort-podfa74cb8f_a808_4c37_a58a_333bfb084655.slice.
Jun 21 05:45:23.895671 kubelet[2709]: W0621 05:45:23.895552 2709 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172-233-208-28" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-233-208-28' and this object
Jun 21 05:45:23.895671 kubelet[2709]: E0621 05:45:23.895600 2709 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:172-233-208-28\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-233-208-28' and this object" logger="UnhandledError"
Jun 21 05:45:23.895671 kubelet[2709]: W0621 05:45:23.895635 2709 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172-233-208-28" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-233-208-28' and this object
Jun 21 05:45:23.897109 kubelet[2709]: E0621 05:45:23.896691 2709 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:172-233-208-28\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-233-208-28' and this object" logger="UnhandledError"
Jun 21 05:45:23.903922 kubelet[2709]: W0621 05:45:23.903868 2709 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:172-233-208-28" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-233-208-28' and this object
Jun 21 05:45:23.903922 kubelet[2709]: E0621 05:45:23.903898 2709 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:172-233-208-28\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-233-208-28' and this object" logger="UnhandledError"
Jun 21 05:45:23.906842 systemd[1]: Created slice kubepods-burstable-pod6c51f834_1773_4b0c_a525_1051d089db39.slice - libcontainer container kubepods-burstable-pod6c51f834_1773_4b0c_a525_1051d089db39.slice.
Jun 21 05:45:23.910674 kubelet[2709]: I0621 05:45:23.910265 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-bpf-maps\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.910674 kubelet[2709]: I0621 05:45:23.910307 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-hostproc\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.910674 kubelet[2709]: I0621 05:45:23.910419 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c51f834-1773-4b0c-a525-1051d089db39-clustermesh-secrets\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.910674 kubelet[2709]: I0621 05:45:23.910441 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhrw9\" (UniqueName: \"kubernetes.io/projected/4742faf6-2c30-460d-b7f9-000fdcf06c17-kube-api-access-zhrw9\") pod \"kube-proxy-jcph2\" (UID: \"4742faf6-2c30-460d-b7f9-000fdcf06c17\") " pod="kube-system/kube-proxy-jcph2"
Jun 21 05:45:23.910674 kubelet[2709]: I0621 05:45:23.910461 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c51f834-1773-4b0c-a525-1051d089db39-hubble-tls\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.910674 kubelet[2709]: I0621 05:45:23.910578 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-cilium-cgroup\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.910835 kubelet[2709]: I0621 05:45:23.910600 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7pcv\" (UniqueName: \"kubernetes.io/projected/6c51f834-1773-4b0c-a525-1051d089db39-kube-api-access-k7pcv\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.910835 kubelet[2709]: I0621 05:45:23.910616 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-xtables-lock\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.910835 kubelet[2709]: I0621 05:45:23.910633 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4742faf6-2c30-460d-b7f9-000fdcf06c17-kube-proxy\") pod \"kube-proxy-jcph2\" (UID: \"4742faf6-2c30-460d-b7f9-000fdcf06c17\") " pod="kube-system/kube-proxy-jcph2"
Jun 21 05:45:23.910835 kubelet[2709]: I0621 05:45:23.910766 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4742faf6-2c30-460d-b7f9-000fdcf06c17-xtables-lock\") pod \"kube-proxy-jcph2\" (UID: \"4742faf6-2c30-460d-b7f9-000fdcf06c17\") " pod="kube-system/kube-proxy-jcph2"
Jun 21 05:45:23.910835 kubelet[2709]: I0621 05:45:23.910796 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhhdv\" (UniqueName: \"kubernetes.io/projected/fa74cb8f-a808-4c37-a58a-333bfb084655-kube-api-access-bhhdv\") pod \"cilium-operator-5d85765b45-t8djn\" (UID: \"fa74cb8f-a808-4c37-a58a-333bfb084655\") " pod="kube-system/cilium-operator-5d85765b45-t8djn"
Jun 21 05:45:23.910939 kubelet[2709]: I0621 05:45:23.910812 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-cni-path\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.910939 kubelet[2709]: I0621 05:45:23.910846 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c51f834-1773-4b0c-a525-1051d089db39-cilium-config-path\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.910939 kubelet[2709]: I0621 05:45:23.910863 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-host-proc-sys-net\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.910939 kubelet[2709]: I0621 05:45:23.910882 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-host-proc-sys-kernel\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.911023 kubelet[2709]: I0621 05:45:23.910993 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa74cb8f-a808-4c37-a58a-333bfb084655-cilium-config-path\") pod \"cilium-operator-5d85765b45-t8djn\" (UID: \"fa74cb8f-a808-4c37-a58a-333bfb084655\") " pod="kube-system/cilium-operator-5d85765b45-t8djn"
Jun 21 05:45:23.911023 kubelet[2709]: I0621 05:45:23.911013 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-cilium-run\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.911064 kubelet[2709]: I0621 05:45:23.911029 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-etc-cni-netd\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.914044 kubelet[2709]: I0621 05:45:23.911133 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4742faf6-2c30-460d-b7f9-000fdcf06c17-lib-modules\") pod \"kube-proxy-jcph2\" (UID: \"4742faf6-2c30-460d-b7f9-000fdcf06c17\") " pod="kube-system/kube-proxy-jcph2"
Jun 21 05:45:23.914044 kubelet[2709]: I0621 05:45:23.911155 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-lib-modules\") pod \"cilium-8cd46\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") " pod="kube-system/cilium-8cd46"
Jun 21 05:45:23.914561 systemd[1]: Created slice kubepods-besteffort-pod4742faf6_2c30_460d_b7f9_000fdcf06c17.slice - libcontainer container kubepods-besteffort-pod4742faf6_2c30_460d_b7f9_000fdcf06c17.slice.
Jun 21 05:45:24.136447 kubelet[2709]: E0621 05:45:24.136102 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:24.136973 containerd[1551]: time="2025-06-21T05:45:24.136926488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-t8djn,Uid:fa74cb8f-a808-4c37-a58a-333bfb084655,Namespace:kube-system,Attempt:0,}"
Jun 21 05:45:24.155780 containerd[1551]: time="2025-06-21T05:45:24.155741880Z" level=info msg="connecting to shim 778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d" address="unix:///run/containerd/s/9f78b9e60bfeebcf1624b50429267b41c007ff88480c032ed9c958fe61a8febb" namespace=k8s.io protocol=ttrpc version=3
Jun 21 05:45:24.187796 systemd[1]: Started cri-containerd-778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d.scope - libcontainer container 778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d.
Jun 21 05:45:24.237539 containerd[1551]: time="2025-06-21T05:45:24.237438462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-t8djn,Uid:fa74cb8f-a808-4c37-a58a-333bfb084655,Namespace:kube-system,Attempt:0,} returns sandbox id \"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\""
Jun 21 05:45:24.238200 kubelet[2709]: E0621 05:45:24.238179 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:24.241978 containerd[1551]: time="2025-06-21T05:45:24.241932653Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jun 21 05:45:25.014512 kubelet[2709]: E0621 05:45:25.014455 2709 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Jun 21 05:45:25.015016 kubelet[2709]: E0621 05:45:25.014546 2709 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4742faf6-2c30-460d-b7f9-000fdcf06c17-kube-proxy podName:4742faf6-2c30-460d-b7f9-000fdcf06c17 nodeName:}" failed. No retries permitted until 2025-06-21 05:45:25.514522389 +0000 UTC m=+8.433601839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/4742faf6-2c30-460d-b7f9-000fdcf06c17-kube-proxy") pod "kube-proxy-jcph2" (UID: "4742faf6-2c30-460d-b7f9-000fdcf06c17") : failed to sync configmap cache: timed out waiting for the condition
Jun 21 05:45:25.113518 kubelet[2709]: E0621 05:45:25.113478 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:25.114255 containerd[1551]: time="2025-06-21T05:45:25.113984450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8cd46,Uid:6c51f834-1773-4b0c-a525-1051d089db39,Namespace:kube-system,Attempt:0,}"
Jun 21 05:45:25.130912 containerd[1551]: time="2025-06-21T05:45:25.130888113Z" level=info msg="connecting to shim 34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f" address="unix:///run/containerd/s/1c1bde417fb5aaa373393ec78a52056b39a910c8277b96ca354ab9e6769b14fe" namespace=k8s.io protocol=ttrpc version=3
Jun 21 05:45:25.156785 systemd[1]: Started cri-containerd-34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f.scope - libcontainer container 34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f.
Jun 21 05:45:25.207263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405502424.mount: Deactivated successfully.
Jun 21 05:45:25.219386 containerd[1551]: time="2025-06-21T05:45:25.219331920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8cd46,Uid:6c51f834-1773-4b0c-a525-1051d089db39,Namespace:kube-system,Attempt:0,} returns sandbox id \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\""
Jun 21 05:45:25.220982 kubelet[2709]: E0621 05:45:25.220796 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:25.582255 containerd[1551]: time="2025-06-21T05:45:25.582214966Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 05:45:25.582924 containerd[1551]: time="2025-06-21T05:45:25.582852375Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jun 21 05:45:25.583517 containerd[1551]: time="2025-06-21T05:45:25.583456476Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 05:45:25.584665 containerd[1551]: time="2025-06-21T05:45:25.584540835Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.342577673s"
Jun 21 05:45:25.584665 containerd[1551]: time="2025-06-21T05:45:25.584574303Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jun 21 05:45:25.585587 containerd[1551]: time="2025-06-21T05:45:25.585569675Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jun 21 05:45:25.588752 containerd[1551]: time="2025-06-21T05:45:25.588702185Z" level=info msg="CreateContainer within sandbox \"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jun 21 05:45:25.595556 containerd[1551]: time="2025-06-21T05:45:25.595499411Z" level=info msg="Container 44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c: CDI devices from CRI Config.CDIDevices: []"
Jun 21 05:45:25.620918 containerd[1551]: time="2025-06-21T05:45:25.620865000Z" level=info msg="CreateContainer within sandbox \"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c\""
Jun 21 05:45:25.621834 containerd[1551]: time="2025-06-21T05:45:25.621788876Z" level=info msg="StartContainer for \"44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c\""
Jun 21 05:45:25.624213 containerd[1551]: time="2025-06-21T05:45:25.624065248Z" level=info msg="connecting to shim 44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c" address="unix:///run/containerd/s/9f78b9e60bfeebcf1624b50429267b41c007ff88480c032ed9c958fe61a8febb" protocol=ttrpc version=3
Jun 21 05:45:25.642791 systemd[1]: Started cri-containerd-44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c.scope - libcontainer container 44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c.
Jun 21 05:45:25.675526 containerd[1551]: time="2025-06-21T05:45:25.675455524Z" level=info msg="StartContainer for \"44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c\" returns successfully"
Jun 21 05:45:25.718607 kubelet[2709]: E0621 05:45:25.718551 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:25.719482 containerd[1551]: time="2025-06-21T05:45:25.719414175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jcph2,Uid:4742faf6-2c30-460d-b7f9-000fdcf06c17,Namespace:kube-system,Attempt:0,}"
Jun 21 05:45:25.736914 containerd[1551]: time="2025-06-21T05:45:25.736560986Z" level=info msg="connecting to shim 038312c8ab7b94fd56b400d521245a1188a55d0967311529a2bb5aaca00f0ccc" address="unix:///run/containerd/s/e0cbf37df0666b5d1495e98151aeb10288374442b79428aaa480edf379e7f92f" namespace=k8s.io protocol=ttrpc version=3
Jun 21 05:45:25.763782 systemd[1]: Started cri-containerd-038312c8ab7b94fd56b400d521245a1188a55d0967311529a2bb5aaca00f0ccc.scope - libcontainer container 038312c8ab7b94fd56b400d521245a1188a55d0967311529a2bb5aaca00f0ccc.
Jun 21 05:45:25.791825 containerd[1551]: time="2025-06-21T05:45:25.791780790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jcph2,Uid:4742faf6-2c30-460d-b7f9-000fdcf06c17,Namespace:kube-system,Attempt:0,} returns sandbox id \"038312c8ab7b94fd56b400d521245a1188a55d0967311529a2bb5aaca00f0ccc\""
Jun 21 05:45:25.793582 kubelet[2709]: E0621 05:45:25.793542 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:25.796836 containerd[1551]: time="2025-06-21T05:45:25.796364672Z" level=info msg="CreateContainer within sandbox \"038312c8ab7b94fd56b400d521245a1188a55d0967311529a2bb5aaca00f0ccc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 21 05:45:25.804260 containerd[1551]: time="2025-06-21T05:45:25.804126231Z" level=info msg="Container 3c18af94379f3f59b67e99f5e80cf0d36aace7d52ccc8d2de84d0338707ca3f3: CDI devices from CRI Config.CDIDevices: []"
Jun 21 05:45:25.810388 containerd[1551]: time="2025-06-21T05:45:25.810348704Z" level=info msg="CreateContainer within sandbox \"038312c8ab7b94fd56b400d521245a1188a55d0967311529a2bb5aaca00f0ccc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3c18af94379f3f59b67e99f5e80cf0d36aace7d52ccc8d2de84d0338707ca3f3\""
Jun 21 05:45:25.812073 containerd[1551]: time="2025-06-21T05:45:25.812011915Z" level=info msg="StartContainer for \"3c18af94379f3f59b67e99f5e80cf0d36aace7d52ccc8d2de84d0338707ca3f3\""
Jun 21 05:45:25.814373 containerd[1551]: time="2025-06-21T05:45:25.814178531Z" level=info msg="connecting to shim 3c18af94379f3f59b67e99f5e80cf0d36aace7d52ccc8d2de84d0338707ca3f3" address="unix:///run/containerd/s/e0cbf37df0666b5d1495e98151aeb10288374442b79428aaa480edf379e7f92f" protocol=ttrpc version=3
Jun 21 05:45:25.836802 systemd[1]: Started cri-containerd-3c18af94379f3f59b67e99f5e80cf0d36aace7d52ccc8d2de84d0338707ca3f3.scope - libcontainer container 3c18af94379f3f59b67e99f5e80cf0d36aace7d52ccc8d2de84d0338707ca3f3.
Jun 21 05:45:25.882125 containerd[1551]: time="2025-06-21T05:45:25.881041168Z" level=info msg="StartContainer for \"3c18af94379f3f59b67e99f5e80cf0d36aace7d52ccc8d2de84d0338707ca3f3\" returns successfully"
Jun 21 05:45:26.228416 kubelet[2709]: E0621 05:45:26.228305 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:26.241149 kubelet[2709]: E0621 05:45:26.240993 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:26.267602 kubelet[2709]: I0621 05:45:26.267552 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jcph2" podStartSLOduration=3.2675269350000002 podStartE2EDuration="3.267526935s" podCreationTimestamp="2025-06-21 05:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:45:26.256380765 +0000 UTC m=+9.175460205" watchObservedRunningTime="2025-06-21 05:45:26.267526935 +0000 UTC m=+9.186606385"
Jun 21 05:45:27.243307 kubelet[2709]: E0621 05:45:27.243246 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:27.629004 kubelet[2709]: E0621 05:45:27.628954 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:27.656760 kubelet[2709]: I0621 05:45:27.655189 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-t8djn" podStartSLOduration=3.310022771 podStartE2EDuration="4.655168395s" podCreationTimestamp="2025-06-21 05:45:23 +0000 UTC" firstStartedPulling="2025-06-21 05:45:24.240252969 +0000 UTC m=+7.159332409" lastFinishedPulling="2025-06-21 05:45:25.585398593 +0000 UTC m=+8.504478033" observedRunningTime="2025-06-21 05:45:26.268002894 +0000 UTC m=+9.187082334" watchObservedRunningTime="2025-06-21 05:45:27.655168395 +0000 UTC m=+10.574247835"
Jun 21 05:45:28.526939 kubelet[2709]: E0621 05:45:28.526911 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:29.007944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3471515882.mount: Deactivated successfully.
Jun 21 05:45:30.014807 kubelet[2709]: E0621 05:45:30.014545 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:30.252622 kubelet[2709]: E0621 05:45:30.252409 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:30.545947 containerd[1551]: time="2025-06-21T05:45:30.545884281Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 05:45:30.547287 containerd[1551]: time="2025-06-21T05:45:30.547254053Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jun 21 05:45:30.549163 containerd[1551]: time="2025-06-21T05:45:30.548490721Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 05:45:30.550994 containerd[1551]: time="2025-06-21T05:45:30.550956215Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.965309354s"
Jun 21 05:45:30.551035 containerd[1551]: time="2025-06-21T05:45:30.550991814Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jun 21 05:45:30.557157 containerd[1551]: time="2025-06-21T05:45:30.557027465Z" level=info msg="CreateContainer within sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 21 05:45:30.565316 containerd[1551]: time="2025-06-21T05:45:30.565294977Z" level=info msg="Container 0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69: CDI devices from CRI Config.CDIDevices: []"
Jun 21 05:45:30.574734 containerd[1551]: time="2025-06-21T05:45:30.574701801Z" level=info msg="CreateContainer within sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69\""
Jun 21 05:45:30.575336 containerd[1551]: time="2025-06-21T05:45:30.575304909Z" level=info msg="StartContainer for \"0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69\""
Jun 21 05:45:30.575974 containerd[1551]: time="2025-06-21T05:45:30.575940768Z" level=info msg="connecting to shim 0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69" address="unix:///run/containerd/s/1c1bde417fb5aaa373393ec78a52056b39a910c8277b96ca354ab9e6769b14fe" protocol=ttrpc version=3
Jun 21 05:45:30.599808 systemd[1]: Started cri-containerd-0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69.scope - libcontainer container 0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69.
Jun 21 05:45:30.634089 containerd[1551]: time="2025-06-21T05:45:30.634019852Z" level=info msg="StartContainer for \"0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69\" returns successfully"
Jun 21 05:45:30.649074 systemd[1]: cri-containerd-0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69.scope: Deactivated successfully.
Jun 21 05:45:30.652564 containerd[1551]: time="2025-06-21T05:45:30.652538919Z" level=info msg="received exit event container_id:\"0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69\" id:\"0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69\" pid:3169 exited_at:{seconds:1750484730 nanos:650567627}"
Jun 21 05:45:30.652772 containerd[1551]: time="2025-06-21T05:45:30.652750842Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69\" id:\"0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69\" pid:3169 exited_at:{seconds:1750484730 nanos:650567627}"
Jun 21 05:45:30.678567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69-rootfs.mount: Deactivated successfully.
Jun 21 05:45:31.254530 kubelet[2709]: E0621 05:45:31.254466 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:45:31.256745 containerd[1551]: time="2025-06-21T05:45:31.256701362Z" level=info msg="CreateContainer within sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 21 05:45:31.265008 containerd[1551]: time="2025-06-21T05:45:31.264954524Z" level=info msg="Container 3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f: CDI devices from CRI Config.CDIDevices: []"
Jun 21 05:45:31.270172 containerd[1551]: time="2025-06-21T05:45:31.270064387Z" level=info msg="CreateContainer within sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f\""
Jun 21 05:45:31.271325 containerd[1551]: time="2025-06-21T05:45:31.271188730Z" level=info msg="StartContainer for \"3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f\""
Jun 21 05:45:31.272805 containerd[1551]: time="2025-06-21T05:45:31.272715971Z" level=info msg="connecting to shim 3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f" address="unix:///run/containerd/s/1c1bde417fb5aaa373393ec78a52056b39a910c8277b96ca354ab9e6769b14fe" protocol=ttrpc version=3
Jun 21 05:45:31.289764 systemd[1]: Started cri-containerd-3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f.scope - libcontainer container 3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f.
Jun 21 05:45:31.315448 containerd[1551]: time="2025-06-21T05:45:31.315420159Z" level=info msg="StartContainer for \"3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f\" returns successfully"
Jun 21 05:45:31.328957 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 21 05:45:31.329417 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 21 05:45:31.329735 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jun 21 05:45:31.331803 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 21 05:45:31.332003 systemd[1]: cri-containerd-3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f.scope: Deactivated successfully.
Jun 21 05:45:31.333063 containerd[1551]: time="2025-06-21T05:45:31.333042055Z" level=info msg="received exit event container_id:\"3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f\" id:\"3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f\" pid:3214 exited_at:{seconds:1750484731 nanos:332516453}"
Jun 21 05:45:31.333227 containerd[1551]: time="2025-06-21T05:45:31.333209229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f\" id:\"3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f\" pid:3214 exited_at:{seconds:1750484731 nanos:332516453}"
Jun 21 05:45:31.360562 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 21 05:45:32.258102 kubelet[2709]: E0621 05:45:32.258032 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:32.260577 containerd[1551]: time="2025-06-21T05:45:32.260517392Z" level=info msg="CreateContainer within sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 21 05:45:32.279688 containerd[1551]: time="2025-06-21T05:45:32.279495741Z" level=info msg="Container fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:45:32.281637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598081006.mount: Deactivated successfully. Jun 21 05:45:32.287636 containerd[1551]: time="2025-06-21T05:45:32.287605473Z" level=info msg="CreateContainer within sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5\"" Jun 21 05:45:32.288265 containerd[1551]: time="2025-06-21T05:45:32.288227695Z" level=info msg="StartContainer for \"fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5\"" Jun 21 05:45:32.289670 containerd[1551]: time="2025-06-21T05:45:32.289623832Z" level=info msg="connecting to shim fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5" address="unix:///run/containerd/s/1c1bde417fb5aaa373393ec78a52056b39a910c8277b96ca354ab9e6769b14fe" protocol=ttrpc version=3 Jun 21 05:45:32.313787 systemd[1]: Started cri-containerd-fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5.scope - libcontainer container fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5. 
Jun 21 05:45:32.358308 containerd[1551]: time="2025-06-21T05:45:32.358206596Z" level=info msg="StartContainer for \"fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5\" returns successfully" Jun 21 05:45:32.364743 systemd[1]: cri-containerd-fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5.scope: Deactivated successfully. Jun 21 05:45:32.365957 containerd[1551]: time="2025-06-21T05:45:32.365932449Z" level=info msg="received exit event container_id:\"fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5\" id:\"fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5\" pid:3259 exited_at:{seconds:1750484732 nanos:365261449}" Jun 21 05:45:32.366068 containerd[1551]: time="2025-06-21T05:45:32.366028096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5\" id:\"fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5\" pid:3259 exited_at:{seconds:1750484732 nanos:365261449}" Jun 21 05:45:32.389521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5-rootfs.mount: Deactivated successfully. 
Jun 21 05:45:33.262969 kubelet[2709]: E0621 05:45:33.262180 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:33.266209 containerd[1551]: time="2025-06-21T05:45:33.266132631Z" level=info msg="CreateContainer within sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 21 05:45:33.280908 containerd[1551]: time="2025-06-21T05:45:33.280243326Z" level=info msg="Container d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:45:33.287612 containerd[1551]: time="2025-06-21T05:45:33.287562586Z" level=info msg="CreateContainer within sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c\"" Jun 21 05:45:33.288120 containerd[1551]: time="2025-06-21T05:45:33.288070241Z" level=info msg="StartContainer for \"d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c\"" Jun 21 05:45:33.288988 containerd[1551]: time="2025-06-21T05:45:33.288819979Z" level=info msg="connecting to shim d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c" address="unix:///run/containerd/s/1c1bde417fb5aaa373393ec78a52056b39a910c8277b96ca354ab9e6769b14fe" protocol=ttrpc version=3 Jun 21 05:45:33.310765 systemd[1]: Started cri-containerd-d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c.scope - libcontainer container d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c. Jun 21 05:45:33.338428 systemd[1]: cri-containerd-d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c.scope: Deactivated successfully. 
Jun 21 05:45:33.340354 containerd[1551]: time="2025-06-21T05:45:33.340323612Z" level=info msg="received exit event container_id:\"d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c\" id:\"d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c\" pid:3298 exited_at:{seconds:1750484733 nanos:340207596}" Jun 21 05:45:33.340598 containerd[1551]: time="2025-06-21T05:45:33.340458197Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c\" id:\"d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c\" pid:3298 exited_at:{seconds:1750484733 nanos:340207596}" Jun 21 05:45:33.340598 containerd[1551]: time="2025-06-21T05:45:33.340488817Z" level=info msg="StartContainer for \"d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c\" returns successfully" Jun 21 05:45:33.361944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c-rootfs.mount: Deactivated successfully. Jun 21 05:45:33.710097 update_engine[1521]: I20250621 05:45:33.710037 1521 update_attempter.cc:509] Updating boot flags... 
Jun 21 05:45:34.266773 kubelet[2709]: E0621 05:45:34.266738 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:34.270052 containerd[1551]: time="2025-06-21T05:45:34.270023252Z" level=info msg="CreateContainer within sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 05:45:34.283368 containerd[1551]: time="2025-06-21T05:45:34.283172438Z" level=info msg="Container 7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:45:34.287504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4267543939.mount: Deactivated successfully. Jun 21 05:45:34.295938 containerd[1551]: time="2025-06-21T05:45:34.295916445Z" level=info msg="CreateContainer within sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\"" Jun 21 05:45:34.297097 containerd[1551]: time="2025-06-21T05:45:34.296865909Z" level=info msg="StartContainer for \"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\"" Jun 21 05:45:34.297920 containerd[1551]: time="2025-06-21T05:45:34.297897082Z" level=info msg="connecting to shim 7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477" address="unix:///run/containerd/s/1c1bde417fb5aaa373393ec78a52056b39a910c8277b96ca354ab9e6769b14fe" protocol=ttrpc version=3 Jun 21 05:45:34.322763 systemd[1]: Started cri-containerd-7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477.scope - libcontainer container 7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477. 
Jun 21 05:45:34.364048 containerd[1551]: time="2025-06-21T05:45:34.364018171Z" level=info msg="StartContainer for \"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\" returns successfully" Jun 21 05:45:34.427045 containerd[1551]: time="2025-06-21T05:45:34.427004755Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\" id:\"882a4362aa25e65ebdb49ed454e74871d76d4a2a44adb9204960791ffe5e2529\" pid:3388 exited_at:{seconds:1750484734 nanos:426398141}" Jun 21 05:45:34.506324 kubelet[2709]: I0621 05:45:34.506270 2709 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 21 05:45:34.542384 systemd[1]: Created slice kubepods-burstable-podf9d5f268_fc04_46aa_9534_89fb5500f263.slice - libcontainer container kubepods-burstable-podf9d5f268_fc04_46aa_9534_89fb5500f263.slice. Jun 21 05:45:34.553207 kubelet[2709]: W0621 05:45:34.553186 2709 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:172-233-208-28" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-233-208-28' and this object Jun 21 05:45:34.553879 kubelet[2709]: E0621 05:45:34.553597 2709 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:172-233-208-28\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-233-208-28' and this object" logger="UnhandledError" Jun 21 05:45:34.560945 systemd[1]: Created slice kubepods-burstable-pode279209b_8a28_4ff6_b309_995b452bfc76.slice - libcontainer container kubepods-burstable-pode279209b_8a28_4ff6_b309_995b452bfc76.slice. 
Jun 21 05:45:34.586770 kubelet[2709]: I0621 05:45:34.586712 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e279209b-8a28-4ff6-b309-995b452bfc76-config-volume\") pod \"coredns-7c65d6cfc9-mmgrz\" (UID: \"e279209b-8a28-4ff6-b309-995b452bfc76\") " pod="kube-system/coredns-7c65d6cfc9-mmgrz" Jun 21 05:45:34.586921 kubelet[2709]: I0621 05:45:34.586838 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nkgg\" (UniqueName: \"kubernetes.io/projected/e279209b-8a28-4ff6-b309-995b452bfc76-kube-api-access-9nkgg\") pod \"coredns-7c65d6cfc9-mmgrz\" (UID: \"e279209b-8a28-4ff6-b309-995b452bfc76\") " pod="kube-system/coredns-7c65d6cfc9-mmgrz" Jun 21 05:45:34.586921 kubelet[2709]: I0621 05:45:34.586864 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9d5f268-fc04-46aa-9534-89fb5500f263-config-volume\") pod \"coredns-7c65d6cfc9-4gnxf\" (UID: \"f9d5f268-fc04-46aa-9534-89fb5500f263\") " pod="kube-system/coredns-7c65d6cfc9-4gnxf" Jun 21 05:45:34.587021 kubelet[2709]: I0621 05:45:34.587008 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvmsl\" (UniqueName: \"kubernetes.io/projected/f9d5f268-fc04-46aa-9534-89fb5500f263-kube-api-access-qvmsl\") pod \"coredns-7c65d6cfc9-4gnxf\" (UID: \"f9d5f268-fc04-46aa-9534-89fb5500f263\") " pod="kube-system/coredns-7c65d6cfc9-4gnxf" Jun 21 05:45:35.273667 kubelet[2709]: E0621 05:45:35.273404 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:35.286980 kubelet[2709]: I0621 05:45:35.286945 2709 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/cilium-8cd46" podStartSLOduration=6.95635362 podStartE2EDuration="12.286931629s" podCreationTimestamp="2025-06-21 05:45:23 +0000 UTC" firstStartedPulling="2025-06-21 05:45:25.221478098 +0000 UTC m=+8.140557538" lastFinishedPulling="2025-06-21 05:45:30.552056107 +0000 UTC m=+13.471135547" observedRunningTime="2025-06-21 05:45:35.285847827 +0000 UTC m=+18.204927277" watchObservedRunningTime="2025-06-21 05:45:35.286931629 +0000 UTC m=+18.206011069" Jun 21 05:45:35.688570 kubelet[2709]: E0621 05:45:35.688492 2709 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jun 21 05:45:35.688570 kubelet[2709]: E0621 05:45:35.688510 2709 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jun 21 05:45:35.688570 kubelet[2709]: E0621 05:45:35.688555 2709 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e279209b-8a28-4ff6-b309-995b452bfc76-config-volume podName:e279209b-8a28-4ff6-b309-995b452bfc76 nodeName:}" failed. No retries permitted until 2025-06-21 05:45:36.188539118 +0000 UTC m=+19.107618558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e279209b-8a28-4ff6-b309-995b452bfc76-config-volume") pod "coredns-7c65d6cfc9-mmgrz" (UID: "e279209b-8a28-4ff6-b309-995b452bfc76") : failed to sync configmap cache: timed out waiting for the condition Jun 21 05:45:35.688570 kubelet[2709]: E0621 05:45:35.688567 2709 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9d5f268-fc04-46aa-9534-89fb5500f263-config-volume podName:f9d5f268-fc04-46aa-9534-89fb5500f263 nodeName:}" failed. No retries permitted until 2025-06-21 05:45:36.188560738 +0000 UTC m=+19.107640178 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f9d5f268-fc04-46aa-9534-89fb5500f263-config-volume") pod "coredns-7c65d6cfc9-4gnxf" (UID: "f9d5f268-fc04-46aa-9534-89fb5500f263") : failed to sync configmap cache: timed out waiting for the condition Jun 21 05:45:36.274839 kubelet[2709]: E0621 05:45:36.274813 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:36.355039 kubelet[2709]: E0621 05:45:36.354758 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:36.355273 containerd[1551]: time="2025-06-21T05:45:36.355245113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4gnxf,Uid:f9d5f268-fc04-46aa-9534-89fb5500f263,Namespace:kube-system,Attempt:0,}" Jun 21 05:45:36.365106 kubelet[2709]: E0621 05:45:36.365070 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:36.366753 containerd[1551]: time="2025-06-21T05:45:36.366725631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mmgrz,Uid:e279209b-8a28-4ff6-b309-995b452bfc76,Namespace:kube-system,Attempt:0,}" Jun 21 05:45:37.276488 kubelet[2709]: E0621 05:45:37.276456 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:45:52.579986 kubelet[2709]: E0621 05:45:52.579932 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:46:03.361960 systemd-networkd[1461]: cilium_host: Link UP Jun 21 05:46:03.363046 systemd-networkd[1461]: cilium_net: Link UP Jun 21 05:46:03.364114 systemd-networkd[1461]: cilium_host: Gained carrier Jun 21 05:46:03.365260 systemd-networkd[1461]: cilium_net: Gained carrier Jun 21 05:46:03.462277 systemd-networkd[1461]: cilium_vxlan: Link UP Jun 21 05:46:03.462577 systemd-networkd[1461]: cilium_vxlan: Gained carrier Jun 21 05:46:03.655701 kernel: NET: Registered PF_ALG protocol family Jun 21 05:46:03.977896 systemd-networkd[1461]: cilium_host: Gained IPv6LL Jun 21 05:46:04.042770 systemd-networkd[1461]: cilium_net: Gained IPv6LL Jun 21 05:46:04.250211 systemd-networkd[1461]: lxc_health: Link UP Jun 21 05:46:04.263780 systemd-networkd[1461]: lxc_health: Gained carrier Jun 21 05:46:04.429496 kernel: eth0: renamed from tmpe3ced Jun 21 05:46:04.428823 systemd-networkd[1461]: lxc797318905f57: Link UP Jun 21 05:46:04.443298 systemd-networkd[1461]: lxc30cbd6d408c3: Link UP Jun 21 05:46:04.446127 kernel: eth0: renamed from tmpd1102 Jun 21 05:46:04.445270 systemd-networkd[1461]: lxc797318905f57: Gained carrier Jun 21 05:46:04.452384 systemd-networkd[1461]: lxc30cbd6d408c3: Gained carrier Jun 21 05:46:04.745785 systemd-networkd[1461]: cilium_vxlan: Gained IPv6LL Jun 21 05:46:05.116193 kubelet[2709]: E0621 05:46:05.115800 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:46:05.322422 kubelet[2709]: E0621 05:46:05.322363 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:46:05.705873 systemd-networkd[1461]: lxc_health: Gained IPv6LL Jun 21 05:46:05.834285 systemd-networkd[1461]: lxc30cbd6d408c3: Gained IPv6LL Jun 21 05:46:05.898047 
systemd-networkd[1461]: lxc797318905f57: Gained IPv6LL Jun 21 05:46:07.554421 containerd[1551]: time="2025-06-21T05:46:07.554331436Z" level=info msg="connecting to shim e3ced66aff53e1ac4229ec95056f29062eac94f19acf06c472d8b3a882f78a4e" address="unix:///run/containerd/s/5a6c13e41d96801571cad747d579c08430ac599fbe96af1b4d33aa6aa90c8938" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:46:07.562829 containerd[1551]: time="2025-06-21T05:46:07.562797155Z" level=info msg="connecting to shim d1102293fbef28382d1be62401561a398eb76acb51c33a2b49df3253398e751c" address="unix:///run/containerd/s/c72c2d66fb9e6a17793045ed20079abf3b08c2930d973fe71900a8bcc04f659b" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:46:07.597914 systemd[1]: Started cri-containerd-e3ced66aff53e1ac4229ec95056f29062eac94f19acf06c472d8b3a882f78a4e.scope - libcontainer container e3ced66aff53e1ac4229ec95056f29062eac94f19acf06c472d8b3a882f78a4e. Jun 21 05:46:07.613758 systemd[1]: Started cri-containerd-d1102293fbef28382d1be62401561a398eb76acb51c33a2b49df3253398e751c.scope - libcontainer container d1102293fbef28382d1be62401561a398eb76acb51c33a2b49df3253398e751c. 
Jun 21 05:46:07.677343 containerd[1551]: time="2025-06-21T05:46:07.677288968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4gnxf,Uid:f9d5f268-fc04-46aa-9534-89fb5500f263,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3ced66aff53e1ac4229ec95056f29062eac94f19acf06c472d8b3a882f78a4e\"" Jun 21 05:46:07.678711 kubelet[2709]: E0621 05:46:07.678691 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:46:07.685663 containerd[1551]: time="2025-06-21T05:46:07.685408739Z" level=info msg="CreateContainer within sandbox \"e3ced66aff53e1ac4229ec95056f29062eac94f19acf06c472d8b3a882f78a4e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 05:46:07.690307 containerd[1551]: time="2025-06-21T05:46:07.690280180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mmgrz,Uid:e279209b-8a28-4ff6-b309-995b452bfc76,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1102293fbef28382d1be62401561a398eb76acb51c33a2b49df3253398e751c\"" Jun 21 05:46:07.692865 kubelet[2709]: E0621 05:46:07.692488 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:46:07.695578 containerd[1551]: time="2025-06-21T05:46:07.695417452Z" level=info msg="CreateContainer within sandbox \"d1102293fbef28382d1be62401561a398eb76acb51c33a2b49df3253398e751c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 05:46:07.703249 containerd[1551]: time="2025-06-21T05:46:07.703122974Z" level=info msg="Container 5410ee402bbcd2474c62a2fc4ed51d9ee8551ef4f5e8aa90ebbd0caa7793f7f3: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:46:07.707122 containerd[1551]: time="2025-06-21T05:46:07.707101060Z" level=info msg="Container 
4df082d506275486b6487b2c4191fbc6befa81a624ba3a371d47d63589f2f9a0: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:46:07.708785 containerd[1551]: time="2025-06-21T05:46:07.708758103Z" level=info msg="CreateContainer within sandbox \"e3ced66aff53e1ac4229ec95056f29062eac94f19acf06c472d8b3a882f78a4e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5410ee402bbcd2474c62a2fc4ed51d9ee8551ef4f5e8aa90ebbd0caa7793f7f3\"" Jun 21 05:46:07.709679 containerd[1551]: time="2025-06-21T05:46:07.709229552Z" level=info msg="StartContainer for \"5410ee402bbcd2474c62a2fc4ed51d9ee8551ef4f5e8aa90ebbd0caa7793f7f3\"" Jun 21 05:46:07.710830 containerd[1551]: time="2025-06-21T05:46:07.710808066Z" level=info msg="connecting to shim 5410ee402bbcd2474c62a2fc4ed51d9ee8551ef4f5e8aa90ebbd0caa7793f7f3" address="unix:///run/containerd/s/5a6c13e41d96801571cad747d579c08430ac599fbe96af1b4d33aa6aa90c8938" protocol=ttrpc version=3 Jun 21 05:46:07.712531 containerd[1551]: time="2025-06-21T05:46:07.712500250Z" level=info msg="CreateContainer within sandbox \"d1102293fbef28382d1be62401561a398eb76acb51c33a2b49df3253398e751c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4df082d506275486b6487b2c4191fbc6befa81a624ba3a371d47d63589f2f9a0\"" Jun 21 05:46:07.713714 containerd[1551]: time="2025-06-21T05:46:07.713395947Z" level=info msg="StartContainer for \"4df082d506275486b6487b2c4191fbc6befa81a624ba3a371d47d63589f2f9a0\"" Jun 21 05:46:07.715439 containerd[1551]: time="2025-06-21T05:46:07.715274520Z" level=info msg="connecting to shim 4df082d506275486b6487b2c4191fbc6befa81a624ba3a371d47d63589f2f9a0" address="unix:///run/containerd/s/c72c2d66fb9e6a17793045ed20079abf3b08c2930d973fe71900a8bcc04f659b" protocol=ttrpc version=3 Jun 21 05:46:07.731786 systemd[1]: Started cri-containerd-5410ee402bbcd2474c62a2fc4ed51d9ee8551ef4f5e8aa90ebbd0caa7793f7f3.scope - libcontainer container 5410ee402bbcd2474c62a2fc4ed51d9ee8551ef4f5e8aa90ebbd0caa7793f7f3. 
Jun 21 05:46:07.735302 systemd[1]: Started cri-containerd-4df082d506275486b6487b2c4191fbc6befa81a624ba3a371d47d63589f2f9a0.scope - libcontainer container 4df082d506275486b6487b2c4191fbc6befa81a624ba3a371d47d63589f2f9a0. Jun 21 05:46:07.778301 containerd[1551]: time="2025-06-21T05:46:07.778245200Z" level=info msg="StartContainer for \"5410ee402bbcd2474c62a2fc4ed51d9ee8551ef4f5e8aa90ebbd0caa7793f7f3\" returns successfully" Jun 21 05:46:07.785272 containerd[1551]: time="2025-06-21T05:46:07.785248805Z" level=info msg="StartContainer for \"4df082d506275486b6487b2c4191fbc6befa81a624ba3a371d47d63589f2f9a0\" returns successfully" Jun 21 05:46:08.332571 kubelet[2709]: E0621 05:46:08.331620 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:46:08.335341 kubelet[2709]: E0621 05:46:08.334732 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:46:08.345440 kubelet[2709]: I0621 05:46:08.345015 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mmgrz" podStartSLOduration=45.345001454 podStartE2EDuration="45.345001454s" podCreationTimestamp="2025-06-21 05:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:46:08.343885727 +0000 UTC m=+51.262965157" watchObservedRunningTime="2025-06-21 05:46:08.345001454 +0000 UTC m=+51.264080894" Jun 21 05:46:08.366764 kubelet[2709]: I0621 05:46:08.366712 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4gnxf" podStartSLOduration=45.366693189 podStartE2EDuration="45.366693189s" podCreationTimestamp="2025-06-21 05:45:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:46:08.356804233 +0000 UTC m=+51.275883693" watchObservedRunningTime="2025-06-21 05:46:08.366693189 +0000 UTC m=+51.285772639" Jun 21 05:46:08.529313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3802931581.mount: Deactivated successfully. Jun 21 05:46:09.336879 kubelet[2709]: E0621 05:46:09.336382 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:46:09.336879 kubelet[2709]: E0621 05:46:09.336550 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:46:10.337946 kubelet[2709]: E0621 05:46:10.337903 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:46:10.338353 kubelet[2709]: E0621 05:46:10.338274 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:46:33.170760 kubelet[2709]: E0621 05:46:33.170226 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:46:35.294710 systemd[1]: Started sshd@7-172.233.208.28:22-194.165.16.165:57086.service - OpenSSH per-connection server daemon (194.165.16.165:57086). 
Jun 21 05:46:35.315053 sshd[4030]: banner exchange: Connection from 194.165.16.165 port 57086: invalid format
Jun 21 05:46:35.315661 systemd[1]: sshd@7-172.233.208.28:22-194.165.16.165:57086.service: Deactivated successfully.
Jun 21 05:46:35.572893 systemd[1]: Started sshd@8-172.233.208.28:22-194.165.16.165:59106.service - OpenSSH per-connection server daemon (194.165.16.165:59106).
Jun 21 05:46:35.677088 sshd[4034]: banner exchange: Connection from 194.165.16.165 port 59106: invalid format
Jun 21 05:46:35.678332 systemd[1]: sshd@8-172.233.208.28:22-194.165.16.165:59106.service: Deactivated successfully.
Jun 21 05:46:35.881848 systemd[1]: Started sshd@9-172.233.208.28:22-194.165.16.165:1344.service - OpenSSH per-connection server daemon (194.165.16.165:1344).
Jun 21 05:46:35.910017 sshd[4038]: banner exchange: Connection from 194.165.16.165 port 1344: invalid format
Jun 21 05:46:35.911136 systemd[1]: sshd@9-172.233.208.28:22-194.165.16.165:1344.service: Deactivated successfully.
Jun 21 05:46:36.170662 kubelet[2709]: E0621 05:46:36.170555 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:46:37.171371 kubelet[2709]: E0621 05:46:37.170927 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:46:44.170000 kubelet[2709]: E0621 05:46:44.169968 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:46:56.170503 kubelet[2709]: E0621 05:46:56.170448 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:47:13.170680 kubelet[2709]: E0621 05:47:13.170178 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:47:14.170701 kubelet[2709]: E0621 05:47:14.170511 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:47:18.170137 kubelet[2709]: E0621 05:47:18.170101 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:47:24.444241 systemd[1]: Started sshd@10-172.233.208.28:22-147.75.109.163:59958.service - OpenSSH per-connection server daemon (147.75.109.163:59958).
Jun 21 05:47:24.781022 sshd[4048]: Accepted publickey for core from 147.75.109.163 port 59958 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:47:24.782454 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:47:24.788079 systemd-logind[1512]: New session 8 of user core.
Jun 21 05:47:24.790776 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 21 05:47:25.105933 sshd[4050]: Connection closed by 147.75.109.163 port 59958
Jun 21 05:47:25.106935 sshd-session[4048]: pam_unix(sshd:session): session closed for user core
Jun 21 05:47:25.111025 systemd[1]: sshd@10-172.233.208.28:22-147.75.109.163:59958.service: Deactivated successfully.
Jun 21 05:47:25.113831 systemd[1]: session-8.scope: Deactivated successfully.
Jun 21 05:47:25.115279 systemd-logind[1512]: Session 8 logged out. Waiting for processes to exit.
Jun 21 05:47:25.116554 systemd-logind[1512]: Removed session 8.
Jun 21 05:47:30.173035 systemd[1]: Started sshd@11-172.233.208.28:22-147.75.109.163:44446.service - OpenSSH per-connection server daemon (147.75.109.163:44446).
Jun 21 05:47:30.512313 sshd[4065]: Accepted publickey for core from 147.75.109.163 port 44446 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:47:30.513617 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:47:30.518068 systemd-logind[1512]: New session 9 of user core.
Jun 21 05:47:30.524766 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 21 05:47:30.812472 sshd[4067]: Connection closed by 147.75.109.163 port 44446
Jun 21 05:47:30.813893 sshd-session[4065]: pam_unix(sshd:session): session closed for user core
Jun 21 05:47:30.817441 systemd[1]: sshd@11-172.233.208.28:22-147.75.109.163:44446.service: Deactivated successfully.
Jun 21 05:47:30.819581 systemd[1]: session-9.scope: Deactivated successfully.
Jun 21 05:47:30.820392 systemd-logind[1512]: Session 9 logged out. Waiting for processes to exit.
Jun 21 05:47:30.822178 systemd-logind[1512]: Removed session 9.
Jun 21 05:47:35.879862 systemd[1]: Started sshd@12-172.233.208.28:22-147.75.109.163:46834.service - OpenSSH per-connection server daemon (147.75.109.163:46834).
Jun 21 05:47:36.225331 sshd[4079]: Accepted publickey for core from 147.75.109.163 port 46834 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:47:36.227217 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:47:36.233006 systemd-logind[1512]: New session 10 of user core.
Jun 21 05:47:36.242781 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 21 05:47:36.534083 sshd[4081]: Connection closed by 147.75.109.163 port 46834
Jun 21 05:47:36.534855 sshd-session[4079]: pam_unix(sshd:session): session closed for user core
Jun 21 05:47:36.539541 systemd-logind[1512]: Session 10 logged out. Waiting for processes to exit.
Jun 21 05:47:36.540375 systemd[1]: sshd@12-172.233.208.28:22-147.75.109.163:46834.service: Deactivated successfully.
Jun 21 05:47:36.542609 systemd[1]: session-10.scope: Deactivated successfully.
Jun 21 05:47:36.544420 systemd-logind[1512]: Removed session 10.
Jun 21 05:47:36.595233 systemd[1]: Started sshd@13-172.233.208.28:22-147.75.109.163:46840.service - OpenSSH per-connection server daemon (147.75.109.163:46840).
Jun 21 05:47:36.936670 sshd[4094]: Accepted publickey for core from 147.75.109.163 port 46840 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:47:36.938212 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:47:36.943083 systemd-logind[1512]: New session 11 of user core.
Jun 21 05:47:36.955796 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 21 05:47:37.288363 sshd[4096]: Connection closed by 147.75.109.163 port 46840
Jun 21 05:47:37.288956 sshd-session[4094]: pam_unix(sshd:session): session closed for user core
Jun 21 05:47:37.295236 systemd[1]: sshd@13-172.233.208.28:22-147.75.109.163:46840.service: Deactivated successfully.
Jun 21 05:47:37.298398 systemd[1]: session-11.scope: Deactivated successfully.
Jun 21 05:47:37.299996 systemd-logind[1512]: Session 11 logged out. Waiting for processes to exit.
Jun 21 05:47:37.301686 systemd-logind[1512]: Removed session 11.
Jun 21 05:47:37.348827 systemd[1]: Started sshd@14-172.233.208.28:22-147.75.109.163:46856.service - OpenSSH per-connection server daemon (147.75.109.163:46856).
Jun 21 05:47:37.680259 sshd[4106]: Accepted publickey for core from 147.75.109.163 port 46856 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:47:37.681687 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:47:37.687157 systemd-logind[1512]: New session 12 of user core.
Jun 21 05:47:37.693773 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 21 05:47:37.985946 sshd[4108]: Connection closed by 147.75.109.163 port 46856
Jun 21 05:47:37.988256 sshd-session[4106]: pam_unix(sshd:session): session closed for user core
Jun 21 05:47:37.992977 systemd-logind[1512]: Session 12 logged out. Waiting for processes to exit.
Jun 21 05:47:37.994066 systemd[1]: sshd@14-172.233.208.28:22-147.75.109.163:46856.service: Deactivated successfully.
Jun 21 05:47:37.996479 systemd[1]: session-12.scope: Deactivated successfully.
Jun 21 05:47:37.998497 systemd-logind[1512]: Removed session 12.
Jun 21 05:47:42.170916 kubelet[2709]: E0621 05:47:42.170879 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:47:43.055398 systemd[1]: Started sshd@15-172.233.208.28:22-147.75.109.163:46866.service - OpenSSH per-connection server daemon (147.75.109.163:46866).
Jun 21 05:47:43.381963 sshd[4120]: Accepted publickey for core from 147.75.109.163 port 46866 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:47:43.383467 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:47:43.389012 systemd-logind[1512]: New session 13 of user core.
Jun 21 05:47:43.391752 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 21 05:47:43.677463 sshd[4122]: Connection closed by 147.75.109.163 port 46866
Jun 21 05:47:43.679001 sshd-session[4120]: pam_unix(sshd:session): session closed for user core
Jun 21 05:47:43.683191 systemd-logind[1512]: Session 13 logged out. Waiting for processes to exit.
Jun 21 05:47:43.683957 systemd[1]: sshd@15-172.233.208.28:22-147.75.109.163:46866.service: Deactivated successfully.
Jun 21 05:47:43.686284 systemd[1]: session-13.scope: Deactivated successfully.
Jun 21 05:47:43.688101 systemd-logind[1512]: Removed session 13.
Jun 21 05:47:48.739528 systemd[1]: Started sshd@16-172.233.208.28:22-147.75.109.163:55368.service - OpenSSH per-connection server daemon (147.75.109.163:55368).
Jun 21 05:47:49.081708 sshd[4134]: Accepted publickey for core from 147.75.109.163 port 55368 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:47:49.083327 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:47:49.088775 systemd-logind[1512]: New session 14 of user core.
Jun 21 05:47:49.095776 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 21 05:47:49.386084 sshd[4136]: Connection closed by 147.75.109.163 port 55368
Jun 21 05:47:49.387750 sshd-session[4134]: pam_unix(sshd:session): session closed for user core
Jun 21 05:47:49.391551 systemd-logind[1512]: Session 14 logged out. Waiting for processes to exit.
Jun 21 05:47:49.392382 systemd[1]: sshd@16-172.233.208.28:22-147.75.109.163:55368.service: Deactivated successfully.
Jun 21 05:47:49.395098 systemd[1]: session-14.scope: Deactivated successfully.
Jun 21 05:47:49.396460 systemd-logind[1512]: Removed session 14.
Jun 21 05:47:49.455829 systemd[1]: Started sshd@17-172.233.208.28:22-147.75.109.163:55384.service - OpenSSH per-connection server daemon (147.75.109.163:55384).
Jun 21 05:47:49.811522 sshd[4148]: Accepted publickey for core from 147.75.109.163 port 55384 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:47:49.812697 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:47:49.816705 systemd-logind[1512]: New session 15 of user core.
Jun 21 05:47:49.820779 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 21 05:47:50.132215 sshd[4150]: Connection closed by 147.75.109.163 port 55384
Jun 21 05:47:50.132829 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
Jun 21 05:47:50.136575 systemd-logind[1512]: Session 15 logged out. Waiting for processes to exit.
Jun 21 05:47:50.137410 systemd[1]: sshd@17-172.233.208.28:22-147.75.109.163:55384.service: Deactivated successfully.
Jun 21 05:47:50.139349 systemd[1]: session-15.scope: Deactivated successfully.
Jun 21 05:47:50.141236 systemd-logind[1512]: Removed session 15.
Jun 21 05:47:50.170802 kubelet[2709]: E0621 05:47:50.170780 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:47:50.197871 systemd[1]: Started sshd@18-172.233.208.28:22-147.75.109.163:55392.service - OpenSSH per-connection server daemon (147.75.109.163:55392).
Jun 21 05:47:50.541484 sshd[4160]: Accepted publickey for core from 147.75.109.163 port 55392 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:47:50.542968 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:47:50.548261 systemd-logind[1512]: New session 16 of user core.
Jun 21 05:47:50.552844 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 21 05:47:51.944996 sshd[4162]: Connection closed by 147.75.109.163 port 55392
Jun 21 05:47:51.946721 sshd-session[4160]: pam_unix(sshd:session): session closed for user core
Jun 21 05:47:51.951127 systemd[1]: sshd@18-172.233.208.28:22-147.75.109.163:55392.service: Deactivated successfully.
Jun 21 05:47:51.953561 systemd[1]: session-16.scope: Deactivated successfully.
Jun 21 05:47:51.954157 systemd[1]: session-16.scope: Consumed 434ms CPU time, 66.9M memory peak.
Jun 21 05:47:51.954890 systemd-logind[1512]: Session 16 logged out. Waiting for processes to exit.
Jun 21 05:47:51.956748 systemd-logind[1512]: Removed session 16.
Jun 21 05:47:52.006747 systemd[1]: Started sshd@19-172.233.208.28:22-147.75.109.163:55408.service - OpenSSH per-connection server daemon (147.75.109.163:55408).
Jun 21 05:47:52.344668 sshd[4179]: Accepted publickey for core from 147.75.109.163 port 55408 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:47:52.346481 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:47:52.350870 systemd-logind[1512]: New session 17 of user core.
Jun 21 05:47:52.357778 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 21 05:47:52.739238 sshd[4181]: Connection closed by 147.75.109.163 port 55408
Jun 21 05:47:52.740807 sshd-session[4179]: pam_unix(sshd:session): session closed for user core
Jun 21 05:47:52.745121 systemd-logind[1512]: Session 17 logged out. Waiting for processes to exit.
Jun 21 05:47:52.745921 systemd[1]: sshd@19-172.233.208.28:22-147.75.109.163:55408.service: Deactivated successfully.
Jun 21 05:47:52.751049 systemd[1]: session-17.scope: Deactivated successfully.
Jun 21 05:47:52.753348 systemd-logind[1512]: Removed session 17.
Jun 21 05:47:52.798783 systemd[1]: Started sshd@20-172.233.208.28:22-147.75.109.163:55422.service - OpenSSH per-connection server daemon (147.75.109.163:55422).
Jun 21 05:47:53.137688 sshd[4191]: Accepted publickey for core from 147.75.109.163 port 55422 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:47:53.139006 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:47:53.143922 systemd-logind[1512]: New session 18 of user core.
Jun 21 05:47:53.148781 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 21 05:47:53.430886 sshd[4193]: Connection closed by 147.75.109.163 port 55422
Jun 21 05:47:53.431170 sshd-session[4191]: pam_unix(sshd:session): session closed for user core
Jun 21 05:47:53.436249 systemd[1]: sshd@20-172.233.208.28:22-147.75.109.163:55422.service: Deactivated successfully.
Jun 21 05:47:53.436531 systemd-logind[1512]: Session 18 logged out. Waiting for processes to exit.
Jun 21 05:47:53.438621 systemd[1]: session-18.scope: Deactivated successfully.
Jun 21 05:47:53.440636 systemd-logind[1512]: Removed session 18.
Jun 21 05:47:58.499487 systemd[1]: Started sshd@21-172.233.208.28:22-147.75.109.163:40618.service - OpenSSH per-connection server daemon (147.75.109.163:40618).
Jun 21 05:47:58.848692 sshd[4210]: Accepted publickey for core from 147.75.109.163 port 40618 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:47:58.849786 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:47:58.854885 systemd-logind[1512]: New session 19 of user core.
Jun 21 05:47:58.861774 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 21 05:47:59.152787 sshd[4212]: Connection closed by 147.75.109.163 port 40618
Jun 21 05:47:59.153444 sshd-session[4210]: pam_unix(sshd:session): session closed for user core
Jun 21 05:47:59.156930 systemd[1]: sshd@21-172.233.208.28:22-147.75.109.163:40618.service: Deactivated successfully.
Jun 21 05:47:59.159962 systemd[1]: session-19.scope: Deactivated successfully.
Jun 21 05:47:59.161362 systemd-logind[1512]: Session 19 logged out. Waiting for processes to exit.
Jun 21 05:47:59.163808 systemd-logind[1512]: Removed session 19.
Jun 21 05:48:04.215871 systemd[1]: Started sshd@22-172.233.208.28:22-147.75.109.163:40626.service - OpenSSH per-connection server daemon (147.75.109.163:40626).
Jun 21 05:48:04.555776 sshd[4224]: Accepted publickey for core from 147.75.109.163 port 40626 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:48:04.556797 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:48:04.561483 systemd-logind[1512]: New session 20 of user core.
Jun 21 05:48:04.572790 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 21 05:48:04.853519 sshd[4226]: Connection closed by 147.75.109.163 port 40626
Jun 21 05:48:04.854833 sshd-session[4224]: pam_unix(sshd:session): session closed for user core
Jun 21 05:48:04.859126 systemd[1]: sshd@22-172.233.208.28:22-147.75.109.163:40626.service: Deactivated successfully.
Jun 21 05:48:04.861332 systemd[1]: session-20.scope: Deactivated successfully.
Jun 21 05:48:04.862147 systemd-logind[1512]: Session 20 logged out. Waiting for processes to exit.
Jun 21 05:48:04.863882 systemd-logind[1512]: Removed session 20.
Jun 21 05:48:06.169933 kubelet[2709]: E0621 05:48:06.169895 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:09.913579 systemd[1]: Started sshd@23-172.233.208.28:22-147.75.109.163:37936.service - OpenSSH per-connection server daemon (147.75.109.163:37936).
Jun 21 05:48:10.241604 sshd[4238]: Accepted publickey for core from 147.75.109.163 port 37936 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:48:10.242793 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:48:10.247324 systemd-logind[1512]: New session 21 of user core.
Jun 21 05:48:10.253759 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 21 05:48:10.531493 sshd[4240]: Connection closed by 147.75.109.163 port 37936
Jun 21 05:48:10.532258 sshd-session[4238]: pam_unix(sshd:session): session closed for user core
Jun 21 05:48:10.536042 systemd[1]: sshd@23-172.233.208.28:22-147.75.109.163:37936.service: Deactivated successfully.
Jun 21 05:48:10.538862 systemd[1]: session-21.scope: Deactivated successfully.
Jun 21 05:48:10.540539 systemd-logind[1512]: Session 21 logged out. Waiting for processes to exit.
Jun 21 05:48:10.542795 systemd-logind[1512]: Removed session 21.
Jun 21 05:48:10.595279 systemd[1]: Started sshd@24-172.233.208.28:22-147.75.109.163:37944.service - OpenSSH per-connection server daemon (147.75.109.163:37944).
Jun 21 05:48:10.931324 sshd[4252]: Accepted publickey for core from 147.75.109.163 port 37944 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:48:10.933071 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:48:10.938355 systemd-logind[1512]: New session 22 of user core.
Jun 21 05:48:10.945775 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 21 05:48:11.171454 kubelet[2709]: E0621 05:48:11.171140 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:12.170686 kubelet[2709]: E0621 05:48:12.170520 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:12.403674 containerd[1551]: time="2025-06-21T05:48:12.403548480Z" level=info msg="StopContainer for \"44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c\" with timeout 30 (s)"
Jun 21 05:48:12.405458 containerd[1551]: time="2025-06-21T05:48:12.405413435Z" level=info msg="Stop container \"44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c\" with signal terminated"
Jun 21 05:48:12.423205 systemd[1]: cri-containerd-44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c.scope: Deactivated successfully.
Jun 21 05:48:12.426382 containerd[1551]: time="2025-06-21T05:48:12.426309095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c\" id:\"44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c\" pid:2905 exited_at:{seconds:1750484892 nanos:425624290}"
Jun 21 05:48:12.426740 containerd[1551]: time="2025-06-21T05:48:12.426571443Z" level=info msg="received exit event container_id:\"44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c\" id:\"44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c\" pid:2905 exited_at:{seconds:1750484892 nanos:425624290}"
Jun 21 05:48:12.435772 containerd[1551]: time="2025-06-21T05:48:12.435719528Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 21 05:48:12.441826 containerd[1551]: time="2025-06-21T05:48:12.441780229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\" id:\"18938d0d06c7024324165caed1df7294363a66fc963822c398c345019e399715\" pid:4281 exited_at:{seconds:1750484892 nanos:441397903}"
Jun 21 05:48:12.445904 containerd[1551]: time="2025-06-21T05:48:12.445835767Z" level=info msg="StopContainer for \"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\" with timeout 2 (s)"
Jun 21 05:48:12.446567 containerd[1551]: time="2025-06-21T05:48:12.446509751Z" level=info msg="Stop container \"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\" with signal terminated"
Jun 21 05:48:12.454390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c-rootfs.mount: Deactivated successfully.
Jun 21 05:48:12.460404 systemd-networkd[1461]: lxc_health: Link DOWN
Jun 21 05:48:12.460413 systemd-networkd[1461]: lxc_health: Lost carrier
Jun 21 05:48:12.475120 containerd[1551]: time="2025-06-21T05:48:12.475082710Z" level=info msg="StopContainer for \"44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c\" returns successfully"
Jun 21 05:48:12.475825 containerd[1551]: time="2025-06-21T05:48:12.475794334Z" level=info msg="StopPodSandbox for \"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\""
Jun 21 05:48:12.475867 containerd[1551]: time="2025-06-21T05:48:12.475845733Z" level=info msg="Container to stop \"44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 05:48:12.477150 systemd[1]: cri-containerd-7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477.scope: Deactivated successfully.
Jun 21 05:48:12.477945 systemd[1]: cri-containerd-7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477.scope: Consumed 5.912s CPU time, 129.6M memory peak, 128K read from disk, 14.3M written to disk.
Jun 21 05:48:12.479627 containerd[1551]: time="2025-06-21T05:48:12.479579653Z" level=info msg="received exit event container_id:\"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\" id:\"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\" pid:3355 exited_at:{seconds:1750484892 nanos:479373125}"
Jun 21 05:48:12.480400 containerd[1551]: time="2025-06-21T05:48:12.480363637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\" id:\"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\" pid:3355 exited_at:{seconds:1750484892 nanos:479373125}"
Jun 21 05:48:12.486846 systemd[1]: cri-containerd-778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d.scope: Deactivated successfully.
Jun 21 05:48:12.490360 containerd[1551]: time="2025-06-21T05:48:12.490318576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\" id:\"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\" pid:2819 exit_status:137 exited_at:{seconds:1750484892 nanos:490094288}"
Jun 21 05:48:12.512010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477-rootfs.mount: Deactivated successfully.
Jun 21 05:48:12.521462 containerd[1551]: time="2025-06-21T05:48:12.521387495Z" level=info msg="StopContainer for \"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\" returns successfully"
Jun 21 05:48:12.522840 containerd[1551]: time="2025-06-21T05:48:12.522817723Z" level=info msg="StopPodSandbox for \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\""
Jun 21 05:48:12.522890 containerd[1551]: time="2025-06-21T05:48:12.522864702Z" level=info msg="Container to stop \"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 05:48:12.522890 containerd[1551]: time="2025-06-21T05:48:12.522876112Z" level=info msg="Container to stop \"fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 05:48:12.522890 containerd[1551]: time="2025-06-21T05:48:12.522883642Z" level=info msg="Container to stop \"0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 05:48:12.522890 containerd[1551]: time="2025-06-21T05:48:12.522897962Z" level=info msg="Container to stop \"3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 05:48:12.523093 containerd[1551]: time="2025-06-21T05:48:12.522906802Z" level=info msg="Container to stop \"d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 05:48:12.535231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d-rootfs.mount: Deactivated successfully.
Jun 21 05:48:12.536343 systemd[1]: cri-containerd-34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f.scope: Deactivated successfully.
Jun 21 05:48:12.539443 containerd[1551]: time="2025-06-21T05:48:12.539016822Z" level=info msg="shim disconnected" id=778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d namespace=k8s.io
Jun 21 05:48:12.539443 containerd[1551]: time="2025-06-21T05:48:12.539038361Z" level=warning msg="cleaning up after shim disconnected" id=778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d namespace=k8s.io
Jun 21 05:48:12.539443 containerd[1551]: time="2025-06-21T05:48:12.539071531Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 21 05:48:12.563972 containerd[1551]: time="2025-06-21T05:48:12.563390084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" id:\"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" pid:2865 exit_status:137 exited_at:{seconds:1750484892 nanos:540024173}"
Jun 21 05:48:12.563972 containerd[1551]: time="2025-06-21T05:48:12.563541972Z" level=info msg="received exit event sandbox_id:\"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\" exit_status:137 exited_at:{seconds:1750484892 nanos:490094288}"
Jun 21 05:48:12.563972 containerd[1551]: time="2025-06-21T05:48:12.563900509Z" level=info msg="TearDown network for sandbox \"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\" successfully"
Jun 21 05:48:12.563972 containerd[1551]: time="2025-06-21T05:48:12.563914249Z" level=info msg="StopPodSandbox for \"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\" returns successfully"
Jun 21 05:48:12.565366 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d-shm.mount: Deactivated successfully.
Jun 21 05:48:12.583673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f-rootfs.mount: Deactivated successfully.
Jun 21 05:48:12.589440 containerd[1551]: time="2025-06-21T05:48:12.589390143Z" level=info msg="shim disconnected" id=34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f namespace=k8s.io
Jun 21 05:48:12.589440 containerd[1551]: time="2025-06-21T05:48:12.589415153Z" level=warning msg="cleaning up after shim disconnected" id=34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f namespace=k8s.io
Jun 21 05:48:12.589440 containerd[1551]: time="2025-06-21T05:48:12.589423233Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 21 05:48:12.590855 containerd[1551]: time="2025-06-21T05:48:12.589614091Z" level=info msg="received exit event sandbox_id:\"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" exit_status:137 exited_at:{seconds:1750484892 nanos:540024173}"
Jun 21 05:48:12.591002 containerd[1551]: time="2025-06-21T05:48:12.590949981Z" level=info msg="TearDown network for sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" successfully"
Jun 21 05:48:12.591002 containerd[1551]: time="2025-06-21T05:48:12.590997580Z" level=info msg="StopPodSandbox for \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" returns successfully"
Jun 21 05:48:12.668048 kubelet[2709]: I0621 05:48:12.668020 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-cilium-run\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.668429 kubelet[2709]: I0621 05:48:12.668060 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7pcv\" (UniqueName: \"kubernetes.io/projected/6c51f834-1773-4b0c-a525-1051d089db39-kube-api-access-k7pcv\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.668429 kubelet[2709]: I0621 05:48:12.668077 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-etc-cni-netd\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.668429 kubelet[2709]: I0621 05:48:12.668096 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c51f834-1773-4b0c-a525-1051d089db39-clustermesh-secrets\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.668429 kubelet[2709]: I0621 05:48:12.668113 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c51f834-1773-4b0c-a525-1051d089db39-hubble-tls\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.668429 kubelet[2709]: I0621 05:48:12.668131 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-host-proc-sys-net\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.668429 kubelet[2709]: I0621 05:48:12.668148 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-hostproc\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.668576 kubelet[2709]: I0621 05:48:12.668163 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-lib-modules\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.668576 kubelet[2709]: I0621 05:48:12.668178 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-cni-path\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.668576 kubelet[2709]: I0621 05:48:12.668194 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c51f834-1773-4b0c-a525-1051d089db39-cilium-config-path\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.668576 kubelet[2709]: I0621 05:48:12.668208 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-xtables-lock\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.668576 kubelet[2709]: I0621 05:48:12.668224 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhhdv\" (UniqueName: \"kubernetes.io/projected/fa74cb8f-a808-4c37-a58a-333bfb084655-kube-api-access-bhhdv\") pod \"fa74cb8f-a808-4c37-a58a-333bfb084655\" (UID: \"fa74cb8f-a808-4c37-a58a-333bfb084655\") "
Jun 21 05:48:12.668576 kubelet[2709]: I0621 05:48:12.668237 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-cilium-cgroup\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.669828 kubelet[2709]: I0621 05:48:12.668250 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-host-proc-sys-kernel\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.669828 kubelet[2709]: I0621 05:48:12.668264 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa74cb8f-a808-4c37-a58a-333bfb084655-cilium-config-path\") pod \"fa74cb8f-a808-4c37-a58a-333bfb084655\" (UID: \"fa74cb8f-a808-4c37-a58a-333bfb084655\") "
Jun 21 05:48:12.669828 kubelet[2709]: I0621 05:48:12.668279 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-bpf-maps\") pod \"6c51f834-1773-4b0c-a525-1051d089db39\" (UID: \"6c51f834-1773-4b0c-a525-1051d089db39\") "
Jun 21 05:48:12.669828 kubelet[2709]: I0621 05:48:12.668334 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 05:48:12.669828 kubelet[2709]: I0621 05:48:12.668364 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 05:48:12.669941 kubelet[2709]: I0621 05:48:12.668629 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-cni-path" (OuterVolumeSpecName: "cni-path") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 05:48:12.669941 kubelet[2709]: I0621 05:48:12.668684 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 05:48:12.671187 kubelet[2709]: I0621 05:48:12.671147 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c51f834-1773-4b0c-a525-1051d089db39-kube-api-access-k7pcv" (OuterVolumeSpecName: "kube-api-access-k7pcv") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "kube-api-access-k7pcv".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 21 05:48:12.672196 kubelet[2709]: I0621 05:48:12.672175 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c51f834-1773-4b0c-a525-1051d089db39-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 21 05:48:12.674217 kubelet[2709]: I0621 05:48:12.674184 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c51f834-1773-4b0c-a525-1051d089db39-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 21 05:48:12.674260 kubelet[2709]: I0621 05:48:12.674226 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:48:12.675239 kubelet[2709]: I0621 05:48:12.675174 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c51f834-1773-4b0c-a525-1051d089db39-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 21 05:48:12.675624 kubelet[2709]: I0621 05:48:12.675363 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:48:12.675971 kubelet[2709]: I0621 05:48:12.675928 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-hostproc" (OuterVolumeSpecName: "hostproc") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:48:12.676248 kubelet[2709]: I0621 05:48:12.676196 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:48:12.676324 kubelet[2709]: I0621 05:48:12.676308 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:48:12.676387 kubelet[2709]: I0621 05:48:12.676375 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6c51f834-1773-4b0c-a525-1051d089db39" (UID: "6c51f834-1773-4b0c-a525-1051d089db39"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 05:48:12.678071 kubelet[2709]: I0621 05:48:12.678040 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa74cb8f-a808-4c37-a58a-333bfb084655-kube-api-access-bhhdv" (OuterVolumeSpecName: "kube-api-access-bhhdv") pod "fa74cb8f-a808-4c37-a58a-333bfb084655" (UID: "fa74cb8f-a808-4c37-a58a-333bfb084655"). InnerVolumeSpecName "kube-api-access-bhhdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 21 05:48:12.679455 kubelet[2709]: I0621 05:48:12.679414 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa74cb8f-a808-4c37-a58a-333bfb084655-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fa74cb8f-a808-4c37-a58a-333bfb084655" (UID: "fa74cb8f-a808-4c37-a58a-333bfb084655"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 21 05:48:12.768472 kubelet[2709]: I0621 05:48:12.768434 2709 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-cni-path\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768472 kubelet[2709]: I0621 05:48:12.768463 2709 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c51f834-1773-4b0c-a525-1051d089db39-cilium-config-path\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768472 kubelet[2709]: I0621 05:48:12.768474 2709 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-xtables-lock\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768472 kubelet[2709]: I0621 05:48:12.768484 2709 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhhdv\" (UniqueName: \"kubernetes.io/projected/fa74cb8f-a808-4c37-a58a-333bfb084655-kube-api-access-bhhdv\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768735 kubelet[2709]: I0621 05:48:12.768495 2709 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-bpf-maps\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768735 kubelet[2709]: I0621 05:48:12.768504 2709 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-cilium-cgroup\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768735 kubelet[2709]: I0621 05:48:12.768513 2709 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-host-proc-sys-kernel\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 
05:48:12.768735 kubelet[2709]: I0621 05:48:12.768522 2709 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa74cb8f-a808-4c37-a58a-333bfb084655-cilium-config-path\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768735 kubelet[2709]: I0621 05:48:12.768533 2709 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-cilium-run\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768735 kubelet[2709]: I0621 05:48:12.768541 2709 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-etc-cni-netd\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768735 kubelet[2709]: I0621 05:48:12.768550 2709 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7pcv\" (UniqueName: \"kubernetes.io/projected/6c51f834-1773-4b0c-a525-1051d089db39-kube-api-access-k7pcv\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768735 kubelet[2709]: I0621 05:48:12.768559 2709 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-host-proc-sys-net\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768918 kubelet[2709]: I0621 05:48:12.768567 2709 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-hostproc\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768918 kubelet[2709]: I0621 05:48:12.768575 2709 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c51f834-1773-4b0c-a525-1051d089db39-clustermesh-secrets\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768918 kubelet[2709]: I0621 05:48:12.768584 
2709 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c51f834-1773-4b0c-a525-1051d089db39-hubble-tls\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:12.768918 kubelet[2709]: I0621 05:48:12.768593 2709 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c51f834-1773-4b0c-a525-1051d089db39-lib-modules\") on node \"172-233-208-28\" DevicePath \"\"" Jun 21 05:48:13.178707 systemd[1]: Removed slice kubepods-besteffort-podfa74cb8f_a808_4c37_a58a_333bfb084655.slice - libcontainer container kubepods-besteffort-podfa74cb8f_a808_4c37_a58a_333bfb084655.slice. Jun 21 05:48:13.180386 systemd[1]: Removed slice kubepods-burstable-pod6c51f834_1773_4b0c_a525_1051d089db39.slice - libcontainer container kubepods-burstable-pod6c51f834_1773_4b0c_a525_1051d089db39.slice. Jun 21 05:48:13.180701 systemd[1]: kubepods-burstable-pod6c51f834_1773_4b0c_a525_1051d089db39.slice: Consumed 6.006s CPU time, 130M memory peak, 128K read from disk, 14.5M written to disk. Jun 21 05:48:13.454175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f-shm.mount: Deactivated successfully. Jun 21 05:48:13.454284 systemd[1]: var-lib-kubelet-pods-6c51f834\x2d1773\x2d4b0c\x2da525\x2d1051d089db39-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 21 05:48:13.454362 systemd[1]: var-lib-kubelet-pods-6c51f834\x2d1773\x2d4b0c\x2da525\x2d1051d089db39-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 21 05:48:13.454442 systemd[1]: var-lib-kubelet-pods-6c51f834\x2d1773\x2d4b0c\x2da525\x2d1051d089db39-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk7pcv.mount: Deactivated successfully. 
Jun 21 05:48:13.454509 systemd[1]: var-lib-kubelet-pods-fa74cb8f\x2da808\x2d4c37\x2da58a\x2d333bfb084655-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbhhdv.mount: Deactivated successfully. Jun 21 05:48:13.565204 kubelet[2709]: I0621 05:48:13.565169 2709 scope.go:117] "RemoveContainer" containerID="44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c" Jun 21 05:48:13.568245 containerd[1551]: time="2025-06-21T05:48:13.567380262Z" level=info msg="RemoveContainer for \"44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c\"" Jun 21 05:48:13.575301 containerd[1551]: time="2025-06-21T05:48:13.574823943Z" level=info msg="RemoveContainer for \"44d5e531fba151898c66a79e417598405b0e6eb7d929026fdcc0114dfdb4f15c\" returns successfully" Jun 21 05:48:13.575705 kubelet[2709]: I0621 05:48:13.575603 2709 scope.go:117] "RemoveContainer" containerID="7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477" Jun 21 05:48:13.578929 containerd[1551]: time="2025-06-21T05:48:13.578895840Z" level=info msg="RemoveContainer for \"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\"" Jun 21 05:48:13.585049 containerd[1551]: time="2025-06-21T05:48:13.585009781Z" level=info msg="RemoveContainer for \"7647a1707825c6ca635a1dd2f21ec09958ca0c8c7200335a994552f91c98d477\" returns successfully" Jun 21 05:48:13.585257 kubelet[2709]: I0621 05:48:13.585237 2709 scope.go:117] "RemoveContainer" containerID="d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c" Jun 21 05:48:13.590667 containerd[1551]: time="2025-06-21T05:48:13.590587626Z" level=info msg="RemoveContainer for \"d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c\"" Jun 21 05:48:13.596526 containerd[1551]: time="2025-06-21T05:48:13.596504178Z" level=info msg="RemoveContainer for \"d72b0c908e4ac9c63c3a404edf009787d3de95fcaa530a5b20874099b644c63c\" returns successfully" Jun 21 05:48:13.596855 kubelet[2709]: I0621 05:48:13.596784 2709 scope.go:117] 
"RemoveContainer" containerID="fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5" Jun 21 05:48:13.598559 containerd[1551]: time="2025-06-21T05:48:13.598531032Z" level=info msg="RemoveContainer for \"fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5\"" Jun 21 05:48:13.601388 containerd[1551]: time="2025-06-21T05:48:13.601362738Z" level=info msg="RemoveContainer for \"fbed34cfbd3030df790362ffbf03cea80406e5dbaa7d5716734fac8407df8bb5\" returns successfully" Jun 21 05:48:13.601499 kubelet[2709]: I0621 05:48:13.601471 2709 scope.go:117] "RemoveContainer" containerID="3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f" Jun 21 05:48:13.602693 containerd[1551]: time="2025-06-21T05:48:13.602673908Z" level=info msg="RemoveContainer for \"3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f\"" Jun 21 05:48:13.605316 containerd[1551]: time="2025-06-21T05:48:13.605285897Z" level=info msg="RemoveContainer for \"3f6b4c88d2c9a4a369f2e301f286db3e8df6eab330316cd448c86f7ed267209f\" returns successfully" Jun 21 05:48:13.605469 kubelet[2709]: I0621 05:48:13.605418 2709 scope.go:117] "RemoveContainer" containerID="0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69" Jun 21 05:48:13.607062 containerd[1551]: time="2025-06-21T05:48:13.606714926Z" level=info msg="RemoveContainer for \"0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69\"" Jun 21 05:48:13.608975 containerd[1551]: time="2025-06-21T05:48:13.608953668Z" level=info msg="RemoveContainer for \"0db22cef1db544ea7654bf8c7ff1e9306ddeebb0631f766af19d9341b7473e69\" returns successfully" Jun 21 05:48:14.415355 sshd[4254]: Connection closed by 147.75.109.163 port 37944 Jun 21 05:48:14.415933 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Jun 21 05:48:14.419901 systemd-logind[1512]: Session 22 logged out. Waiting for processes to exit. 
Jun 21 05:48:14.420092 systemd[1]: sshd@24-172.233.208.28:22-147.75.109.163:37944.service: Deactivated successfully. Jun 21 05:48:14.422214 systemd[1]: session-22.scope: Deactivated successfully. Jun 21 05:48:14.424433 systemd-logind[1512]: Removed session 22. Jun 21 05:48:14.452501 containerd[1551]: time="2025-06-21T05:48:14.452458135Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1750484892 nanos:490094288}" Jun 21 05:48:14.482756 systemd[1]: Started sshd@25-172.233.208.28:22-147.75.109.163:37950.service - OpenSSH per-connection server daemon (147.75.109.163:37950). Jun 21 05:48:14.826215 sshd[4405]: Accepted publickey for core from 147.75.109.163 port 37950 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw Jun 21 05:48:14.828516 sshd-session[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:48:14.835711 systemd-logind[1512]: New session 23 of user core. Jun 21 05:48:14.840760 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jun 21 05:48:15.174670 kubelet[2709]: I0621 05:48:15.173925 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c51f834-1773-4b0c-a525-1051d089db39" path="/var/lib/kubelet/pods/6c51f834-1773-4b0c-a525-1051d089db39/volumes" Jun 21 05:48:15.174670 kubelet[2709]: I0621 05:48:15.174621 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa74cb8f-a808-4c37-a58a-333bfb084655" path="/var/lib/kubelet/pods/fa74cb8f-a808-4c37-a58a-333bfb084655/volumes" Jun 21 05:48:15.522215 kubelet[2709]: E0621 05:48:15.521489 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c51f834-1773-4b0c-a525-1051d089db39" containerName="apply-sysctl-overwrites" Jun 21 05:48:15.522356 kubelet[2709]: E0621 05:48:15.522329 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c51f834-1773-4b0c-a525-1051d089db39" containerName="mount-bpf-fs" Jun 21 05:48:15.522494 kubelet[2709]: E0621 05:48:15.522463 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c51f834-1773-4b0c-a525-1051d089db39" containerName="clean-cilium-state" Jun 21 05:48:15.522674 kubelet[2709]: E0621 05:48:15.522661 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c51f834-1773-4b0c-a525-1051d089db39" containerName="mount-cgroup" Jun 21 05:48:15.522743 kubelet[2709]: E0621 05:48:15.522731 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c51f834-1773-4b0c-a525-1051d089db39" containerName="cilium-agent" Jun 21 05:48:15.522917 kubelet[2709]: E0621 05:48:15.522797 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fa74cb8f-a808-4c37-a58a-333bfb084655" containerName="cilium-operator" Jun 21 05:48:15.523138 kubelet[2709]: I0621 05:48:15.522846 2709 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa74cb8f-a808-4c37-a58a-333bfb084655" containerName="cilium-operator" Jun 21 05:48:15.523138 kubelet[2709]: I0621 05:48:15.522987 2709 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6c51f834-1773-4b0c-a525-1051d089db39" containerName="cilium-agent" Jun 21 05:48:15.534013 systemd[1]: Created slice kubepods-burstable-pod45067189_9ced_4f4d_8fbf_142ee070544b.slice - libcontainer container kubepods-burstable-pod45067189_9ced_4f4d_8fbf_142ee070544b.slice. Jun 21 05:48:15.561466 sshd[4407]: Connection closed by 147.75.109.163 port 37950 Jun 21 05:48:15.563372 sshd-session[4405]: pam_unix(sshd:session): session closed for user core Jun 21 05:48:15.567704 systemd[1]: sshd@25-172.233.208.28:22-147.75.109.163:37950.service: Deactivated successfully. Jun 21 05:48:15.570201 systemd[1]: session-23.scope: Deactivated successfully. Jun 21 05:48:15.572705 systemd-logind[1512]: Session 23 logged out. Waiting for processes to exit. Jun 21 05:48:15.574898 systemd-logind[1512]: Removed session 23. Jun 21 05:48:15.581658 kubelet[2709]: I0621 05:48:15.581608 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45067189-9ced-4f4d-8fbf-142ee070544b-cni-path\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581726 kubelet[2709]: I0621 05:48:15.581639 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45067189-9ced-4f4d-8fbf-142ee070544b-host-proc-sys-kernel\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581726 kubelet[2709]: I0621 05:48:15.581682 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45067189-9ced-4f4d-8fbf-142ee070544b-hubble-tls\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581726 
kubelet[2709]: I0621 05:48:15.581695 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45067189-9ced-4f4d-8fbf-142ee070544b-hostproc\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581726 kubelet[2709]: I0621 05:48:15.581706 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45067189-9ced-4f4d-8fbf-142ee070544b-etc-cni-netd\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581726 kubelet[2709]: I0621 05:48:15.581718 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45067189-9ced-4f4d-8fbf-142ee070544b-clustermesh-secrets\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581844 kubelet[2709]: I0621 05:48:15.581731 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45067189-9ced-4f4d-8fbf-142ee070544b-lib-modules\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581844 kubelet[2709]: I0621 05:48:15.581744 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45067189-9ced-4f4d-8fbf-142ee070544b-xtables-lock\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581844 kubelet[2709]: I0621 05:48:15.581756 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45067189-9ced-4f4d-8fbf-142ee070544b-cilium-cgroup\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581844 kubelet[2709]: I0621 05:48:15.581767 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45067189-9ced-4f4d-8fbf-142ee070544b-cilium-config-path\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581844 kubelet[2709]: I0621 05:48:15.581779 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/45067189-9ced-4f4d-8fbf-142ee070544b-cilium-ipsec-secrets\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581944 kubelet[2709]: I0621 05:48:15.581792 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jknjf\" (UniqueName: \"kubernetes.io/projected/45067189-9ced-4f4d-8fbf-142ee070544b-kube-api-access-jknjf\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581944 kubelet[2709]: I0621 05:48:15.581806 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45067189-9ced-4f4d-8fbf-142ee070544b-host-proc-sys-net\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581944 kubelet[2709]: I0621 05:48:15.581819 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/45067189-9ced-4f4d-8fbf-142ee070544b-cilium-run\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.581944 kubelet[2709]: I0621 05:48:15.581831 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45067189-9ced-4f4d-8fbf-142ee070544b-bpf-maps\") pod \"cilium-q9kvh\" (UID: \"45067189-9ced-4f4d-8fbf-142ee070544b\") " pod="kube-system/cilium-q9kvh" Jun 21 05:48:15.627117 systemd[1]: Started sshd@26-172.233.208.28:22-147.75.109.163:37960.service - OpenSSH per-connection server daemon (147.75.109.163:37960). Jun 21 05:48:15.840578 kubelet[2709]: E0621 05:48:15.840486 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jun 21 05:48:15.841854 containerd[1551]: time="2025-06-21T05:48:15.841819000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q9kvh,Uid:45067189-9ced-4f4d-8fbf-142ee070544b,Namespace:kube-system,Attempt:0,}" Jun 21 05:48:15.859664 containerd[1551]: time="2025-06-21T05:48:15.859618709Z" level=info msg="connecting to shim 59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163" address="unix:///run/containerd/s/26f769b51f4bcd77064645df6c020448dd61b445501072a5e5c90ce4bd692426" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:48:15.884776 systemd[1]: Started cri-containerd-59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163.scope - libcontainer container 59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163. 
Jun 21 05:48:15.906084 containerd[1551]: time="2025-06-21T05:48:15.906054739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q9kvh,Uid:45067189-9ced-4f4d-8fbf-142ee070544b,Namespace:kube-system,Attempt:0,} returns sandbox id \"59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163\""
Jun 21 05:48:15.906768 kubelet[2709]: E0621 05:48:15.906736 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:15.908489 containerd[1551]: time="2025-06-21T05:48:15.908471140Z" level=info msg="CreateContainer within sandbox \"59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 21 05:48:15.913286 containerd[1551]: time="2025-06-21T05:48:15.913267172Z" level=info msg="Container 72d5d003eb50200f7437847bfea29ae3f310d7087f76f24ddb4eeb73eedf21f6: CDI devices from CRI Config.CDIDevices: []"
Jun 21 05:48:15.917325 containerd[1551]: time="2025-06-21T05:48:15.917291910Z" level=info msg="CreateContainer within sandbox \"59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"72d5d003eb50200f7437847bfea29ae3f310d7087f76f24ddb4eeb73eedf21f6\""
Jun 21 05:48:15.917719 containerd[1551]: time="2025-06-21T05:48:15.917694697Z" level=info msg="StartContainer for \"72d5d003eb50200f7437847bfea29ae3f310d7087f76f24ddb4eeb73eedf21f6\""
Jun 21 05:48:15.918626 containerd[1551]: time="2025-06-21T05:48:15.918601300Z" level=info msg="connecting to shim 72d5d003eb50200f7437847bfea29ae3f310d7087f76f24ddb4eeb73eedf21f6" address="unix:///run/containerd/s/26f769b51f4bcd77064645df6c020448dd61b445501072a5e5c90ce4bd692426" protocol=ttrpc version=3
Jun 21 05:48:15.937774 systemd[1]: Started cri-containerd-72d5d003eb50200f7437847bfea29ae3f310d7087f76f24ddb4eeb73eedf21f6.scope - libcontainer container 72d5d003eb50200f7437847bfea29ae3f310d7087f76f24ddb4eeb73eedf21f6.
Jun 21 05:48:15.971571 containerd[1551]: time="2025-06-21T05:48:15.971369709Z" level=info msg="StartContainer for \"72d5d003eb50200f7437847bfea29ae3f310d7087f76f24ddb4eeb73eedf21f6\" returns successfully"
Jun 21 05:48:15.974039 systemd[1]: cri-containerd-72d5d003eb50200f7437847bfea29ae3f310d7087f76f24ddb4eeb73eedf21f6.scope: Deactivated successfully.
Jun 21 05:48:15.974779 containerd[1551]: time="2025-06-21T05:48:15.974723182Z" level=info msg="received exit event container_id:\"72d5d003eb50200f7437847bfea29ae3f310d7087f76f24ddb4eeb73eedf21f6\" id:\"72d5d003eb50200f7437847bfea29ae3f310d7087f76f24ddb4eeb73eedf21f6\" pid:4480 exited_at:{seconds:1750484895 nanos:974514915}"
Jun 21 05:48:15.975477 containerd[1551]: time="2025-06-21T05:48:15.974869091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72d5d003eb50200f7437847bfea29ae3f310d7087f76f24ddb4eeb73eedf21f6\" id:\"72d5d003eb50200f7437847bfea29ae3f310d7087f76f24ddb4eeb73eedf21f6\" pid:4480 exited_at:{seconds:1750484895 nanos:974514915}"
Jun 21 05:48:15.976259 sshd[4417]: Accepted publickey for core from 147.75.109.163 port 37960 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:48:15.978827 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:48:15.984530 systemd-logind[1512]: New session 24 of user core.
Jun 21 05:48:15.988763 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 21 05:48:16.227207 sshd[4510]: Connection closed by 147.75.109.163 port 37960
Jun 21 05:48:16.227932 sshd-session[4417]: pam_unix(sshd:session): session closed for user core
Jun 21 05:48:16.232411 systemd[1]: sshd@26-172.233.208.28:22-147.75.109.163:37960.service: Deactivated successfully.
Jun 21 05:48:16.234843 systemd[1]: session-24.scope: Deactivated successfully.
Jun 21 05:48:16.236181 systemd-logind[1512]: Session 24 logged out. Waiting for processes to exit.
Jun 21 05:48:16.238168 systemd-logind[1512]: Removed session 24.
Jun 21 05:48:16.294770 systemd[1]: Started sshd@27-172.233.208.28:22-147.75.109.163:38992.service - OpenSSH per-connection server daemon (147.75.109.163:38992).
Jun 21 05:48:16.584543 kubelet[2709]: E0621 05:48:16.584303 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:16.590281 containerd[1551]: time="2025-06-21T05:48:16.590254832Z" level=info msg="CreateContainer within sandbox \"59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 21 05:48:16.596133 containerd[1551]: time="2025-06-21T05:48:16.595859017Z" level=info msg="Container f7ed7a67cad204da41adfc5df2f1a46f942feb47b9543d791c9706430baccd81: CDI devices from CRI Config.CDIDevices: []"
Jun 21 05:48:16.601826 containerd[1551]: time="2025-06-21T05:48:16.601792990Z" level=info msg="CreateContainer within sandbox \"59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f7ed7a67cad204da41adfc5df2f1a46f942feb47b9543d791c9706430baccd81\""
Jun 21 05:48:16.602971 containerd[1551]: time="2025-06-21T05:48:16.602946901Z" level=info msg="StartContainer for \"f7ed7a67cad204da41adfc5df2f1a46f942feb47b9543d791c9706430baccd81\""
Jun 21 05:48:16.604441 containerd[1551]: time="2025-06-21T05:48:16.604390190Z" level=info msg="connecting to shim f7ed7a67cad204da41adfc5df2f1a46f942feb47b9543d791c9706430baccd81" address="unix:///run/containerd/s/26f769b51f4bcd77064645df6c020448dd61b445501072a5e5c90ce4bd692426" protocol=ttrpc version=3
Jun 21 05:48:16.623939 systemd[1]: Started cri-containerd-f7ed7a67cad204da41adfc5df2f1a46f942feb47b9543d791c9706430baccd81.scope - libcontainer container f7ed7a67cad204da41adfc5df2f1a46f942feb47b9543d791c9706430baccd81.
Jun 21 05:48:16.647444 sshd[4518]: Accepted publickey for core from 147.75.109.163 port 38992 ssh2: RSA SHA256:48uuw6LAXf3fj9cIXf0pAJfwA9jkb961IItHRqHcPiw
Jun 21 05:48:16.649916 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:48:16.659498 systemd-logind[1512]: New session 25 of user core.
Jun 21 05:48:16.662717 containerd[1551]: time="2025-06-21T05:48:16.661696116Z" level=info msg="StartContainer for \"f7ed7a67cad204da41adfc5df2f1a46f942feb47b9543d791c9706430baccd81\" returns successfully"
Jun 21 05:48:16.663523 containerd[1551]: time="2025-06-21T05:48:16.663413023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f7ed7a67cad204da41adfc5df2f1a46f942feb47b9543d791c9706430baccd81\" id:\"f7ed7a67cad204da41adfc5df2f1a46f942feb47b9543d791c9706430baccd81\" pid:4532 exited_at:{seconds:1750484896 nanos:662959736}"
Jun 21 05:48:16.663686 containerd[1551]: time="2025-06-21T05:48:16.663603501Z" level=info msg="received exit event container_id:\"f7ed7a67cad204da41adfc5df2f1a46f942feb47b9543d791c9706430baccd81\" id:\"f7ed7a67cad204da41adfc5df2f1a46f942feb47b9543d791c9706430baccd81\" pid:4532 exited_at:{seconds:1750484896 nanos:662959736}"
Jun 21 05:48:16.664972 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 21 05:48:16.665350 systemd[1]: cri-containerd-f7ed7a67cad204da41adfc5df2f1a46f942feb47b9543d791c9706430baccd81.scope: Deactivated successfully.
Jun 21 05:48:17.170490 containerd[1551]: time="2025-06-21T05:48:17.170348210Z" level=info msg="StopPodSandbox for \"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\""
Jun 21 05:48:17.171716 containerd[1551]: time="2025-06-21T05:48:17.171411071Z" level=info msg="TearDown network for sandbox \"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\" successfully"
Jun 21 05:48:17.171716 containerd[1551]: time="2025-06-21T05:48:17.171431491Z" level=info msg="StopPodSandbox for \"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\" returns successfully"
Jun 21 05:48:17.171967 containerd[1551]: time="2025-06-21T05:48:17.171886038Z" level=info msg="RemovePodSandbox for \"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\""
Jun 21 05:48:17.171967 containerd[1551]: time="2025-06-21T05:48:17.171926667Z" level=info msg="Forcibly stopping sandbox \"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\""
Jun 21 05:48:17.172137 containerd[1551]: time="2025-06-21T05:48:17.172030457Z" level=info msg="TearDown network for sandbox \"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\" successfully"
Jun 21 05:48:17.173430 containerd[1551]: time="2025-06-21T05:48:17.173398876Z" level=info msg="Ensure that sandbox 778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d in task-service has been cleanup successfully"
Jun 21 05:48:17.175504 containerd[1551]: time="2025-06-21T05:48:17.175472559Z" level=info msg="RemovePodSandbox \"778b61456e2efea5128c3bb854a3d0865f20a375abb7e5f7faad8c0307cc2d3d\" returns successfully"
Jun 21 05:48:17.175863 containerd[1551]: time="2025-06-21T05:48:17.175829077Z" level=info msg="StopPodSandbox for \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\""
Jun 21 05:48:17.175997 containerd[1551]: time="2025-06-21T05:48:17.175911406Z" level=info msg="TearDown network for sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" successfully"
Jun 21 05:48:17.175997 containerd[1551]: time="2025-06-21T05:48:17.175922176Z" level=info msg="StopPodSandbox for \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" returns successfully"
Jun 21 05:48:17.176328 containerd[1551]: time="2025-06-21T05:48:17.176267343Z" level=info msg="RemovePodSandbox for \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\""
Jun 21 05:48:17.176328 containerd[1551]: time="2025-06-21T05:48:17.176305703Z" level=info msg="Forcibly stopping sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\""
Jun 21 05:48:17.176413 containerd[1551]: time="2025-06-21T05:48:17.176393881Z" level=info msg="TearDown network for sandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" successfully"
Jun 21 05:48:17.177952 containerd[1551]: time="2025-06-21T05:48:17.177918880Z" level=info msg="Ensure that sandbox 34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f in task-service has been cleanup successfully"
Jun 21 05:48:17.180275 containerd[1551]: time="2025-06-21T05:48:17.180190613Z" level=info msg="RemovePodSandbox \"34d072d99d43bc856b654dc2361631ef56ec1fccdc2f2a165989e7602fbea41f\" returns successfully"
Jun 21 05:48:17.278395 kubelet[2709]: E0621 05:48:17.278317 2709 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 21 05:48:17.588143 kubelet[2709]: E0621 05:48:17.587409 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:17.591080 containerd[1551]: time="2025-06-21T05:48:17.591027681Z" level=info msg="CreateContainer within sandbox \"59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 21 05:48:17.606711 containerd[1551]: time="2025-06-21T05:48:17.604862152Z" level=info msg="Container c845cf653d011214835f903a53badc851dcd9ff3ae5283a493eabaf88707973e: CDI devices from CRI Config.CDIDevices: []"
Jun 21 05:48:17.615250 containerd[1551]: time="2025-06-21T05:48:17.615210350Z" level=info msg="CreateContainer within sandbox \"59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c845cf653d011214835f903a53badc851dcd9ff3ae5283a493eabaf88707973e\""
Jun 21 05:48:17.616339 containerd[1551]: time="2025-06-21T05:48:17.615759226Z" level=info msg="StartContainer for \"c845cf653d011214835f903a53badc851dcd9ff3ae5283a493eabaf88707973e\""
Jun 21 05:48:17.617269 containerd[1551]: time="2025-06-21T05:48:17.617123565Z" level=info msg="connecting to shim c845cf653d011214835f903a53badc851dcd9ff3ae5283a493eabaf88707973e" address="unix:///run/containerd/s/26f769b51f4bcd77064645df6c020448dd61b445501072a5e5c90ce4bd692426" protocol=ttrpc version=3
Jun 21 05:48:17.638768 systemd[1]: Started cri-containerd-c845cf653d011214835f903a53badc851dcd9ff3ae5283a493eabaf88707973e.scope - libcontainer container c845cf653d011214835f903a53badc851dcd9ff3ae5283a493eabaf88707973e.
Jun 21 05:48:17.679629 containerd[1551]: time="2025-06-21T05:48:17.679583614Z" level=info msg="StartContainer for \"c845cf653d011214835f903a53badc851dcd9ff3ae5283a493eabaf88707973e\" returns successfully"
Jun 21 05:48:17.682081 systemd[1]: cri-containerd-c845cf653d011214835f903a53badc851dcd9ff3ae5283a493eabaf88707973e.scope: Deactivated successfully.
Jun 21 05:48:17.684509 containerd[1551]: time="2025-06-21T05:48:17.684473175Z" level=info msg="received exit event container_id:\"c845cf653d011214835f903a53badc851dcd9ff3ae5283a493eabaf88707973e\" id:\"c845cf653d011214835f903a53badc851dcd9ff3ae5283a493eabaf88707973e\" pid:4586 exited_at:{seconds:1750484897 nanos:683619552}"
Jun 21 05:48:17.684801 containerd[1551]: time="2025-06-21T05:48:17.684678773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c845cf653d011214835f903a53badc851dcd9ff3ae5283a493eabaf88707973e\" id:\"c845cf653d011214835f903a53badc851dcd9ff3ae5283a493eabaf88707973e\" pid:4586 exited_at:{seconds:1750484897 nanos:683619552}"
Jun 21 05:48:17.710767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c845cf653d011214835f903a53badc851dcd9ff3ae5283a493eabaf88707973e-rootfs.mount: Deactivated successfully.
Jun 21 05:48:18.591826 kubelet[2709]: E0621 05:48:18.591607 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:18.594207 containerd[1551]: time="2025-06-21T05:48:18.594170455Z" level=info msg="CreateContainer within sandbox \"59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 21 05:48:18.609113 containerd[1551]: time="2025-06-21T05:48:18.606836876Z" level=info msg="Container 0ead3cedaab776744426e96705a5f9d73020ee1bbd325a2822fdc3bc08acd2e2: CDI devices from CRI Config.CDIDevices: []"
Jun 21 05:48:18.616981 containerd[1551]: time="2025-06-21T05:48:18.616842088Z" level=info msg="CreateContainer within sandbox \"59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0ead3cedaab776744426e96705a5f9d73020ee1bbd325a2822fdc3bc08acd2e2\""
Jun 21 05:48:18.620119 containerd[1551]: time="2025-06-21T05:48:18.620098522Z" level=info msg="StartContainer for \"0ead3cedaab776744426e96705a5f9d73020ee1bbd325a2822fdc3bc08acd2e2\""
Jun 21 05:48:18.623296 containerd[1551]: time="2025-06-21T05:48:18.623267848Z" level=info msg="connecting to shim 0ead3cedaab776744426e96705a5f9d73020ee1bbd325a2822fdc3bc08acd2e2" address="unix:///run/containerd/s/26f769b51f4bcd77064645df6c020448dd61b445501072a5e5c90ce4bd692426" protocol=ttrpc version=3
Jun 21 05:48:18.647786 systemd[1]: Started cri-containerd-0ead3cedaab776744426e96705a5f9d73020ee1bbd325a2822fdc3bc08acd2e2.scope - libcontainer container 0ead3cedaab776744426e96705a5f9d73020ee1bbd325a2822fdc3bc08acd2e2.
Jun 21 05:48:18.676224 systemd[1]: cri-containerd-0ead3cedaab776744426e96705a5f9d73020ee1bbd325a2822fdc3bc08acd2e2.scope: Deactivated successfully.
Jun 21 05:48:18.678827 containerd[1551]: time="2025-06-21T05:48:18.678781334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ead3cedaab776744426e96705a5f9d73020ee1bbd325a2822fdc3bc08acd2e2\" id:\"0ead3cedaab776744426e96705a5f9d73020ee1bbd325a2822fdc3bc08acd2e2\" pid:4626 exited_at:{seconds:1750484898 nanos:677729742}"
Jun 21 05:48:18.679089 containerd[1551]: time="2025-06-21T05:48:18.679063242Z" level=info msg="received exit event container_id:\"0ead3cedaab776744426e96705a5f9d73020ee1bbd325a2822fdc3bc08acd2e2\" id:\"0ead3cedaab776744426e96705a5f9d73020ee1bbd325a2822fdc3bc08acd2e2\" pid:4626 exited_at:{seconds:1750484898 nanos:677729742}"
Jun 21 05:48:18.681138 containerd[1551]: time="2025-06-21T05:48:18.680846188Z" level=info msg="StartContainer for \"0ead3cedaab776744426e96705a5f9d73020ee1bbd325a2822fdc3bc08acd2e2\" returns successfully"
Jun 21 05:48:18.699402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ead3cedaab776744426e96705a5f9d73020ee1bbd325a2822fdc3bc08acd2e2-rootfs.mount: Deactivated successfully.
Jun 21 05:48:19.596213 kubelet[2709]: E0621 05:48:19.596099 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:19.598842 containerd[1551]: time="2025-06-21T05:48:19.598585144Z" level=info msg="CreateContainer within sandbox \"59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 21 05:48:19.612566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1662386324.mount: Deactivated successfully.
Jun 21 05:48:19.617529 containerd[1551]: time="2025-06-21T05:48:19.614072503Z" level=info msg="Container 35a2573518a77aca55962b84b76f1512f53016eecd62584501b1667b6a837e30: CDI devices from CRI Config.CDIDevices: []"
Jun 21 05:48:19.623144 containerd[1551]: time="2025-06-21T05:48:19.623109294Z" level=info msg="CreateContainer within sandbox \"59553c72fb4efa836ad2a94f8794baeac2a7e7dbff5d84d1ada10884d9381163\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"35a2573518a77aca55962b84b76f1512f53016eecd62584501b1667b6a837e30\""
Jun 21 05:48:19.623721 containerd[1551]: time="2025-06-21T05:48:19.623685839Z" level=info msg="StartContainer for \"35a2573518a77aca55962b84b76f1512f53016eecd62584501b1667b6a837e30\""
Jun 21 05:48:19.624605 containerd[1551]: time="2025-06-21T05:48:19.624579962Z" level=info msg="connecting to shim 35a2573518a77aca55962b84b76f1512f53016eecd62584501b1667b6a837e30" address="unix:///run/containerd/s/26f769b51f4bcd77064645df6c020448dd61b445501072a5e5c90ce4bd692426" protocol=ttrpc version=3
Jun 21 05:48:19.654776 systemd[1]: Started cri-containerd-35a2573518a77aca55962b84b76f1512f53016eecd62584501b1667b6a837e30.scope - libcontainer container 35a2573518a77aca55962b84b76f1512f53016eecd62584501b1667b6a837e30.
Jun 21 05:48:19.691147 containerd[1551]: time="2025-06-21T05:48:19.691082545Z" level=info msg="StartContainer for \"35a2573518a77aca55962b84b76f1512f53016eecd62584501b1667b6a837e30\" returns successfully"
Jun 21 05:48:19.765146 containerd[1551]: time="2025-06-21T05:48:19.765104509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35a2573518a77aca55962b84b76f1512f53016eecd62584501b1667b6a837e30\" id:\"409578d14f8607b35b1b081fffe161512e2cfcf782ef04b1f08b92d002e14f82\" pid:4693 exited_at:{seconds:1750484899 nanos:764824971}"
Jun 21 05:48:20.113703 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jun 21 05:48:20.602236 kubelet[2709]: E0621 05:48:20.602211 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:20.615703 kubelet[2709]: I0621 05:48:20.615273 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q9kvh" podStartSLOduration=5.615250761 podStartE2EDuration="5.615250761s" podCreationTimestamp="2025-06-21 05:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:48:20.614403757 +0000 UTC m=+183.533483197" watchObservedRunningTime="2025-06-21 05:48:20.615250761 +0000 UTC m=+183.534330201"
Jun 21 05:48:21.010022 containerd[1551]: time="2025-06-21T05:48:21.009877917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35a2573518a77aca55962b84b76f1512f53016eecd62584501b1667b6a837e30\" id:\"9e30d379ca414d682527bf02750bd5b082bdaebaa7f91fd6229e889b82c75aa4\" pid:4770 exit_status:1 exited_at:{seconds:1750484901 nanos:9378341}"
Jun 21 05:48:21.841824 kubelet[2709]: E0621 05:48:21.841782 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:22.823442 systemd-networkd[1461]: lxc_health: Link UP
Jun 21 05:48:22.830112 systemd-networkd[1461]: lxc_health: Gained carrier
Jun 21 05:48:23.140062 containerd[1551]: time="2025-06-21T05:48:23.140019817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35a2573518a77aca55962b84b76f1512f53016eecd62584501b1667b6a837e30\" id:\"df35d39bc05e7b6a2b0e8ee4a2172222265baf7aa06d8cf1fea04316ff17d81f\" pid:5206 exited_at:{seconds:1750484903 nanos:139530210}"
Jun 21 05:48:23.171299 kubelet[2709]: E0621 05:48:23.171264 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:23.842404 kubelet[2709]: E0621 05:48:23.842298 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:24.137877 systemd-networkd[1461]: lxc_health: Gained IPv6LL
Jun 21 05:48:24.613104 kubelet[2709]: E0621 05:48:24.613058 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:25.170675 kubelet[2709]: E0621 05:48:25.170539 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:25.294683 containerd[1551]: time="2025-06-21T05:48:25.294618640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35a2573518a77aca55962b84b76f1512f53016eecd62584501b1667b6a837e30\" id:\"62e9851ac097671fbb919b2dd275a76fd09e909007c0c434052053742b61cebd\" pid:5237 exited_at:{seconds:1750484905 nanos:294212504}"
Jun 21 05:48:25.299926 kubelet[2709]: E0621 05:48:25.299741 2709 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49786->127.0.0.1:35975: write tcp 127.0.0.1:49786->127.0.0.1:35975: write: broken pipe
Jun 21 05:48:25.614526 kubelet[2709]: E0621 05:48:25.614317 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Jun 21 05:48:27.388084 containerd[1551]: time="2025-06-21T05:48:27.388041523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35a2573518a77aca55962b84b76f1512f53016eecd62584501b1667b6a837e30\" id:\"585fba6cc910f754656dcb4258d66efe2ee9f720ec678d0c12fd9174af52cd39\" pid:5281 exited_at:{seconds:1750484907 nanos:387524047}"
Jun 21 05:48:29.498089 containerd[1551]: time="2025-06-21T05:48:29.498043784Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35a2573518a77aca55962b84b76f1512f53016eecd62584501b1667b6a837e30\" id:\"5a08a1a50ba3f89969f3720aac86ce108fe4bbf78a552c55a8d953631b239eef\" pid:5304 exited_at:{seconds:1750484909 nanos:497325660}"
Jun 21 05:48:29.560035 sshd[4558]: Connection closed by 147.75.109.163 port 38992
Jun 21 05:48:29.561964 sshd-session[4518]: pam_unix(sshd:session): session closed for user core
Jun 21 05:48:29.565732 systemd[1]: sshd@27-172.233.208.28:22-147.75.109.163:38992.service: Deactivated successfully.
Jun 21 05:48:29.568446 systemd[1]: session-25.scope: Deactivated successfully.
Jun 21 05:48:29.571376 systemd-logind[1512]: Session 25 logged out. Waiting for processes to exit.
Jun 21 05:48:29.572446 systemd-logind[1512]: Removed session 25.