May 15 12:42:28.919225 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 10:42:41 -00 2025
May 15 12:42:28.919254 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 12:42:28.919268 kernel: BIOS-provided physical RAM map:
May 15 12:42:28.919281 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
May 15 12:42:28.919290 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
May 15 12:42:28.919300 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 15 12:42:28.919308 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
May 15 12:42:28.919314 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
May 15 12:42:28.919320 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 15 12:42:28.919326 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 15 12:42:28.919332 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 12:42:28.919338 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 15 12:42:28.919346 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
May 15 12:42:28.919352 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 12:42:28.919360 kernel: NX (Execute Disable) protection: active
May 15 12:42:28.919366 kernel: APIC: Static calls initialized
May 15 12:42:28.919373 kernel: SMBIOS 2.8 present.
May 15 12:42:28.919381 kernel: DMI: Linode Compute Instance, BIOS Not Specified
May 15 12:42:28.919387 kernel: DMI: Memory slots populated: 1/1
May 15 12:42:28.919394 kernel: Hypervisor detected: KVM
May 15 12:42:28.919400 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 12:42:28.919407 kernel: kvm-clock: using sched offset of 6076987327 cycles
May 15 12:42:28.919413 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 12:42:28.919420 kernel: tsc: Detected 2000.002 MHz processor
May 15 12:42:28.919427 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 12:42:28.919434 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 12:42:28.919441 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
May 15 12:42:28.919449 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 15 12:42:28.919456 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 12:42:28.919463 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
May 15 12:42:28.919469 kernel: Using GB pages for direct mapping
May 15 12:42:28.919476 kernel: ACPI: Early table checksum verification disabled
May 15 12:42:28.919482 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
May 15 12:42:28.919489 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:42:28.919495 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:42:28.919502 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:42:28.919510 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 15 12:42:28.919517 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:42:28.919523 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:42:28.919530 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:42:28.919539 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:42:28.919546 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
May 15 12:42:28.919555 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
May 15 12:42:28.919561 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 15 12:42:28.919568 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
May 15 12:42:28.919575 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
May 15 12:42:28.919582 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
May 15 12:42:28.919588 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
May 15 12:42:28.919595 kernel: No NUMA configuration found
May 15 12:42:28.919602 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
May 15 12:42:28.919610 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
May 15 12:42:28.919617 kernel: Zone ranges:
May 15 12:42:28.919624 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 12:42:28.919630 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 15 12:42:28.919637 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
May 15 12:42:28.919644 kernel: Device empty
May 15 12:42:28.919650 kernel: Movable zone start for each node
May 15 12:42:28.919657 kernel: Early memory node ranges
May 15 12:42:28.919664 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 15 12:42:28.919671 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 15 12:42:28.919679 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
May 15 12:42:28.919686 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
May 15 12:42:28.919693 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 12:42:28.919699 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 15 12:42:28.919706 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 15 12:42:28.919713 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 12:42:28.919720 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 12:42:28.919727 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 12:42:28.919733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 12:42:28.919742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 12:42:28.919748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 12:42:28.919755 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 12:42:28.919762 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 12:42:28.919769 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 12:42:28.919776 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 12:42:28.919782 kernel: TSC deadline timer available
May 15 12:42:28.919789 kernel: CPU topo: Max. logical packages: 1
May 15 12:42:28.919796 kernel: CPU topo: Max. logical dies: 1
May 15 12:42:28.919804 kernel: CPU topo: Max. dies per package: 1
May 15 12:42:28.919811 kernel: CPU topo: Max. threads per core: 1
May 15 12:42:28.919817 kernel: CPU topo: Num. cores per package: 2
May 15 12:42:28.919824 kernel: CPU topo: Num. threads per package: 2
May 15 12:42:28.919831 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 15 12:42:28.920097 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 12:42:28.920104 kernel: kvm-guest: KVM setup pv remote TLB flush
May 15 12:42:28.920111 kernel: kvm-guest: setup PV sched yield
May 15 12:42:28.920118 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 15 12:42:28.920127 kernel: Booting paravirtualized kernel on KVM
May 15 12:42:28.920134 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 12:42:28.920141 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 15 12:42:28.920148 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 15 12:42:28.920155 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 15 12:42:28.920162 kernel: pcpu-alloc: [0] 0 1
May 15 12:42:28.920168 kernel: kvm-guest: PV spinlocks enabled
May 15 12:42:28.920175 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 15 12:42:28.920183 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 12:42:28.920192 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 12:42:28.920199 kernel: random: crng init done
May 15 12:42:28.920206 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 12:42:28.920212 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 12:42:28.920219 kernel: Fallback order for Node 0: 0
May 15 12:42:28.920226 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
May 15 12:42:28.920233 kernel: Policy zone: Normal
May 15 12:42:28.920240 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 12:42:28.920248 kernel: software IO TLB: area num 2.
May 15 12:42:28.920255 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 15 12:42:28.920262 kernel: ftrace: allocating 40065 entries in 157 pages
May 15 12:42:28.920268 kernel: ftrace: allocated 157 pages with 5 groups
May 15 12:42:28.920275 kernel: Dynamic Preempt: voluntary
May 15 12:42:28.920282 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 12:42:28.920289 kernel: rcu: RCU event tracing is enabled.
May 15 12:42:28.920296 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 15 12:42:28.920303 kernel: Trampoline variant of Tasks RCU enabled.
May 15 12:42:28.920310 kernel: Rude variant of Tasks RCU enabled.
May 15 12:42:28.920319 kernel: Tracing variant of Tasks RCU enabled.
May 15 12:42:28.920325 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 12:42:28.920332 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 15 12:42:28.920339 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 12:42:28.920352 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 12:42:28.920400 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 12:42:28.920408 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 15 12:42:28.920415 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 12:42:28.920422 kernel: Console: colour VGA+ 80x25
May 15 12:42:28.920429 kernel: printk: legacy console [tty0] enabled
May 15 12:42:28.920436 kernel: printk: legacy console [ttyS0] enabled
May 15 12:42:28.920443 kernel: ACPI: Core revision 20240827
May 15 12:42:28.920453 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 12:42:28.920460 kernel: APIC: Switch to symmetric I/O mode setup
May 15 12:42:28.920468 kernel: x2apic enabled
May 15 12:42:28.920475 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 12:42:28.920484 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 15 12:42:28.920491 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 15 12:42:28.920498 kernel: kvm-guest: setup PV IPIs
May 15 12:42:28.920505 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 12:42:28.920513 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1cd42fed8cc, max_idle_ns: 440795202126 ns
May 15 12:42:28.920520 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000002)
May 15 12:42:28.920527 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 15 12:42:28.920534 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 15 12:42:28.920541 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 15 12:42:28.920550 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 12:42:28.920557 kernel: Spectre V2 : Mitigation: Retpolines
May 15 12:42:28.920564 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 15 12:42:28.920572 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 15 12:42:28.920579 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 15 12:42:28.920586 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 12:42:28.920593 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 15 12:42:28.920600 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 15 12:42:28.920608 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 15 12:42:28.920617 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 15 12:42:28.920624 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 12:42:28.920631 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 12:42:28.920638 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 12:42:28.920645 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 15 12:42:28.920652 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 15 12:42:28.920659 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
May 15 12:42:28.920666 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
May 15 12:42:28.920675 kernel: Freeing SMP alternatives memory: 32K
May 15 12:42:28.920681 kernel: pid_max: default: 32768 minimum: 301
May 15 12:42:28.920688 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 15 12:42:28.920695 kernel: landlock: Up and running.
May 15 12:42:28.920702 kernel: SELinux: Initializing.
May 15 12:42:28.920709 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 12:42:28.920716 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 12:42:28.920723 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
May 15 12:42:28.920730 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 15 12:42:28.920739 kernel: ... version:                0
May 15 12:42:28.920746 kernel: ... bit width:              48
May 15 12:42:28.920753 kernel: ... generic registers:      6
May 15 12:42:28.920760 kernel: ... value mask:             0000ffffffffffff
May 15 12:42:28.920767 kernel: ... max period:             00007fffffffffff
May 15 12:42:28.920773 kernel: ... fixed-purpose events:   0
May 15 12:42:28.920780 kernel: ... event mask:             000000000000003f
May 15 12:42:28.920787 kernel: signal: max sigframe size: 3376
May 15 12:42:28.920794 kernel: rcu: Hierarchical SRCU implementation.
May 15 12:42:28.920801 kernel: rcu: Max phase no-delay instances is 400.
May 15 12:42:28.920810 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 15 12:42:28.920817 kernel: smp: Bringing up secondary CPUs ...
May 15 12:42:28.920824 kernel: smpboot: x86: Booting SMP configuration:
May 15 12:42:28.920830 kernel: .... node #0, CPUs: #1
May 15 12:42:28.921891 kernel: smp: Brought up 1 node, 2 CPUs
May 15 12:42:28.921900 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
May 15 12:42:28.921907 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 227296K reserved, 0K cma-reserved)
May 15 12:42:28.921914 kernel: devtmpfs: initialized
May 15 12:42:28.921921 kernel: x86/mm: Memory block size: 128MB
May 15 12:42:28.921932 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 12:42:28.921939 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 15 12:42:28.921946 kernel: pinctrl core: initialized pinctrl subsystem
May 15 12:42:28.921953 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 12:42:28.921959 kernel: audit: initializing netlink subsys (disabled)
May 15 12:42:28.921966 kernel: audit: type=2000 audit(1747312946.015:1): state=initialized audit_enabled=0 res=1
May 15 12:42:28.921973 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 12:42:28.921979 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 12:42:28.921988 kernel: cpuidle: using governor menu
May 15 12:42:28.921994 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 12:42:28.922001 kernel: dca service started, version 1.12.1
May 15 12:42:28.922008 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 15 12:42:28.922014 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 15 12:42:28.922021 kernel: PCI: Using configuration type 1 for base access
May 15 12:42:28.922028 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 12:42:28.922034 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 12:42:28.922041 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 15 12:42:28.922050 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 12:42:28.922057 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 12:42:28.922063 kernel: ACPI: Added _OSI(Module Device)
May 15 12:42:28.922070 kernel: ACPI: Added _OSI(Processor Device)
May 15 12:42:28.922077 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 12:42:28.922083 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 12:42:28.922090 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 12:42:28.922096 kernel: ACPI: Interpreter enabled
May 15 12:42:28.922103 kernel: ACPI: PM: (supports S0 S3 S5)
May 15 12:42:28.922110 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 12:42:28.922118 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 12:42:28.922125 kernel: PCI: Using E820 reservations for host bridge windows
May 15 12:42:28.922132 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 15 12:42:28.922138 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 12:42:28.922302 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 12:42:28.922416 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 15 12:42:28.922523 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 15 12:42:28.922535 kernel: PCI host bridge to bus 0000:00
May 15 12:42:28.922645 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 12:42:28.922742 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 12:42:28.922854 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 12:42:28.922955 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 15 12:42:28.923050 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 15 12:42:28.923144 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
May 15 12:42:28.923243 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 12:42:28.923370 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 15 12:42:28.923490 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 15 12:42:28.927219 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 15 12:42:28.927378 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 15 12:42:28.927520 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 15 12:42:28.928198 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 12:42:28.928453 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 15 12:42:28.931025 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
May 15 12:42:28.931148 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 15 12:42:28.931263 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 15 12:42:28.931390 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 15 12:42:28.931504 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
May 15 12:42:28.931621 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 15 12:42:28.931938 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 15 12:42:28.932052 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 15 12:42:28.932170 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 15 12:42:28.932283 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 15 12:42:28.932401 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 15 12:42:28.932517 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
May 15 12:42:28.932625 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
May 15 12:42:28.932742 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 15 12:42:28.935638 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 15 12:42:28.935655 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 12:42:28.935664 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 12:42:28.935671 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 12:42:28.935678 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 12:42:28.935688 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 15 12:42:28.935695 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 15 12:42:28.935702 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 15 12:42:28.935709 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 15 12:42:28.935716 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 15 12:42:28.935722 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 15 12:42:28.935729 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 15 12:42:28.935736 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 15 12:42:28.935743 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 15 12:42:28.935752 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 15 12:42:28.935758 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 15 12:42:28.935765 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 15 12:42:28.935772 kernel: iommu: Default domain type: Translated
May 15 12:42:28.935817 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 12:42:28.935825 kernel: PCI: Using ACPI for IRQ routing
May 15 12:42:28.935845 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 12:42:28.935852 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
May 15 12:42:28.935859 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
May 15 12:42:28.935983 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 15 12:42:28.936094 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 15 12:42:28.936199 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 12:42:28.936209 kernel: vgaarb: loaded
May 15 12:42:28.936216 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 12:42:28.936223 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 12:42:28.936230 kernel: clocksource: Switched to clocksource kvm-clock
May 15 12:42:28.936237 kernel: VFS: Disk quotas dquot_6.6.0
May 15 12:42:28.936247 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 12:42:28.936254 kernel: pnp: PnP ACPI init
May 15 12:42:28.936373 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 15 12:42:28.936385 kernel: pnp: PnP ACPI: found 5 devices
May 15 12:42:28.936392 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 12:42:28.936399 kernel: NET: Registered PF_INET protocol family
May 15 12:42:28.936406 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 12:42:28.936413 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 12:42:28.936422 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 12:42:28.936429 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 12:42:28.936436 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 12:42:28.936442 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 12:42:28.936449 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 12:42:28.936456 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 12:42:28.936463 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 12:42:28.936470 kernel: NET: Registered PF_XDP protocol family
May 15 12:42:28.936758 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 12:42:28.936874 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 12:42:28.936972 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 12:42:28.937067 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 15 12:42:28.937162 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 15 12:42:28.937256 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
May 15 12:42:28.937265 kernel: PCI: CLS 0 bytes, default 64
May 15 12:42:28.937272 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 15 12:42:28.937279 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
May 15 12:42:28.937286 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1cd42fed8cc, max_idle_ns: 440795202126 ns
May 15 12:42:28.937297 kernel: Initialise system trusted keyrings
May 15 12:42:28.937304 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 12:42:28.937310 kernel: Key type asymmetric registered
May 15 12:42:28.937317 kernel: Asymmetric key parser 'x509' registered
May 15 12:42:28.937324 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 12:42:28.937331 kernel: io scheduler mq-deadline registered
May 15 12:42:28.937338 kernel: io scheduler kyber registered
May 15 12:42:28.937345 kernel: io scheduler bfq registered
May 15 12:42:28.937352 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 12:42:28.937361 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 12:42:28.937368 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 12:42:28.937375 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 12:42:28.937382 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 12:42:28.937389 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 12:42:28.937396 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 12:42:28.937403 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 12:42:28.937698 kernel: rtc_cmos 00:03: RTC can wake from S4
May 15 12:42:28.937711 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 12:42:28.937809 kernel: rtc_cmos 00:03: registered as rtc0
May 15 12:42:28.937929 kernel: rtc_cmos 00:03: setting system clock to 2025-05-15T12:42:28 UTC (1747312948)
May 15 12:42:28.938034 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 15 12:42:28.938044 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 15 12:42:28.938051 kernel: NET: Registered PF_INET6 protocol family
May 15 12:42:28.938058 kernel: Segment Routing with IPv6
May 15 12:42:28.938065 kernel: In-situ OAM (IOAM) with IPv6
May 15 12:42:28.938075 kernel: NET: Registered PF_PACKET protocol family
May 15 12:42:28.938081 kernel: Key type dns_resolver registered
May 15 12:42:28.938088 kernel: IPI shorthand broadcast: enabled
May 15 12:42:28.938095 kernel: sched_clock: Marking stable (2924003169, 225460984)->(3240840868, -91376715)
May 15 12:42:28.938102 kernel: registered taskstats version 1
May 15 12:42:28.938109 kernel: Loading compiled-in X.509 certificates
May 15 12:42:28.938116 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 05e05785144663be6df1db78301487421c4773b6'
May 15 12:42:28.938123 kernel: Demotion targets for Node 0: null
May 15 12:42:28.938130 kernel: Key type .fscrypt registered
May 15 12:42:28.938136 kernel: Key type fscrypt-provisioning registered
May 15 12:42:28.938146 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 12:42:28.938153 kernel: ima: Allocated hash algorithm: sha1
May 15 12:42:28.938159 kernel: ima: No architecture policies found
May 15 12:42:28.938166 kernel: clk: Disabling unused clocks
May 15 12:42:28.938173 kernel: Warning: unable to open an initial console.
May 15 12:42:28.938180 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 15 12:42:28.938187 kernel: Write protecting the kernel read-only data: 24576k
May 15 12:42:28.938194 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 15 12:42:28.938203 kernel: Run /init as init process
May 15 12:42:28.938210 kernel:   with arguments:
May 15 12:42:28.938216 kernel:     /init
May 15 12:42:28.938223 kernel:   with environment:
May 15 12:42:28.938230 kernel:     HOME=/
May 15 12:42:28.938249 kernel:     TERM=linux
May 15 12:42:28.938258 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 12:42:28.938266 systemd[1]: Successfully made /usr/ read-only.
May 15 12:42:28.938277 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 12:42:28.938287 systemd[1]: Detected virtualization kvm.
May 15 12:42:28.938295 systemd[1]: Detected architecture x86-64.
May 15 12:42:28.938302 systemd[1]: Running in initrd.
May 15 12:42:28.938310 systemd[1]: No hostname configured, using default hostname.
May 15 12:42:28.938318 systemd[1]: Hostname set to .
May 15 12:42:28.938325 systemd[1]: Initializing machine ID from random generator.
May 15 12:42:28.938332 systemd[1]: Queued start job for default target initrd.target.
May 15 12:42:28.938342 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 12:42:28.938350 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 12:42:28.938358 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 12:42:28.938366 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 12:42:28.938373 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 12:42:28.938382 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 12:42:28.938390 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 12:42:28.938400 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 12:42:28.938408 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 12:42:28.938416 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 12:42:28.938423 systemd[1]: Reached target paths.target - Path Units.
May 15 12:42:28.938431 systemd[1]: Reached target slices.target - Slice Units.
May 15 12:42:28.938438 systemd[1]: Reached target swap.target - Swaps.
May 15 12:42:28.938446 systemd[1]: Reached target timers.target - Timer Units.
May 15 12:42:28.938456 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 12:42:28.938465 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 12:42:28.938473 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 12:42:28.938481 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 15 12:42:28.938488 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 12:42:28.938496 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 12:42:28.938504 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 12:42:28.938515 systemd[1]: Reached target sockets.target - Socket Units.
May 15 12:42:28.938523 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 12:42:28.938531 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 12:42:28.938539 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 12:42:28.938546 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 15 12:42:28.938554 systemd[1]: Starting systemd-fsck-usr.service...
May 15 12:42:28.938562 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 12:42:28.938569 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 12:42:28.938579 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 12:42:28.938587 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 12:42:28.938615 systemd-journald[206]: Collecting audit messages is disabled.
May 15 12:42:28.938638 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 12:42:28.938646 systemd[1]: Finished systemd-fsck-usr.service.
May 15 12:42:28.938654 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 12:42:28.938662 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 12:42:28.938670 systemd-journald[206]: Journal started
May 15 12:42:28.938689 systemd-journald[206]: Runtime Journal (/run/log/journal/dbb4bc5e21e443e193d4f33af785727a) is 8M, max 78.5M, 70.5M free.
May 15 12:42:28.897238 systemd-modules-load[207]: Inserted module 'overlay'
May 15 12:42:28.997903 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 12:42:28.997927 kernel: Bridge firewalling registered
May 15 12:42:28.941482 systemd-modules-load[207]: Inserted module 'br_netfilter'
May 15 12:42:28.998960 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 12:42:28.999897 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 12:42:29.001056 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 12:42:29.005067 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 12:42:29.007965 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 12:42:29.010976 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 12:42:29.014195 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 12:42:29.028158 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 12:42:29.029630 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 12:42:29.034276 systemd-tmpfiles[227]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 15 12:42:29.036397 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 12:42:29.039937 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 12:42:29.042424 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 12:42:29.051930 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 12:42:29.064486 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 12:42:29.094170 systemd-resolved[245]: Positive Trust Anchors:
May 15 12:42:29.094894 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 12:42:29.094925 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 12:42:29.100683 systemd-resolved[245]: Defaulting to hostname 'linux'.
May 15 12:42:29.101917 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 12:42:29.102762 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 12:42:29.150877 kernel: SCSI subsystem initialized
May 15 12:42:29.159895 kernel: Loading iSCSI transport class v2.0-870.
May 15 12:42:29.169855 kernel: iscsi: registered transport (tcp)
May 15 12:42:29.190047 kernel: iscsi: registered transport (qla4xxx)
May 15 12:42:29.190078 kernel: QLogic iSCSI HBA Driver
May 15 12:42:29.208993 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 12:42:29.225769 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 12:42:29.229087 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 12:42:29.271521 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 12:42:29.273297 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 12:42:29.318860 kernel: raid6: avx2x4 gen() 31580 MB/s
May 15 12:42:29.336862 kernel: raid6: avx2x2 gen() 30697 MB/s
May 15 12:42:29.355259 kernel: raid6: avx2x1 gen() 21057 MB/s
May 15 12:42:29.355284 kernel: raid6: using algorithm avx2x4 gen() 31580 MB/s
May 15 12:42:29.374434 kernel: raid6: .... xor() 4927 MB/s, rmw enabled
May 15 12:42:29.374471 kernel: raid6: using avx2x2 recovery algorithm
May 15 12:42:29.393876 kernel: xor: automatically using best checksumming function avx
May 15 12:42:29.531867 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 12:42:29.539754 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 12:42:29.542265 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 12:42:29.567505 systemd-udevd[454]: Using default interface naming scheme 'v255'.
May 15 12:42:29.573186 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 12:42:29.576464 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 12:42:29.601174 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
May 15 12:42:29.627867 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 12:42:29.629970 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 12:42:29.690822 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 12:42:29.695370 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 12:42:29.766869 kernel: cryptd: max_cpu_qlen set to 1000
May 15 12:42:29.861284 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 12:42:29.861451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 12:42:29.863037 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 12:42:29.884908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 12:42:29.892860 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
May 15 12:42:29.901696 kernel: scsi host0: Virtio SCSI HBA
May 15 12:42:29.901871 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 15 12:42:29.897824 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 12:42:29.906195 kernel: libata version 3.00 loaded.
May 15 12:42:29.913856 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 15 12:42:29.929238 kernel: ahci 0000:00:1f.2: version 3.0
May 15 12:42:29.948558 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 15 12:42:29.948574 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 15 12:42:29.948715 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 15 12:42:29.948865 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 15 12:42:29.949614 kernel: AES CTR mode by8 optimization enabled
May 15 12:42:29.949654 kernel: scsi host1: ahci
May 15 12:42:29.950004 kernel: scsi host2: ahci
May 15 12:42:29.950138 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 15 12:42:29.956996 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
May 15 12:42:29.957143 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 15 12:42:29.957273 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 15 12:42:29.957400 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 15 12:42:29.957531 kernel: scsi host3: ahci
May 15 12:42:29.957658 kernel: scsi host4: ahci
May 15 12:42:29.957781 kernel: scsi host5: ahci
May 15 12:42:29.963845 kernel: scsi host6: ahci
May 15 12:42:29.965042 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
May 15 12:42:29.965055 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
May 15 12:42:29.965065 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
May 15 12:42:29.965079 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
May 15 12:42:29.965088 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
May 15 12:42:29.965098 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
May 15 12:42:29.965106 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 12:42:29.965115 kernel: GPT:9289727 != 167739391
May 15 12:42:29.965124 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 12:42:29.965133 kernel: GPT:9289727 != 167739391
May 15 12:42:29.965142 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 12:42:29.965151 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 12:42:29.965162 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 15 12:42:30.036016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 12:42:30.258218 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 15 12:42:30.258287 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 15 12:42:30.258300 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 15 12:42:30.266844 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 15 12:42:30.266885 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 15 12:42:30.268863 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 15 12:42:30.331414 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 15 12:42:30.339464 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 15 12:42:30.346675 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 15 12:42:30.348032 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 15 12:42:30.348827 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 12:42:30.358986 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 15 12:42:30.360925 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 12:42:30.361506 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 12:42:30.362721 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 12:42:30.364560 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 12:42:30.367934 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 12:42:30.380896 disk-uuid[637]: Primary Header is updated.
May 15 12:42:30.380896 disk-uuid[637]: Secondary Entries is updated.
May 15 12:42:30.380896 disk-uuid[637]: Secondary Header is updated.
May 15 12:42:30.386476 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 12:42:30.392179 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 12:42:30.406864 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 12:42:31.404873 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 12:42:31.406023 disk-uuid[640]: The operation has completed successfully.
May 15 12:42:31.462672 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 12:42:31.462788 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 12:42:31.488527 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 12:42:31.504187 sh[659]: Success
May 15 12:42:31.522114 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 12:42:31.522144 kernel: device-mapper: uevent: version 1.0.3
May 15 12:42:31.525373 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 15 12:42:31.535861 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 15 12:42:31.579942 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 12:42:31.587906 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 12:42:31.590883 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 12:42:31.613035 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 15 12:42:31.613066 kernel: BTRFS: device fsid 2d504097-db49-4d66-a0d5-eeb665b21004 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (671)
May 15 12:42:31.615861 kernel: BTRFS info (device dm-0): first mount of filesystem 2d504097-db49-4d66-a0d5-eeb665b21004
May 15 12:42:31.618353 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 15 12:42:31.620113 kernel: BTRFS info (device dm-0): using free-space-tree
May 15 12:42:31.627657 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 12:42:31.628612 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 15 12:42:31.629621 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 12:42:31.630348 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 12:42:31.634051 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 12:42:31.660870 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (706)
May 15 12:42:31.663908 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 12:42:31.667143 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 12:42:31.667162 kernel: BTRFS info (device sda6): using free-space-tree
May 15 12:42:31.677870 kernel: BTRFS info (device sda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 12:42:31.678046 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 12:42:31.679786 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 12:42:31.778313 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 12:42:31.782971 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 12:42:31.785208 ignition[765]: Ignition 2.21.0
May 15 12:42:31.785215 ignition[765]: Stage: fetch-offline
May 15 12:42:31.785248 ignition[765]: no configs at "/usr/lib/ignition/base.d"
May 15 12:42:31.785257 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:42:31.787949 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 12:42:31.785334 ignition[765]: parsed url from cmdline: ""
May 15 12:42:31.785338 ignition[765]: no config URL provided
May 15 12:42:31.785342 ignition[765]: reading system config file "/usr/lib/ignition/user.ign"
May 15 12:42:31.785350 ignition[765]: no config at "/usr/lib/ignition/user.ign"
May 15 12:42:31.785355 ignition[765]: failed to fetch config: resource requires networking
May 15 12:42:31.785798 ignition[765]: Ignition finished successfully
May 15 12:42:31.823753 systemd-networkd[846]: lo: Link UP
May 15 12:42:31.823765 systemd-networkd[846]: lo: Gained carrier
May 15 12:42:31.825384 systemd-networkd[846]: Enumeration completed
May 15 12:42:31.826206 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 12:42:31.826428 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 12:42:31.826432 systemd-networkd[846]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 12:42:31.827740 systemd[1]: Reached target network.target - Network.
May 15 12:42:31.828682 systemd-networkd[846]: eth0: Link UP
May 15 12:42:31.828685 systemd-networkd[846]: eth0: Gained carrier
May 15 12:42:31.828693 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 12:42:31.829946 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 15 12:42:31.861444 ignition[850]: Ignition 2.21.0
May 15 12:42:31.861459 ignition[850]: Stage: fetch
May 15 12:42:31.861569 ignition[850]: no configs at "/usr/lib/ignition/base.d"
May 15 12:42:31.861579 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:42:31.861644 ignition[850]: parsed url from cmdline: ""
May 15 12:42:31.861647 ignition[850]: no config URL provided
May 15 12:42:31.861652 ignition[850]: reading system config file "/usr/lib/ignition/user.ign"
May 15 12:42:31.861659 ignition[850]: no config at "/usr/lib/ignition/user.ign"
May 15 12:42:31.861697 ignition[850]: PUT http://169.254.169.254/v1/token: attempt #1
May 15 12:42:31.861826 ignition[850]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 15 12:42:32.062045 ignition[850]: PUT http://169.254.169.254/v1/token: attempt #2
May 15 12:42:32.062205 ignition[850]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 15 12:42:32.261911 systemd-networkd[846]: eth0: DHCPv4 address 172.234.214.203/24, gateway 172.234.214.1 acquired from 23.192.121.7
May 15 12:42:32.462793 ignition[850]: PUT http://169.254.169.254/v1/token: attempt #3
May 15 12:42:32.555986 ignition[850]: PUT result: OK
May 15 12:42:32.556877 ignition[850]: GET http://169.254.169.254/v1/user-data: attempt #1
May 15 12:42:32.667215 ignition[850]: GET result: OK
May 15 12:42:32.667310 ignition[850]: parsing config with SHA512: 638e1c0d1fe985b15dbe814f3c2cd76423291a33c58e6677f6d58d0e22c5e6e605d9b232cc57f0883bef8205e8e727a60b79f13348bc3a838345bca43602b072
May 15 12:42:32.671291 unknown[850]: fetched base config from "system"
May 15 12:42:32.671878 unknown[850]: fetched base config from "system"
May 15 12:42:32.671886 unknown[850]: fetched user config from "akamai"
May 15 12:42:32.672231 ignition[850]: fetch: fetch complete
May 15 12:42:32.672237 ignition[850]: fetch: fetch passed
May 15 12:42:32.672298 ignition[850]: Ignition finished successfully
May 15 12:42:32.676185 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 15 12:42:32.678958 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 12:42:32.704881 ignition[858]: Ignition 2.21.0
May 15 12:42:32.704893 ignition[858]: Stage: kargs
May 15 12:42:32.705011 ignition[858]: no configs at "/usr/lib/ignition/base.d"
May 15 12:42:32.705021 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:42:32.705703 ignition[858]: kargs: kargs passed
May 15 12:42:32.707875 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 12:42:32.705770 ignition[858]: Ignition finished successfully
May 15 12:42:32.710760 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 12:42:32.736027 ignition[865]: Ignition 2.21.0
May 15 12:42:32.736755 ignition[865]: Stage: disks
May 15 12:42:32.736925 ignition[865]: no configs at "/usr/lib/ignition/base.d"
May 15 12:42:32.736939 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:42:32.737762 ignition[865]: disks: disks passed
May 15 12:42:32.737803 ignition[865]: Ignition finished successfully
May 15 12:42:32.740378 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 12:42:32.741712 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 12:42:32.742306 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 12:42:32.743572 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 12:42:32.744791 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 12:42:32.745979 systemd[1]: Reached target basic.target - Basic System.
May 15 12:42:32.748190 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 12:42:32.775903 systemd-fsck[873]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 15 12:42:32.778253 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 12:42:32.780798 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 12:42:32.900861 kernel: EXT4-fs (sda9): mounted filesystem f7dea4bd-2644-4592-b85b-330f322c4d2b r/w with ordered data mode. Quota mode: none.
May 15 12:42:32.901082 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 12:42:32.902083 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 12:42:32.904072 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 12:42:32.906904 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 12:42:32.908232 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 15 12:42:32.909697 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 12:42:32.909721 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 12:42:32.915831 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 12:42:32.918474 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 12:42:32.927855 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (881)
May 15 12:42:32.932745 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 12:42:32.932773 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 12:42:32.932789 kernel: BTRFS info (device sda6): using free-space-tree
May 15 12:42:32.939284 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 12:42:32.974884 initrd-setup-root[905]: cut: /sysroot/etc/passwd: No such file or directory
May 15 12:42:32.980169 initrd-setup-root[912]: cut: /sysroot/etc/group: No such file or directory
May 15 12:42:32.985107 initrd-setup-root[919]: cut: /sysroot/etc/shadow: No such file or directory
May 15 12:42:32.989755 initrd-setup-root[926]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 12:42:33.082780 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 12:42:33.085857 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 12:42:33.087798 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 12:42:33.103143 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 12:42:33.106046 kernel: BTRFS info (device sda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 12:42:33.121020 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 12:42:33.129508 ignition[994]: INFO : Ignition 2.21.0
May 15 12:42:33.129508 ignition[994]: INFO : Stage: mount
May 15 12:42:33.130981 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 12:42:33.130981 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:42:33.130981 ignition[994]: INFO : mount: mount passed
May 15 12:42:33.130981 ignition[994]: INFO : Ignition finished successfully
May 15 12:42:33.132329 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 12:42:33.135438 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 12:42:33.224237 systemd-networkd[846]: eth0: Gained IPv6LL
May 15 12:42:33.903085 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 12:42:33.928863 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (1007)
May 15 12:42:33.932395 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 12:42:33.932431 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 12:42:33.934326 kernel: BTRFS info (device sda6): using free-space-tree
May 15 12:42:33.940341 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 12:42:33.968867 ignition[1024]: INFO : Ignition 2.21.0
May 15 12:42:33.968867 ignition[1024]: INFO : Stage: files
May 15 12:42:33.968867 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 12:42:33.968867 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:42:33.971538 ignition[1024]: DEBUG : files: compiled without relabeling support, skipping
May 15 12:42:33.971538 ignition[1024]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 12:42:33.971538 ignition[1024]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 12:42:33.973780 ignition[1024]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 12:42:33.973780 ignition[1024]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 12:42:33.973780 ignition[1024]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 12:42:33.973552 unknown[1024]: wrote ssh authorized keys file for user: core
May 15 12:42:33.976693 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 12:42:33.976693 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 15 12:42:34.297139 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 12:42:34.489238 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 12:42:34.490328 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 12:42:34.490328 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 15 12:42:34.781438 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 12:42:34.832042 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 12:42:34.832042 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 12:42:34.834094 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 12:42:34.834094 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 12:42:34.834094 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 12:42:34.834094 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 12:42:34.834094 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 12:42:34.834094 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 12:42:34.834094 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 12:42:34.840125 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 12:42:34.840125 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 12:42:34.840125 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 12:42:34.840125 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 12:42:34.840125 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 12:42:34.840125 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 15 12:42:35.037570 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 12:42:35.727320 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 12:42:35.727320 ignition[1024]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 12:42:35.730058 ignition[1024]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 12:42:35.731215 ignition[1024]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 12:42:35.731215 ignition[1024]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 12:42:35.731215 ignition[1024]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 15 12:42:35.731215 ignition[1024]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 15 12:42:35.731215 ignition[1024]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 15 12:42:35.731215 ignition[1024]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 15 12:42:35.731215 ignition[1024]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 15 12:42:35.731215 ignition[1024]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 15 12:42:35.731215 ignition[1024]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 12:42:35.731215 ignition[1024]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 12:42:35.731215 ignition[1024]: INFO : files: files passed
May 15 12:42:35.743574 ignition[1024]: INFO : Ignition finished successfully
May 15 12:42:35.733816 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 12:42:35.738954 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 12:42:35.741991 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 12:42:35.755346 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 12:42:35.756985 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 12:42:35.761605 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 12:42:35.761605 initrd-setup-root-after-ignition[1054]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 12:42:35.763958 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 12:42:35.766154 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 12:42:35.767809 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 12:42:35.769808 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 12:42:35.821181 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 12:42:35.821312 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 12:42:35.822943 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 12:42:35.823970 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 12:42:35.825270 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 12:42:35.826101 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 12:42:35.863943 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 12:42:35.867243 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 12:42:35.888218 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 12:42:35.889618 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 12:42:35.890261 systemd[1]: Stopped target timers.target - Timer Units.
May 15 12:42:35.891122 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 12:42:35.891225 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 12:42:35.893179 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 12:42:35.894143 systemd[1]: Stopped target basic.target - Basic System.
May 15 12:42:35.895251 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 12:42:35.896634 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 12:42:35.897797 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 12:42:35.899072 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 15 12:42:35.900422 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 12:42:35.901833 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 12:42:35.903179 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 12:42:35.904611 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 12:42:35.905894 systemd[1]: Stopped target swap.target - Swaps.
May 15 12:42:35.907241 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 12:42:35.907349 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 12:42:35.909229 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 12:42:35.910230 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 12:42:35.911267 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 12:42:35.911886 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 12:42:35.912753 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 12:42:35.912916 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 12:42:35.914903 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 12:42:35.915062 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 12:42:35.916912 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 12:42:35.917008 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 12:42:35.919271 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 12:42:35.920785 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 12:42:35.921011 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 12:42:35.934763 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 12:42:35.937637 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 12:42:35.938462 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 12:42:35.941344 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 12:42:35.942036 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 12:42:35.951699 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 12:42:35.952625 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 12:42:35.963679 ignition[1078]: INFO : Ignition 2.21.0
May 15 12:42:35.965807 ignition[1078]: INFO : Stage: umount
May 15 12:42:35.965807 ignition[1078]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 12:42:35.965807 ignition[1078]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:42:35.965807 ignition[1078]: INFO : umount: umount passed
May 15 12:42:35.965807 ignition[1078]: INFO : Ignition finished successfully
May 15 12:42:35.970972 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 12:42:35.971673 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 12:42:35.973406 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 12:42:35.973492 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 12:42:35.974946 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 12:42:35.974996 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 12:42:35.976124 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 15 12:42:35.976176 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 15 12:42:35.978821 systemd[1]: Stopped target network.target - Network.
May 15 12:42:35.979374 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 12:42:35.979443 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 12:42:35.980573 systemd[1]: Stopped target paths.target - Path Units.
May 15 12:42:35.981731 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 12:42:35.985947 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 12:42:35.986576 systemd[1]: Stopped target slices.target - Slice Units.
May 15 12:42:35.988141 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 12:42:35.989505 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 12:42:35.989561 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 12:42:35.990787 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 12:42:35.990826 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 12:42:35.991961 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 12:42:35.992212 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 12:42:35.993724 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 12:42:35.993986 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 12:42:35.995267 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 12:42:35.996625 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 12:42:35.999401 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 12:42:36.000059 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 12:42:36.000170 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 12:42:36.002544 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 12:42:36.002674 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 12:42:36.006489 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 15 12:42:36.007423 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 12:42:36.007513 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 12:42:36.008745 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 12:42:36.008799 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 12:42:36.011359 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 15 12:42:36.011591 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 12:42:36.011727 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 12:42:36.013815 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 15 12:42:36.014344 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 15 12:42:36.015444 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 12:42:36.015490 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 12:42:36.017550 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 12:42:36.018863 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 12:42:36.018923 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 12:42:36.021024 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 12:42:36.021072 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 12:42:36.025346 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 12:42:36.025393 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 12:42:36.026280 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 12:42:36.031269 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 12:42:36.042132 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 12:42:36.042278 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 12:42:36.046298 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 12:42:36.046479 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 12:42:36.048068 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 12:42:36.048132 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 12:42:36.049338 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 12:42:36.049391 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 12:42:36.050541 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 12:42:36.050591 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 12:42:36.052357 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 12:42:36.052404 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 12:42:36.053862 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 12:42:36.053916 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 12:42:36.056365 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 12:42:36.057690 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 15 12:42:36.057762 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 15 12:42:36.061014 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 12:42:36.061067 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 12:42:36.064934 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 12:42:36.064987 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 12:42:36.071464 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 12:42:36.071580 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 12:42:36.073088 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 12:42:36.074866 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 12:42:36.109390 systemd[1]: Switching root.
May 15 12:42:36.144666 systemd-journald[206]: Journal stopped
May 15 12:42:37.328975 systemd-journald[206]: Received SIGTERM from PID 1 (systemd).
May 15 12:42:37.329023 kernel: SELinux: policy capability network_peer_controls=1
May 15 12:42:37.329041 kernel: SELinux: policy capability open_perms=1
May 15 12:42:37.329061 kernel: SELinux: policy capability extended_socket_class=1
May 15 12:42:37.329075 kernel: SELinux: policy capability always_check_network=0
May 15 12:42:37.329090 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 12:42:37.329105 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 12:42:37.329121 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 12:42:37.329137 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 12:42:37.329151 kernel: SELinux: policy capability userspace_initial_context=0
May 15 12:42:37.329170 kernel: audit: type=1403 audit(1747312956.288:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 12:42:37.329188 systemd[1]: Successfully loaded SELinux policy in 60.209ms.
May 15 12:42:37.329206 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.781ms.
May 15 12:42:37.329225 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 12:42:37.329243 systemd[1]: Detected virtualization kvm.
May 15 12:42:37.329263 systemd[1]: Detected architecture x86-64.
May 15 12:42:37.329279 systemd[1]: Detected first boot.
May 15 12:42:37.329297 systemd[1]: Initializing machine ID from random generator.
May 15 12:42:37.329312 kernel: Guest personality initialized and is inactive
May 15 12:42:37.329327 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 15 12:42:37.329343 kernel: Initialized host personality
May 15 12:42:37.329357 kernel: NET: Registered PF_VSOCK protocol family
May 15 12:42:37.329378 zram_generator::config[1121]: No configuration found.
May 15 12:42:37.329396 systemd[1]: Populated /etc with preset unit settings.
May 15 12:42:37.329413 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 15 12:42:37.329430 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 12:42:37.329447 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 12:42:37.329464 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 12:42:37.329674 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 12:42:37.329694 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 12:42:37.329705 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 12:42:37.329714 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 12:42:37.329724 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 12:42:37.329734 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 12:42:37.329744 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 12:42:37.329753 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 12:42:37.329765 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 12:42:37.329775 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 12:42:37.329785 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 12:42:37.329794 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 12:42:37.329807 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 12:42:37.329817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 12:42:37.329827 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 15 12:42:37.331888 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 12:42:37.331916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 12:42:37.331934 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 12:42:37.331951 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 12:42:37.331968 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 12:42:37.331985 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 12:42:37.332002 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 12:42:37.332020 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 12:42:37.332039 systemd[1]: Reached target slices.target - Slice Units.
May 15 12:42:37.332060 systemd[1]: Reached target swap.target - Swaps.
May 15 12:42:37.332078 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 12:42:37.332095 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 12:42:37.332112 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 15 12:42:37.332130 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 12:42:37.332152 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 12:42:37.332171 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 12:42:37.332188 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 12:42:37.332205 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 12:42:37.332224 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 12:42:37.332241 systemd[1]: Mounting media.mount - External Media Directory...
May 15 12:42:37.332259 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 12:42:37.332276 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 12:42:37.332298 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 12:42:37.332315 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 12:42:37.332333 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 12:42:37.332350 systemd[1]: Reached target machines.target - Containers.
May 15 12:42:37.332367 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 12:42:37.332384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 12:42:37.332402 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 12:42:37.332419 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 12:42:37.332439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 12:42:37.332457 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 12:42:37.332474 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 12:42:37.332491 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 12:42:37.332509 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 12:42:37.332528 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 12:42:37.332545 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 12:42:37.332562 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 12:42:37.332580 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 12:42:37.332603 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 12:42:37.332622 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 12:42:37.332639 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 12:42:37.332656 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 12:42:37.332674 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 12:42:37.332691 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 12:42:37.332707 kernel: loop: module loaded
May 15 12:42:37.332724 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 15 12:42:37.332745 kernel: ACPI: bus type drm_connector registered
May 15 12:42:37.332762 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 12:42:37.332779 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 12:42:37.332796 systemd[1]: Stopped verity-setup.service.
May 15 12:42:37.332814 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 12:42:37.332831 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 12:42:37.334919 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 12:42:37.334939 kernel: fuse: init (API version 7.41)
May 15 12:42:37.334959 systemd[1]: Mounted media.mount - External Media Directory.
May 15 12:42:37.334975 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 12:42:37.334991 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 12:42:37.335006 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 12:42:37.335022 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 12:42:37.335037 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 12:42:37.335052 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 12:42:37.335067 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 12:42:37.335082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 12:42:37.335101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 12:42:37.335117 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 12:42:37.335132 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 12:42:37.335146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 12:42:37.335161 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 12:42:37.335176 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 12:42:37.335219 systemd-journald[1205]: Collecting audit messages is disabled.
May 15 12:42:37.335258 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 12:42:37.335277 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 12:42:37.335293 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 12:42:37.335311 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 12:42:37.335336 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 12:42:37.335347 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 12:42:37.335358 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 15 12:42:37.335371 systemd-journald[1205]: Journal started
May 15 12:42:37.335396 systemd-journald[1205]: Runtime Journal (/run/log/journal/e6cbf36cc4154810a6d0d4c09bde7087) is 8M, max 78.5M, 70.5M free.
May 15 12:42:36.911702 systemd[1]: Queued start job for default target multi-user.target.
May 15 12:42:36.926297 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 15 12:42:36.926743 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 12:42:37.340990 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 12:42:37.357505 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 12:42:37.363116 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 12:42:37.365075 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 12:42:37.367587 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 12:42:37.367632 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 12:42:37.369571 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 15 12:42:37.381013 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 12:42:37.384829 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 12:42:37.388081 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 12:42:37.392147 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 12:42:37.393928 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 12:42:37.395992 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 12:42:37.409890 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 12:42:37.419605 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 12:42:37.422274 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 12:42:37.430870 systemd-journald[1205]: Time spent on flushing to /var/log/journal/e6cbf36cc4154810a6d0d4c09bde7087 is 113.085ms for 996 entries.
May 15 12:42:37.430870 systemd-journald[1205]: System Journal (/var/log/journal/e6cbf36cc4154810a6d0d4c09bde7087) is 8M, max 195.6M, 187.6M free.
May 15 12:42:37.612819 systemd-journald[1205]: Received client request to flush runtime journal.
May 15 12:42:37.612898 kernel: loop0: detected capacity change from 0 to 8
May 15 12:42:37.612928 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 12:42:37.612945 kernel: loop1: detected capacity change from 0 to 218376
May 15 12:42:37.443110 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 12:42:37.447081 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 12:42:37.453308 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 12:42:37.515252 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 12:42:37.516245 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 12:42:37.531001 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 15 12:42:37.540253 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 12:42:37.567171 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 12:42:37.600308 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 15 12:42:37.615386 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 12:42:37.623407 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 12:42:37.636731 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 12:42:37.669619 kernel: loop2: detected capacity change from 0 to 113872
May 15 12:42:37.701631 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
May 15 12:42:37.701648 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
May 15 12:42:37.709951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 12:42:37.717865 kernel: loop3: detected capacity change from 0 to 146240
May 15 12:42:37.751880 kernel: loop4: detected capacity change from 0 to 8
May 15 12:42:37.759867 kernel: loop5: detected capacity change from 0 to 218376
May 15 12:42:37.790320 kernel: loop6: detected capacity change from 0 to 113872
May 15 12:42:37.814870 kernel: loop7: detected capacity change from 0 to 146240
May 15 12:42:37.864727 (sd-merge)[1269]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
May 15 12:42:37.865326 (sd-merge)[1269]: Merged extensions into '/usr'.
May 15 12:42:37.875544 systemd[1]: Reload requested from client PID 1246 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 12:42:37.875697 systemd[1]: Reloading...
May 15 12:42:38.021875 zram_generator::config[1306]: No configuration found.
May 15 12:42:38.133261 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 12:42:38.197985 ldconfig[1241]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 12:42:38.224375 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 12:42:38.224930 systemd[1]: Reloading finished in 348 ms.
May 15 12:42:38.253802 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 12:42:38.254942 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 12:42:38.265963 systemd[1]: Starting ensure-sysext.service...
May 15 12:42:38.267720 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 12:42:38.315014 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)...
May 15 12:42:38.315034 systemd[1]: Reloading...
May 15 12:42:38.345743 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 15 12:42:38.346354 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 15 12:42:38.347064 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 12:42:38.347382 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 12:42:38.348460 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 12:42:38.349296 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
May 15 12:42:38.349485 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
May 15 12:42:38.363260 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot.
May 15 12:42:38.363876 systemd-tmpfiles[1339]: Skipping /boot
May 15 12:42:38.397544 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot.
May 15 12:42:38.400040 systemd-tmpfiles[1339]: Skipping /boot
May 15 12:42:38.440860 zram_generator::config[1372]: No configuration found.
May 15 12:42:38.547255 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 12:42:38.647860 systemd[1]: Reloading finished in 332 ms.
May 15 12:42:38.670392 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 12:42:38.681242 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 12:42:38.690717 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 12:42:38.695013 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 12:42:38.700063 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 12:42:38.706236 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 12:42:38.708244 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 12:42:38.716075 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 12:42:38.720917 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 12:42:38.721082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 12:42:38.723394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 12:42:38.731083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 12:42:38.742517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 12:42:38.743181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 12:42:38.743282 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 12:42:38.743375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 12:42:38.745343 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 12:42:38.746730 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 12:42:38.765138 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 12:42:38.768610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 12:42:38.768824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 12:42:38.769931 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 12:42:38.770125 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 12:42:38.773806 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 12:42:38.774064 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 12:42:38.776649 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 12:42:38.777292 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 12:42:38.777379 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 12:42:38.777463 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 12:42:38.777537 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 12:42:38.779176 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 12:42:38.786170 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 12:42:38.788341 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 12:42:38.800518 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 12:42:38.800746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 12:42:38.810544 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 12:42:38.814640 systemd-udevd[1415]: Using default interface naming scheme 'v255'.
May 15 12:42:38.819085 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 12:42:38.825926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 12:42:38.826933 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 12:42:38.827042 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 12:42:38.827168 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 12:42:38.829401 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 12:42:38.830193 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 12:42:38.838569 systemd[1]: Finished ensure-sysext.service.
May 15 12:42:38.847128 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 12:42:38.849770 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 12:42:38.850509 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 12:42:38.854652 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 12:42:38.868411 augenrules[1455]: No rules
May 15 12:42:38.869027 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 12:42:38.870974 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 12:42:38.871529 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 12:42:38.872983 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 12:42:38.874262 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 12:42:38.875293 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 12:42:38.879057 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 12:42:38.879294 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 12:42:38.881242 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 12:42:38.881314 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 12:42:38.889385 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 12:42:38.894166 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 12:42:38.902775 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 12:42:39.123871 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 15 12:42:39.171864 kernel: mousedev: PS/2 mouse device common for all mice
May 15 12:42:39.197451 systemd-networkd[1466]: lo: Link UP
May 15 12:42:39.197735 systemd-networkd[1466]: lo: Gained carrier
May 15 12:42:39.202509 systemd-networkd[1466]: Enumeration completed
May 15 12:42:39.203899 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 12:42:39.205084 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 12:42:39.207300 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 12:42:39.207948 systemd-networkd[1466]: eth0: Link UP
May 15 12:42:39.208291 systemd-networkd[1466]: eth0: Gained carrier
May 15 12:42:39.208305 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 12:42:39.208727 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 15 12:42:39.212977 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 12:42:39.223045 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 15 12:42:39.225201 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 15 12:42:39.263417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 15 12:42:39.265353 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 15 12:42:39.272073 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 12:42:39.272871 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 15 12:42:39.309041 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 12:42:39.311177 systemd[1]: Reached target time-set.target - System Time Set.
May 15 12:42:39.322862 kernel: ACPI: button: Power Button [PWRF]
May 15 12:42:39.346743 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 12:42:39.356124 systemd-resolved[1414]: Positive Trust Anchors:
May 15 12:42:39.356143 systemd-resolved[1414]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 12:42:39.356171 systemd-resolved[1414]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 12:42:39.360958 systemd-resolved[1414]: Defaulting to hostname 'linux'.
May 15 12:42:39.373127 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 12:42:39.374991 systemd[1]: Reached target network.target - Network.
May 15 12:42:39.375502 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 12:42:39.376106 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 12:42:39.376921 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 12:42:39.378461 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 12:42:39.379055 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 15 12:42:39.380865 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 12:42:39.381503 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 12:42:39.382511 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 12:42:39.383887 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 12:42:39.383927 systemd[1]: Reached target paths.target - Path Units.
May 15 12:42:39.384432 systemd[1]: Reached target timers.target - Timer Units.
May 15 12:42:39.387308 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 12:42:39.391427 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 12:42:39.404318 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 15 12:42:39.407118 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 15 12:42:39.408442 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 15 12:42:39.417787 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 12:42:39.420994 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 15 12:42:39.422228 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 12:42:39.429252 systemd[1]: Reached target sockets.target - Socket Units.
May 15 12:42:39.430542 systemd[1]: Reached target basic.target - Basic System.
May 15 12:42:39.431823 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 15 12:42:39.432145 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 15 12:42:39.434477 systemd[1]: Starting containerd.service - containerd container runtime...
May 15 12:42:39.437238 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 15 12:42:39.440145 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 15 12:42:39.445105 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 12:42:39.448379 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 15 12:42:39.450312 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 15 12:42:39.455031 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 12:42:39.458200 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 15 12:42:39.465220 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 15 12:42:39.469115 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 15 12:42:39.484859 jq[1538]: false
May 15 12:42:39.480789 oslogin_cache_refresh[1540]: Refreshing passwd entry cache
May 15 12:42:39.479211 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 15 12:42:39.485316 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing passwd entry cache
May 15 12:42:39.486634 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting users, quitting
May 15 12:42:39.486634 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 15 12:42:39.486625 oslogin_cache_refresh[1540]: Failure getting users, quitting
May 15 12:42:39.486827 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing group entry cache
May 15 12:42:39.486640 oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 15 12:42:39.486678 oslogin_cache_refresh[1540]: Refreshing group entry cache
May 15 12:42:39.487175 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 15 12:42:39.489850 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting groups, quitting
May 15 12:42:39.489850 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 15 12:42:39.488258 oslogin_cache_refresh[1540]: Failure getting groups, quitting
May 15 12:42:39.488268 oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 15 12:42:39.504146 systemd[1]: Starting systemd-logind.service - User Login Management...
May 15 12:42:39.513257 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 12:42:39.514468 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 12:42:39.526424 systemd[1]: Starting update-engine.service - Update Engine...
May 15 12:42:39.535226 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 15 12:42:39.541873 kernel: EDAC MC: Ver: 3.0.0
May 15 12:42:39.545653 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 12:42:39.548505 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 12:42:39.551119 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 15 12:42:39.551987 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 15 12:42:39.552376 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 15 12:42:39.555354 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 12:42:39.555597 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 15 12:42:39.577007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 12:42:39.593634 jq[1549]: true
May 15 12:42:39.599275 update_engine[1548]: I20250515 12:42:39.599205  1548 main.cc:92] Flatcar Update Engine starting
May 15 12:42:39.633559 extend-filesystems[1539]: Found loop4
May 15 12:42:39.634704 extend-filesystems[1539]: Found loop5
May 15 12:42:39.635276 extend-filesystems[1539]: Found loop6
May 15 12:42:39.636417 extend-filesystems[1539]: Found loop7
May 15 12:42:39.636417 extend-filesystems[1539]: Found sda
May 15 12:42:39.636417 extend-filesystems[1539]: Found sda1
May 15 12:42:39.636417 extend-filesystems[1539]: Found sda2
May 15 12:42:39.636417 extend-filesystems[1539]: Found sda3
May 15 12:42:39.636417 extend-filesystems[1539]: Found usr
May 15 12:42:39.636417 extend-filesystems[1539]: Found sda4
May 15 12:42:39.636417 extend-filesystems[1539]: Found sda6
May 15 12:42:39.636417 extend-filesystems[1539]: Found sda7
May 15 12:42:39.636417 extend-filesystems[1539]: Found sda9
May 15 12:42:39.636417 extend-filesystems[1539]: Checking size of /dev/sda9
May 15 12:42:39.661603 jq[1566]: true
May 15 12:42:39.671396 extend-filesystems[1539]: Resized partition /dev/sda9
May 15 12:42:39.674901 extend-filesystems[1582]: resize2fs 1.47.2 (1-Jan-2025)
May 15 12:42:39.676897 tar[1553]: linux-amd64/LICENSE
May 15 12:42:39.676897 tar[1553]: linux-amd64/helm
May 15 12:42:39.679270 (ntainerd)[1571]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 15 12:42:39.697217 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
May 15 12:42:39.694065 systemd-networkd[1466]: eth0: DHCPv4 address 172.234.214.203/24, gateway 172.234.214.1 acquired from 23.192.121.7
May 15 12:42:39.703965 systemd-timesyncd[1457]: Network configuration changed, trying to establish connection.
May 15 12:42:39.715198 dbus-daemon[1536]: [system] SELinux support is enabled
May 15 12:42:39.715390 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 15 12:42:39.718674 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 12:42:39.718725 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 15 12:42:39.721047 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 12:42:39.721078 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 15 12:42:39.739574 coreos-metadata[1535]: May 15 12:42:39.738 INFO Putting http://169.254.169.254/v1/token: Attempt #1
May 15 12:42:39.744267 dbus-daemon[1536]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1466 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 15 12:42:39.756594 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 15 12:42:39.767089 systemd[1]: Started update-engine.service - Update Engine.
May 15 12:42:39.777496 update_engine[1548]: I20250515 12:42:39.777343  1548 update_check_scheduler.cc:74] Next update check in 4m32s
May 15 12:42:40.402290 systemd-resolved[1414]: Clock change detected. Flushing caches.
May 15 12:42:40.402565 systemd-timesyncd[1457]: Contacted time server 204.2.134.172:123 (0.flatcar.pool.ntp.org).
May 15 12:42:40.402783 systemd-timesyncd[1457]: Initial clock synchronization to Thu 2025-05-15 12:42:40.402232 UTC.
May 15 12:42:40.411256 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 15 12:42:40.420976 systemd[1]: motdgen.service: Deactivated successfully.
May 15 12:42:40.422523 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 15 12:42:40.497380 coreos-metadata[1535]: May 15 12:42:40.495 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
May 15 12:42:40.570827 bash[1604]: Updated "/home/core/.ssh/authorized_keys"
May 15 12:42:40.570805 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 15 12:42:40.587140 systemd[1]: Starting sshkeys.service...
May 15 12:42:40.672443 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 15 12:42:40.678114 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 15 12:42:40.691854 coreos-metadata[1535]: May 15 12:42:40.691 INFO Fetch successful
May 15 12:42:40.691854 coreos-metadata[1535]: May 15 12:42:40.691 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
May 15 12:42:40.775249 coreos-metadata[1617]: May 15 12:42:40.774 INFO Putting http://169.254.169.254/v1/token: Attempt #1
May 15 12:42:40.777672 locksmithd[1587]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 12:42:40.781058 containerd[1571]: time="2025-05-15T12:42:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 15 12:42:40.782779 containerd[1571]: time="2025-05-15T12:42:40.782524804Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 15 12:42:40.825576 containerd[1571]: time="2025-05-15T12:42:40.825524469Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.88µs"
May 15 12:42:40.825576 containerd[1571]: time="2025-05-15T12:42:40.825565579Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 15 12:42:40.825676 containerd[1571]:
time="2025-05-15T12:42:40.825585939Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 15 12:42:40.825827 containerd[1571]: time="2025-05-15T12:42:40.825768119Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 15 12:42:40.825827 containerd[1571]: time="2025-05-15T12:42:40.825791289Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 15 12:42:40.825827 containerd[1571]: time="2025-05-15T12:42:40.825816309Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 15 12:42:40.825922 containerd[1571]: time="2025-05-15T12:42:40.825882628Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 15 12:42:40.825922 containerd[1571]: time="2025-05-15T12:42:40.825895008Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 15 12:42:40.826985 containerd[1571]: time="2025-05-15T12:42:40.826950737Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 15 12:42:40.826985 containerd[1571]: time="2025-05-15T12:42:40.826978497Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 15 12:42:40.827050 containerd[1571]: time="2025-05-15T12:42:40.826989437Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 15 12:42:40.827050 containerd[1571]: time="2025-05-15T12:42:40.826996847Z" level=info msg="loading plugin"
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 15 12:42:40.827127 containerd[1571]: time="2025-05-15T12:42:40.827091537Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 15 12:42:40.827986 containerd[1571]: time="2025-05-15T12:42:40.827872736Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 15 12:42:40.827986 containerd[1571]: time="2025-05-15T12:42:40.827912445Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 15 12:42:40.827986 containerd[1571]: time="2025-05-15T12:42:40.827922495Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 15 12:42:40.828317 containerd[1571]: time="2025-05-15T12:42:40.828221925Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 15 12:42:40.840743 containerd[1571]: time="2025-05-15T12:42:40.839751428Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 15 12:42:40.840743 containerd[1571]: time="2025-05-15T12:42:40.839862728Z" level=info msg="metadata content store policy set" policy=shared
May 15 12:42:40.862114 kernel: EXT4-fs (sda9): resized filesystem to 20360187
May 15 12:42:40.862750 containerd[1571]: time="2025-05-15T12:42:40.862614303Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 15 12:42:40.866524 containerd[1571]: time="2025-05-15T12:42:40.866402808Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 15 12:42:40.866726 containerd[1571]: time="2025-05-15T12:42:40.866480078Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager
type=io.containerd.lease.v1
May 15 12:42:40.867580 containerd[1571]: time="2025-05-15T12:42:40.867234306Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 15 12:42:40.867580 containerd[1571]: time="2025-05-15T12:42:40.867258126Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 15 12:42:40.867580 containerd[1571]: time="2025-05-15T12:42:40.867269856Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 15 12:42:40.867580 containerd[1571]: time="2025-05-15T12:42:40.867320996Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 15 12:42:40.867580 containerd[1571]: time="2025-05-15T12:42:40.867336646Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.868558554Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.868584034Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.868594464Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.872240939Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.872440909Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.873305217Z" level=info msg="loading plugin"
id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.874243886Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.874257996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.874268136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.874278096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.874301186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.874311586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.874322406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.874332526Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 12:42:40.877217 containerd[1571]: time="2025-05-15T12:42:40.874346646Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 12:42:40.875465 systemd[1]: extend-filesystems.service: Deactivated successfully. 
May 15 12:42:40.877526 extend-filesystems[1582]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 15 12:42:40.877526 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 10 May 15 12:42:40.877526 extend-filesystems[1582]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. May 15 12:42:40.944247 containerd[1571]: time="2025-05-15T12:42:40.874407646Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 12:42:40.944247 containerd[1571]: time="2025-05-15T12:42:40.874421476Z" level=info msg="Start snapshots syncer" May 15 12:42:40.944247 containerd[1571]: time="2025-05-15T12:42:40.874441386Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 12:42:40.875712 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 12:42:40.944369 coreos-metadata[1617]: May 15 12:42:40.892 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 15 12:42:40.944419 extend-filesystems[1539]: Resized filesystem in /dev/sda9 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.874630945Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.874671575Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.879947847Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883284232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883345642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883357832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883367242Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883397722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883407802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883418582Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883451742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883496352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883514572Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883588142Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883611152Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883683092Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883703922Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883714352Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883722952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883738032Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883773842Z" level=info msg="runtime interface created" May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883779572Z" level=info msg="created NRI interface" May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883787122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883798712Z" level=info msg="Connect containerd service" May 15 12:42:40.946239 containerd[1571]: time="2025-05-15T12:42:40.883848362Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 12:42:40.946239 
containerd[1571]: time="2025-05-15T12:42:40.890573801Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 12:42:40.949829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:42:40.951257 coreos-metadata[1535]: May 15 12:42:40.950 INFO Fetch successful May 15 12:42:40.980004 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 12:42:40.980415 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 15 12:42:40.985690 dbus-daemon[1536]: [system] Successfully activated service 'org.freedesktop.hostname1' May 15 12:42:40.987594 dbus-daemon[1536]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1586 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 15 12:42:40.993080 systemd[1]: Starting polkit.service - Authorization Manager... May 15 12:42:41.001910 systemd-logind[1545]: Watching system buttons on /dev/input/event2 (Power Button) May 15 12:42:41.001928 systemd-logind[1545]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 12:42:41.007573 systemd-logind[1545]: New seat seat0. May 15 12:42:41.012034 systemd[1]: Started systemd-logind.service - User Login Management. May 15 12:42:41.028397 coreos-metadata[1617]: May 15 12:42:41.028 INFO Fetch successful May 15 12:42:41.095721 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 12:42:41.101149 update-ssh-keys[1651]: Updated "/home/core/.ssh/authorized_keys" May 15 12:42:41.103486 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 12:42:41.105437 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
May 15 12:42:41.110640 systemd[1]: Finished sshkeys.service. May 15 12:42:41.208106 containerd[1571]: time="2025-05-15T12:42:41.208064405Z" level=info msg="Start subscribing containerd event" May 15 12:42:41.208570 containerd[1571]: time="2025-05-15T12:42:41.208439045Z" level=info msg="Start recovering state" May 15 12:42:41.208708 containerd[1571]: time="2025-05-15T12:42:41.208675914Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 12:42:41.208840 containerd[1571]: time="2025-05-15T12:42:41.208746494Z" level=info msg="Start event monitor" May 15 12:42:41.208866 containerd[1571]: time="2025-05-15T12:42:41.208840914Z" level=info msg="Start cni network conf syncer for default" May 15 12:42:41.208866 containerd[1571]: time="2025-05-15T12:42:41.208849954Z" level=info msg="Start streaming server" May 15 12:42:41.208910 containerd[1571]: time="2025-05-15T12:42:41.208866194Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 12:42:41.208910 containerd[1571]: time="2025-05-15T12:42:41.208874224Z" level=info msg="runtime interface starting up..." May 15 12:42:41.208910 containerd[1571]: time="2025-05-15T12:42:41.208897874Z" level=info msg="starting plugins..." May 15 12:42:41.208961 containerd[1571]: time="2025-05-15T12:42:41.208913174Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 12:42:41.209602 containerd[1571]: time="2025-05-15T12:42:41.209562873Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 12:42:41.212600 systemd[1]: Started containerd.service - containerd container runtime. May 15 12:42:41.213409 containerd[1571]: time="2025-05-15T12:42:41.213381827Z" level=info msg="containerd successfully booted in 0.432832s" May 15 12:42:41.226325 systemd[1]: issuegen.service: Deactivated successfully. May 15 12:42:41.226974 systemd[1]: Finished issuegen.service - Generate /run/issue. 
May 15 12:42:41.232235 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 12:42:41.272627 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 12:42:41.275820 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 12:42:41.278481 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 12:42:41.280377 systemd[1]: Reached target getty.target - Login Prompts. May 15 12:42:41.288647 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 12:42:41.289600 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 12:42:41.291381 polkitd[1642]: Started polkitd version 126 May 15 12:42:41.295404 polkitd[1642]: Loading rules from directory /etc/polkit-1/rules.d May 15 12:42:41.295957 polkitd[1642]: Loading rules from directory /run/polkit-1/rules.d May 15 12:42:41.296044 polkitd[1642]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 15 12:42:41.296328 polkitd[1642]: Loading rules from directory /usr/local/share/polkit-1/rules.d May 15 12:42:41.296390 polkitd[1642]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 15 12:42:41.296475 polkitd[1642]: Loading rules from directory /usr/share/polkit-1/rules.d May 15 12:42:41.296986 polkitd[1642]: Finished loading, compiling and executing 2 rules May 15 12:42:41.297268 systemd[1]: Started polkit.service - Authorization Manager. May 15 12:42:41.299028 dbus-daemon[1536]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 15 12:42:41.299697 polkitd[1642]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 15 12:42:41.308571 systemd-resolved[1414]: System hostname changed to '172-234-214-203'. 
May 15 12:42:41.308877 systemd-hostnamed[1586]: Hostname set to <172-234-214-203> (transient) May 15 12:42:41.523386 tar[1553]: linux-amd64/README.md May 15 12:42:41.542492 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 12:42:41.800373 systemd-networkd[1466]: eth0: Gained IPv6LL May 15 12:42:41.806218 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 12:42:41.807506 systemd[1]: Reached target network-online.target - Network is Online. May 15 12:42:41.810519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:42:41.813383 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 12:42:41.841134 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 12:42:43.424103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:42:43.425437 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 12:42:43.426892 systemd[1]: Startup finished in 3.002s (kernel) + 7.634s (initrd) + 6.620s (userspace) = 17.257s. May 15 12:42:43.463514 (kubelet)[1710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:42:43.544174 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 12:42:43.546506 systemd[1]: Started sshd@0-172.234.214.203:22-139.178.89.65:38704.service - OpenSSH per-connection server daemon (139.178.89.65:38704). May 15 12:42:43.917569 sshd[1716]: Accepted publickey for core from 139.178.89.65 port 38704 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:43.919582 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:43.937567 systemd-logind[1545]: New session 1 of user core. May 15 12:42:43.939072 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 15 12:42:43.941363 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 12:42:43.981774 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 12:42:43.985962 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 12:42:43.999945 (systemd)[1725]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 12:42:44.004255 systemd-logind[1545]: New session c1 of user core. May 15 12:42:44.161842 systemd[1725]: Queued start job for default target default.target. May 15 12:42:44.167534 systemd[1725]: Created slice app.slice - User Application Slice. May 15 12:42:44.167561 systemd[1725]: Reached target paths.target - Paths. May 15 12:42:44.168108 systemd[1725]: Reached target timers.target - Timers. May 15 12:42:44.170782 systemd[1725]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 12:42:44.198360 systemd[1725]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 12:42:44.201448 systemd[1725]: Reached target sockets.target - Sockets. May 15 12:42:44.202494 systemd[1725]: Reached target basic.target - Basic System. May 15 12:42:44.202817 systemd[1725]: Reached target default.target - Main User Target. May 15 12:42:44.202924 systemd[1725]: Startup finished in 183ms. May 15 12:42:44.203456 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 12:42:44.213532 systemd[1]: Started session-1.scope - Session 1 of User core. 
May 15 12:42:44.228148 kubelet[1710]: E0515 12:42:44.228113 1710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:42:44.231603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:42:44.231788 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:42:44.232175 systemd[1]: kubelet.service: Consumed 1.778s CPU time, 252.8M memory peak. May 15 12:42:44.479372 systemd[1]: Started sshd@1-172.234.214.203:22-139.178.89.65:38712.service - OpenSSH per-connection server daemon (139.178.89.65:38712). May 15 12:42:44.819343 sshd[1737]: Accepted publickey for core from 139.178.89.65 port 38712 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:44.820964 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:44.826692 systemd-logind[1545]: New session 2 of user core. May 15 12:42:44.837371 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 12:42:45.065708 sshd[1739]: Connection closed by 139.178.89.65 port 38712 May 15 12:42:45.066425 sshd-session[1737]: pam_unix(sshd:session): session closed for user core May 15 12:42:45.071317 systemd[1]: sshd@1-172.234.214.203:22-139.178.89.65:38712.service: Deactivated successfully. May 15 12:42:45.073840 systemd[1]: session-2.scope: Deactivated successfully. May 15 12:42:45.074656 systemd-logind[1545]: Session 2 logged out. Waiting for processes to exit. May 15 12:42:45.076334 systemd-logind[1545]: Removed session 2. May 15 12:42:45.126143 systemd[1]: Started sshd@2-172.234.214.203:22-139.178.89.65:38718.service - OpenSSH per-connection server daemon (139.178.89.65:38718). 
May 15 12:42:45.462880 sshd[1745]: Accepted publickey for core from 139.178.89.65 port 38718 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:45.464776 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:45.471238 systemd-logind[1545]: New session 3 of user core. May 15 12:42:45.480345 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 12:42:45.707830 sshd[1747]: Connection closed by 139.178.89.65 port 38718 May 15 12:42:45.708481 sshd-session[1745]: pam_unix(sshd:session): session closed for user core May 15 12:42:45.712905 systemd-logind[1545]: Session 3 logged out. Waiting for processes to exit. May 15 12:42:45.713695 systemd[1]: sshd@2-172.234.214.203:22-139.178.89.65:38718.service: Deactivated successfully. May 15 12:42:45.715844 systemd[1]: session-3.scope: Deactivated successfully. May 15 12:42:45.717568 systemd-logind[1545]: Removed session 3. May 15 12:42:45.768279 systemd[1]: Started sshd@3-172.234.214.203:22-139.178.89.65:38724.service - OpenSSH per-connection server daemon (139.178.89.65:38724). May 15 12:42:46.115337 sshd[1753]: Accepted publickey for core from 139.178.89.65 port 38724 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:46.116892 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:46.122469 systemd-logind[1545]: New session 4 of user core. May 15 12:42:46.129362 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 12:42:46.357453 sshd[1755]: Connection closed by 139.178.89.65 port 38724 May 15 12:42:46.358433 sshd-session[1753]: pam_unix(sshd:session): session closed for user core May 15 12:42:46.362911 systemd[1]: sshd@3-172.234.214.203:22-139.178.89.65:38724.service: Deactivated successfully. May 15 12:42:46.364900 systemd[1]: session-4.scope: Deactivated successfully. May 15 12:42:46.365758 systemd-logind[1545]: Session 4 logged out. 
Waiting for processes to exit. May 15 12:42:46.367344 systemd-logind[1545]: Removed session 4. May 15 12:42:46.422635 systemd[1]: Started sshd@4-172.234.214.203:22-139.178.89.65:38736.service - OpenSSH per-connection server daemon (139.178.89.65:38736). May 15 12:42:46.776729 sshd[1761]: Accepted publickey for core from 139.178.89.65 port 38736 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:46.778568 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:46.784702 systemd-logind[1545]: New session 5 of user core. May 15 12:42:46.791324 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 12:42:46.989240 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 12:42:46.989608 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:42:47.007771 sudo[1764]: pam_unix(sudo:session): session closed for user root May 15 12:42:47.060961 sshd[1763]: Connection closed by 139.178.89.65 port 38736 May 15 12:42:47.061630 sshd-session[1761]: pam_unix(sshd:session): session closed for user core May 15 12:42:47.066262 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit. May 15 12:42:47.067064 systemd[1]: sshd@4-172.234.214.203:22-139.178.89.65:38736.service: Deactivated successfully. May 15 12:42:47.068938 systemd[1]: session-5.scope: Deactivated successfully. May 15 12:42:47.070760 systemd-logind[1545]: Removed session 5. May 15 12:42:47.123664 systemd[1]: Started sshd@5-172.234.214.203:22-139.178.89.65:46382.service - OpenSSH per-connection server daemon (139.178.89.65:46382). 
May 15 12:42:47.481846 sshd[1770]: Accepted publickey for core from 139.178.89.65 port 46382 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:47.483402 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:47.489221 systemd-logind[1545]: New session 6 of user core. May 15 12:42:47.498335 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 12:42:47.683440 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 12:42:47.683745 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:42:47.688732 sudo[1774]: pam_unix(sudo:session): session closed for user root May 15 12:42:47.694586 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 12:42:47.694875 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:42:47.704772 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:42:47.754683 augenrules[1796]: No rules May 15 12:42:47.756344 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:42:47.756617 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:42:47.757585 sudo[1773]: pam_unix(sudo:session): session closed for user root May 15 12:42:47.809979 sshd[1772]: Connection closed by 139.178.89.65 port 46382 May 15 12:42:47.810510 sshd-session[1770]: pam_unix(sshd:session): session closed for user core May 15 12:42:47.814390 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit. May 15 12:42:47.814671 systemd[1]: sshd@5-172.234.214.203:22-139.178.89.65:46382.service: Deactivated successfully. May 15 12:42:47.816851 systemd[1]: session-6.scope: Deactivated successfully. May 15 12:42:47.818539 systemd-logind[1545]: Removed session 6. 
May 15 12:42:47.866910 systemd[1]: Started sshd@6-172.234.214.203:22-139.178.89.65:46384.service - OpenSSH per-connection server daemon (139.178.89.65:46384). May 15 12:42:48.195039 sshd[1805]: Accepted publickey for core from 139.178.89.65 port 46384 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:42:48.196569 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:42:48.201667 systemd-logind[1545]: New session 7 of user core. May 15 12:42:48.210335 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 12:42:48.389434 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 12:42:48.389733 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:42:49.448864 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1213507962 wd_nsec: 1213507320 May 15 12:42:50.673911 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 12:42:50.691552 (dockerd)[1826]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 12:42:51.493689 dockerd[1826]: time="2025-05-15T12:42:51.493629515Z" level=info msg="Starting up" May 15 12:42:51.497987 dockerd[1826]: time="2025-05-15T12:42:51.497939959Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 12:42:51.533569 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport936127801-merged.mount: Deactivated successfully. May 15 12:42:51.543760 systemd[1]: var-lib-docker-metacopy\x2dcheck3482898828-merged.mount: Deactivated successfully. May 15 12:42:51.574898 dockerd[1826]: time="2025-05-15T12:42:51.574698874Z" level=info msg="Loading containers: start." 
May 15 12:42:51.585164 kernel: Initializing XFRM netlink socket May 15 12:42:51.828458 systemd-networkd[1466]: docker0: Link UP May 15 12:42:51.831593 dockerd[1826]: time="2025-05-15T12:42:51.831563679Z" level=info msg="Loading containers: done." May 15 12:42:51.850736 dockerd[1826]: time="2025-05-15T12:42:51.850701870Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 12:42:51.850881 dockerd[1826]: time="2025-05-15T12:42:51.850781510Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 15 12:42:51.850923 dockerd[1826]: time="2025-05-15T12:42:51.850904130Z" level=info msg="Initializing buildkit" May 15 12:42:51.871219 dockerd[1826]: time="2025-05-15T12:42:51.871131229Z" level=info msg="Completed buildkit initialization" May 15 12:42:51.874663 dockerd[1826]: time="2025-05-15T12:42:51.874634384Z" level=info msg="Daemon has completed initialization" May 15 12:42:51.874891 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 12:42:51.875560 dockerd[1826]: time="2025-05-15T12:42:51.874766114Z" level=info msg="API listen on /run/docker.sock" May 15 12:42:52.549289 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2454682389-merged.mount: Deactivated successfully. May 15 12:42:52.927520 containerd[1571]: time="2025-05-15T12:42:52.927399585Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 12:42:53.645997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount294844560.mount: Deactivated successfully. May 15 12:42:54.248129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 12:42:54.250786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 15 12:42:54.693779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:42:54.704835 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:42:54.798350 kubelet[2089]: E0515 12:42:54.798308 2089 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:42:54.806321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:42:54.806593 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:42:54.807425 systemd[1]: kubelet.service: Consumed 477ms CPU time, 104.2M memory peak. May 15 12:42:55.699209 containerd[1571]: time="2025-05-15T12:42:55.698162548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:42:55.699209 containerd[1571]: time="2025-05-15T12:42:55.699144057Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:42:55.699628 containerd[1571]: time="2025-05-15T12:42:55.699250357Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 15 12:42:55.703570 containerd[1571]: time="2025-05-15T12:42:55.703542840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:42:55.704356 containerd[1571]: time="2025-05-15T12:42:55.704216149Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.776747134s" May 15 12:42:55.704485 containerd[1571]: time="2025-05-15T12:42:55.704461659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 15 12:42:55.707670 containerd[1571]: time="2025-05-15T12:42:55.707621954Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 12:42:57.901469 containerd[1571]: time="2025-05-15T12:42:57.901426003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:42:57.902414 containerd[1571]: time="2025-05-15T12:42:57.902391192Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 15 12:42:57.904383 containerd[1571]: time="2025-05-15T12:42:57.902879771Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:42:57.905173 containerd[1571]: time="2025-05-15T12:42:57.905146597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:42:57.907496 containerd[1571]: time="2025-05-15T12:42:57.907420944Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.19971762s" May 15 12:42:57.907496 containerd[1571]: time="2025-05-15T12:42:57.907466414Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 15 12:42:57.909271 containerd[1571]: time="2025-05-15T12:42:57.909213511Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 12:42:59.722591 containerd[1571]: time="2025-05-15T12:42:59.722525621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:42:59.723875 containerd[1571]: time="2025-05-15T12:42:59.723840859Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 15 12:42:59.724685 containerd[1571]: time="2025-05-15T12:42:59.724644388Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:42:59.727413 containerd[1571]: time="2025-05-15T12:42:59.727350084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:42:59.728324 containerd[1571]: time="2025-05-15T12:42:59.728289042Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size 
\"20658329\" in 1.819051511s" May 15 12:42:59.728373 containerd[1571]: time="2025-05-15T12:42:59.728325982Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 15 12:42:59.729759 containerd[1571]: time="2025-05-15T12:42:59.729006651Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 12:43:01.142710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2650470344.mount: Deactivated successfully. May 15 12:43:01.956067 containerd[1571]: time="2025-05-15T12:43:01.955458731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:43:01.956067 containerd[1571]: time="2025-05-15T12:43:01.956039511Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 15 12:43:01.956593 containerd[1571]: time="2025-05-15T12:43:01.956569900Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:43:01.957992 containerd[1571]: time="2025-05-15T12:43:01.957973198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:43:01.958380 containerd[1571]: time="2025-05-15T12:43:01.958337237Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.229300756s" May 15 12:43:01.958415 containerd[1571]: 
time="2025-05-15T12:43:01.958381357Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 15 12:43:01.958879 containerd[1571]: time="2025-05-15T12:43:01.958853586Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 12:43:02.626923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164515610.mount: Deactivated successfully. May 15 12:43:03.869567 containerd[1571]: time="2025-05-15T12:43:03.869507780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:43:03.870856 containerd[1571]: time="2025-05-15T12:43:03.870791018Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 15 12:43:03.872457 containerd[1571]: time="2025-05-15T12:43:03.872234846Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:43:03.878095 containerd[1571]: time="2025-05-15T12:43:03.878054977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:43:03.878857 containerd[1571]: time="2025-05-15T12:43:03.878831496Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.91994932s" May 15 12:43:03.878941 containerd[1571]: time="2025-05-15T12:43:03.878925336Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 15 12:43:03.880085 containerd[1571]: time="2025-05-15T12:43:03.880048194Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 12:43:04.489029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330254039.mount: Deactivated successfully. May 15 12:43:04.495351 containerd[1571]: time="2025-05-15T12:43:04.494677092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:43:04.495351 containerd[1571]: time="2025-05-15T12:43:04.495328231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 12:43:04.495782 containerd[1571]: time="2025-05-15T12:43:04.495747091Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:43:04.497548 containerd[1571]: time="2025-05-15T12:43:04.497529118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:43:04.498346 containerd[1571]: time="2025-05-15T12:43:04.498318477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 618.229273ms" May 15 12:43:04.498379 containerd[1571]: time="2025-05-15T12:43:04.498349107Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 12:43:04.499382 containerd[1571]: time="2025-05-15T12:43:04.499361255Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 12:43:04.998153 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 12:43:05.001567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:43:05.152110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160047033.mount: Deactivated successfully. May 15 12:43:05.203336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:43:05.215708 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:43:05.425305 kubelet[2189]: E0515 12:43:05.424416 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:43:05.429921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:43:05.430139 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:43:05.432775 systemd[1]: kubelet.service: Consumed 334ms CPU time, 103.7M memory peak. 
May 15 12:43:07.006228 containerd[1571]: time="2025-05-15T12:43:07.006156285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:43:07.007155 containerd[1571]: time="2025-05-15T12:43:07.007129363Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 15 12:43:07.007931 containerd[1571]: time="2025-05-15T12:43:07.007866312Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:43:07.010998 containerd[1571]: time="2025-05-15T12:43:07.010962627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:43:07.012295 containerd[1571]: time="2025-05-15T12:43:07.012093816Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.512705791s" May 15 12:43:07.012295 containerd[1571]: time="2025-05-15T12:43:07.012136356Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 15 12:43:09.334276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:43:09.334417 systemd[1]: kubelet.service: Consumed 334ms CPU time, 103.7M memory peak. May 15 12:43:09.336869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:43:09.366836 systemd[1]: Reload requested from client PID 2264 ('systemctl') (unit session-7.scope)... 
May 15 12:43:09.366962 systemd[1]: Reloading... May 15 12:43:09.671212 zram_generator::config[2308]: No configuration found. May 15 12:43:09.769958 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:43:09.873118 systemd[1]: Reloading finished in 505 ms. May 15 12:43:09.934815 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 12:43:09.934920 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 12:43:09.935223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:43:09.935289 systemd[1]: kubelet.service: Consumed 308ms CPU time, 91.8M memory peak. May 15 12:43:09.937042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:43:10.120983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:43:10.129439 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 12:43:10.199746 kubelet[2363]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:43:10.200650 kubelet[2363]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 12:43:10.200650 kubelet[2363]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 12:43:10.201307 kubelet[2363]: I0515 12:43:10.201046 2363 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 12:43:10.358429 kubelet[2363]: I0515 12:43:10.358376 2363 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 12:43:10.358429 kubelet[2363]: I0515 12:43:10.358414 2363 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 12:43:10.358722 kubelet[2363]: I0515 12:43:10.358695 2363 server.go:954] "Client rotation is on, will bootstrap in background" May 15 12:43:10.407746 kubelet[2363]: E0515 12:43:10.407626 2363 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.234.214.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.214.203:6443: connect: connection refused" logger="UnhandledError" May 15 12:43:10.409255 kubelet[2363]: I0515 12:43:10.409107 2363 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 12:43:10.422684 kubelet[2363]: I0515 12:43:10.422638 2363 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 12:43:10.427513 kubelet[2363]: I0515 12:43:10.426484 2363 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 12:43:10.427513 kubelet[2363]: I0515 12:43:10.426758 2363 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 12:43:10.427513 kubelet[2363]: I0515 12:43:10.426784 2363 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-214-203","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 12:43:10.427513 kubelet[2363]: I0515 12:43:10.426963 2363 topology_manager.go:138] "Creating topology manager with none 
policy" May 15 12:43:10.427701 kubelet[2363]: I0515 12:43:10.426973 2363 container_manager_linux.go:304] "Creating device plugin manager" May 15 12:43:10.427701 kubelet[2363]: I0515 12:43:10.427130 2363 state_mem.go:36] "Initialized new in-memory state store" May 15 12:43:10.430911 kubelet[2363]: I0515 12:43:10.430890 2363 kubelet.go:446] "Attempting to sync node with API server" May 15 12:43:10.430911 kubelet[2363]: I0515 12:43:10.430909 2363 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 12:43:10.431170 kubelet[2363]: I0515 12:43:10.430935 2363 kubelet.go:352] "Adding apiserver pod source" May 15 12:43:10.431170 kubelet[2363]: I0515 12:43:10.430957 2363 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 12:43:10.437220 kubelet[2363]: W0515 12:43:10.436986 2363 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.214.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.214.203:6443: connect: connection refused May 15 12:43:10.437220 kubelet[2363]: E0515 12:43:10.437092 2363 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.214.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.214.203:6443: connect: connection refused" logger="UnhandledError" May 15 12:43:10.437293 kubelet[2363]: W0515 12:43:10.437211 2363 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.214.203:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-214-203&limit=500&resourceVersion=0": dial tcp 172.234.214.203:6443: connect: connection refused May 15 12:43:10.437293 kubelet[2363]: E0515 12:43:10.437248 2363 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.214.203:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-214-203&limit=500&resourceVersion=0\": dial tcp 172.234.214.203:6443: connect: connection refused" logger="UnhandledError" May 15 12:43:10.437872 kubelet[2363]: I0515 12:43:10.437777 2363 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 12:43:10.438659 kubelet[2363]: I0515 12:43:10.438631 2363 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 12:43:10.439547 kubelet[2363]: W0515 12:43:10.439515 2363 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 12:43:10.447223 kubelet[2363]: I0515 12:43:10.446886 2363 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 12:43:10.447223 kubelet[2363]: I0515 12:43:10.446948 2363 server.go:1287] "Started kubelet" May 15 12:43:10.455015 kubelet[2363]: E0515 12:43:10.453791 2363 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.214.203:6443/api/v1/namespaces/default/events\": dial tcp 172.234.214.203:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-214-203.183fb3e83b037e39 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-214-203,UID:172-234-214-203,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-214-203,},FirstTimestamp:2025-05-15 12:43:10.446911033 +0000 UTC m=+0.308023169,LastTimestamp:2025-05-15 12:43:10.446911033 +0000 UTC m=+0.308023169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-214-203,}" May 15 12:43:10.455847 kubelet[2363]: I0515 
12:43:10.455812 2363 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 12:43:10.457215 kubelet[2363]: I0515 12:43:10.456038 2363 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 12:43:10.457215 kubelet[2363]: I0515 12:43:10.457120 2363 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 12:43:10.458523 kubelet[2363]: I0515 12:43:10.458502 2363 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 12:43:10.458711 kubelet[2363]: E0515 12:43:10.458696 2363 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-234-214-203\" not found" May 15 12:43:10.459304 kubelet[2363]: I0515 12:43:10.459290 2363 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 12:43:10.459369 kubelet[2363]: I0515 12:43:10.458520 2363 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 12:43:10.459554 kubelet[2363]: I0515 12:43:10.459493 2363 server.go:490] "Adding debug handlers to kubelet server" May 15 12:43:10.459947 kubelet[2363]: I0515 12:43:10.459929 2363 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 12:43:10.460164 kubelet[2363]: I0515 12:43:10.460148 2363 reconciler.go:26] "Reconciler: start to sync state" May 15 12:43:10.461666 kubelet[2363]: W0515 12:43:10.461627 2363 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.214.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.214.203:6443: connect: connection refused May 15 12:43:10.461704 kubelet[2363]: E0515 12:43:10.461673 2363 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.234.214.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.214.203:6443: connect: connection refused" logger="UnhandledError" May 15 12:43:10.461984 kubelet[2363]: E0515 12:43:10.461941 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.214.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-214-203?timeout=10s\": dial tcp 172.234.214.203:6443: connect: connection refused" interval="200ms" May 15 12:43:10.462146 kubelet[2363]: I0515 12:43:10.462125 2363 factory.go:221] Registration of the systemd container factory successfully May 15 12:43:10.462286 kubelet[2363]: I0515 12:43:10.462262 2363 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 12:43:10.463656 kubelet[2363]: I0515 12:43:10.463634 2363 factory.go:221] Registration of the containerd container factory successfully May 15 12:43:10.478987 kubelet[2363]: I0515 12:43:10.478946 2363 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 12:43:10.480054 kubelet[2363]: I0515 12:43:10.480031 2363 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 12:43:10.480099 kubelet[2363]: I0515 12:43:10.480073 2363 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 12:43:10.480124 kubelet[2363]: I0515 12:43:10.480104 2363 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 15 12:43:10.480124 kubelet[2363]: I0515 12:43:10.480111 2363 kubelet.go:2388] "Starting kubelet main sync loop" May 15 12:43:10.480207 kubelet[2363]: E0515 12:43:10.480160 2363 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 12:43:10.488287 kubelet[2363]: E0515 12:43:10.487438 2363 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 12:43:10.488287 kubelet[2363]: W0515 12:43:10.487568 2363 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.214.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.214.203:6443: connect: connection refused May 15 12:43:10.488287 kubelet[2363]: E0515 12:43:10.487604 2363 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.214.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.214.203:6443: connect: connection refused" logger="UnhandledError" May 15 12:43:10.493708 kubelet[2363]: I0515 12:43:10.493693 2363 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 12:43:10.493708 kubelet[2363]: I0515 12:43:10.493705 2363 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 12:43:10.493786 kubelet[2363]: I0515 12:43:10.493720 2363 state_mem.go:36] "Initialized new in-memory state store" May 15 12:43:10.495285 kubelet[2363]: I0515 12:43:10.495260 2363 policy_none.go:49] "None policy: Start" May 15 12:43:10.495340 kubelet[2363]: I0515 12:43:10.495297 2363 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 12:43:10.495340 kubelet[2363]: I0515 12:43:10.495323 2363 state_mem.go:35] "Initializing new in-memory state store" 
May 15 12:43:10.502005 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 12:43:10.517991 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 12:43:10.521911 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 12:43:10.532387 kubelet[2363]: I0515 12:43:10.532302 2363 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 12:43:10.532494 kubelet[2363]: I0515 12:43:10.532478 2363 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 12:43:10.532539 kubelet[2363]: I0515 12:43:10.532505 2363 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 12:43:10.532738 kubelet[2363]: I0515 12:43:10.532720 2363 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 12:43:10.533786 kubelet[2363]: E0515 12:43:10.533769 2363 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 12:43:10.534108 kubelet[2363]: E0515 12:43:10.534092 2363 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-214-203\" not found" May 15 12:43:10.592444 systemd[1]: Created slice kubepods-burstable-pod2afa848f6dad28d7b6db4154135614a7.slice - libcontainer container kubepods-burstable-pod2afa848f6dad28d7b6db4154135614a7.slice. 
May 15 12:43:10.606579 kubelet[2363]: E0515 12:43:10.605091 2363 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-214-203\" not found" node="172-234-214-203" May 15 12:43:10.606466 systemd[1]: Created slice kubepods-burstable-pod0b9f3de1a82ee7e817b0a0de29e85194.slice - libcontainer container kubepods-burstable-pod0b9f3de1a82ee7e817b0a0de29e85194.slice. May 15 12:43:10.615624 kubelet[2363]: E0515 12:43:10.615271 2363 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-214-203\" not found" node="172-234-214-203" May 15 12:43:10.619037 systemd[1]: Created slice kubepods-burstable-pod6bdef6b9dc027cfb419376d4a07ae9bb.slice - libcontainer container kubepods-burstable-pod6bdef6b9dc027cfb419376d4a07ae9bb.slice. May 15 12:43:10.621405 kubelet[2363]: E0515 12:43:10.621376 2363 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-214-203\" not found" node="172-234-214-203" May 15 12:43:10.635611 kubelet[2363]: I0515 12:43:10.635588 2363 kubelet_node_status.go:76] "Attempting to register node" node="172-234-214-203" May 15 12:43:10.636083 kubelet[2363]: E0515 12:43:10.636030 2363 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.234.214.203:6443/api/v1/nodes\": dial tcp 172.234.214.203:6443: connect: connection refused" node="172-234-214-203" May 15 12:43:10.661858 kubelet[2363]: I0515 12:43:10.661629 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b9f3de1a82ee7e817b0a0de29e85194-k8s-certs\") pod \"kube-controller-manager-172-234-214-203\" (UID: \"0b9f3de1a82ee7e817b0a0de29e85194\") " pod="kube-system/kube-controller-manager-172-234-214-203" May 15 12:43:10.661858 kubelet[2363]: I0515 12:43:10.661669 2363 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2afa848f6dad28d7b6db4154135614a7-k8s-certs\") pod \"kube-apiserver-172-234-214-203\" (UID: \"2afa848f6dad28d7b6db4154135614a7\") " pod="kube-system/kube-apiserver-172-234-214-203" May 15 12:43:10.661858 kubelet[2363]: I0515 12:43:10.661699 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2afa848f6dad28d7b6db4154135614a7-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-214-203\" (UID: \"2afa848f6dad28d7b6db4154135614a7\") " pod="kube-system/kube-apiserver-172-234-214-203" May 15 12:43:10.661858 kubelet[2363]: I0515 12:43:10.661757 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b9f3de1a82ee7e817b0a0de29e85194-ca-certs\") pod \"kube-controller-manager-172-234-214-203\" (UID: \"0b9f3de1a82ee7e817b0a0de29e85194\") " pod="kube-system/kube-controller-manager-172-234-214-203" May 15 12:43:10.661858 kubelet[2363]: I0515 12:43:10.661836 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b9f3de1a82ee7e817b0a0de29e85194-flexvolume-dir\") pod \"kube-controller-manager-172-234-214-203\" (UID: \"0b9f3de1a82ee7e817b0a0de29e85194\") " pod="kube-system/kube-controller-manager-172-234-214-203" May 15 12:43:10.662027 kubelet[2363]: I0515 12:43:10.661917 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6bdef6b9dc027cfb419376d4a07ae9bb-kubeconfig\") pod \"kube-scheduler-172-234-214-203\" (UID: \"6bdef6b9dc027cfb419376d4a07ae9bb\") " pod="kube-system/kube-scheduler-172-234-214-203" May 15 12:43:10.662027 kubelet[2363]: I0515 
12:43:10.661945 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2afa848f6dad28d7b6db4154135614a7-ca-certs\") pod \"kube-apiserver-172-234-214-203\" (UID: \"2afa848f6dad28d7b6db4154135614a7\") " pod="kube-system/kube-apiserver-172-234-214-203" May 15 12:43:10.662027 kubelet[2363]: I0515 12:43:10.661997 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b9f3de1a82ee7e817b0a0de29e85194-kubeconfig\") pod \"kube-controller-manager-172-234-214-203\" (UID: \"0b9f3de1a82ee7e817b0a0de29e85194\") " pod="kube-system/kube-controller-manager-172-234-214-203" May 15 12:43:10.662027 kubelet[2363]: I0515 12:43:10.662016 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b9f3de1a82ee7e817b0a0de29e85194-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-214-203\" (UID: \"0b9f3de1a82ee7e817b0a0de29e85194\") " pod="kube-system/kube-controller-manager-172-234-214-203" May 15 12:43:10.663104 kubelet[2363]: E0515 12:43:10.663057 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.214.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-214-203?timeout=10s\": dial tcp 172.234.214.203:6443: connect: connection refused" interval="400ms" May 15 12:43:10.839657 kubelet[2363]: I0515 12:43:10.839613 2363 kubelet_node_status.go:76] "Attempting to register node" node="172-234-214-203" May 15 12:43:10.839975 kubelet[2363]: E0515 12:43:10.839950 2363 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.234.214.203:6443/api/v1/nodes\": dial tcp 172.234.214.203:6443: connect: connection refused" node="172-234-214-203" May 15 12:43:10.906132 kubelet[2363]: 
E0515 12:43:10.906086 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:10.907063 containerd[1571]: time="2025-05-15T12:43:10.907008713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-214-203,Uid:2afa848f6dad28d7b6db4154135614a7,Namespace:kube-system,Attempt:0,}" May 15 12:43:10.920256 kubelet[2363]: E0515 12:43:10.917330 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:10.920620 containerd[1571]: time="2025-05-15T12:43:10.920592733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-214-203,Uid:0b9f3de1a82ee7e817b0a0de29e85194,Namespace:kube-system,Attempt:0,}" May 15 12:43:10.926965 kubelet[2363]: E0515 12:43:10.926636 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:10.927439 containerd[1571]: time="2025-05-15T12:43:10.927157803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-214-203,Uid:6bdef6b9dc027cfb419376d4a07ae9bb,Namespace:kube-system,Attempt:0,}" May 15 12:43:10.945353 containerd[1571]: time="2025-05-15T12:43:10.945318605Z" level=info msg="connecting to shim cf03fcd870b3941ba9fbe32bfd8625169175252d09e635ecbc0eeca29c225d62" address="unix:///run/containerd/s/6154a2ea0f7e4cb8a184c604fb87953054e15b399eb49bda1a845ba9197a2d6f" namespace=k8s.io protocol=ttrpc version=3 May 15 12:43:11.008853 containerd[1571]: time="2025-05-15T12:43:11.008796310Z" level=info msg="connecting to shim 50bb8b21f12e1abb7dbdc894ee1c08369efee2ddc3fab950f6aaafbae1d0613b" 
address="unix:///run/containerd/s/0b76b02e919f69a4d7de4676361a1ef1d362e1392d3bfc6ba704bcfe29fb69d6" namespace=k8s.io protocol=ttrpc version=3 May 15 12:43:11.163719 kubelet[2363]: E0515 12:43:11.163629 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.214.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-214-203?timeout=10s\": dial tcp 172.234.214.203:6443: connect: connection refused" interval="800ms" May 15 12:43:11.187474 systemd[1]: Started cri-containerd-cf03fcd870b3941ba9fbe32bfd8625169175252d09e635ecbc0eeca29c225d62.scope - libcontainer container cf03fcd870b3941ba9fbe32bfd8625169175252d09e635ecbc0eeca29c225d62. May 15 12:43:11.193024 containerd[1571]: time="2025-05-15T12:43:11.192350935Z" level=info msg="connecting to shim 0c662c4f104098f9818db9a841e991c25b4b512de64d8966d836929a5c51c867" address="unix:///run/containerd/s/ac380b792c9bf7d35bddd6e093264067d2a5be437500fa76e59f7f9ee05869da" namespace=k8s.io protocol=ttrpc version=3 May 15 12:43:11.239450 systemd[1]: Started cri-containerd-50bb8b21f12e1abb7dbdc894ee1c08369efee2ddc3fab950f6aaafbae1d0613b.scope - libcontainer container 50bb8b21f12e1abb7dbdc894ee1c08369efee2ddc3fab950f6aaafbae1d0613b. May 15 12:43:11.245371 kubelet[2363]: I0515 12:43:11.245341 2363 kubelet_node_status.go:76] "Attempting to register node" node="172-234-214-203" May 15 12:43:11.246038 kubelet[2363]: E0515 12:43:11.245936 2363 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.234.214.203:6443/api/v1/nodes\": dial tcp 172.234.214.203:6443: connect: connection refused" node="172-234-214-203" May 15 12:43:11.279318 systemd[1]: Started cri-containerd-0c662c4f104098f9818db9a841e991c25b4b512de64d8966d836929a5c51c867.scope - libcontainer container 0c662c4f104098f9818db9a841e991c25b4b512de64d8966d836929a5c51c867. May 15 12:43:11.350318 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
May 15 12:43:11.367951 containerd[1571]: time="2025-05-15T12:43:11.367922391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-214-203,Uid:2afa848f6dad28d7b6db4154135614a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf03fcd870b3941ba9fbe32bfd8625169175252d09e635ecbc0eeca29c225d62\"" May 15 12:43:11.370562 kubelet[2363]: E0515 12:43:11.370519 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:11.374035 containerd[1571]: time="2025-05-15T12:43:11.374012182Z" level=info msg="CreateContainer within sandbox \"cf03fcd870b3941ba9fbe32bfd8625169175252d09e635ecbc0eeca29c225d62\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 12:43:11.386572 containerd[1571]: time="2025-05-15T12:43:11.386509984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-214-203,Uid:6bdef6b9dc027cfb419376d4a07ae9bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"50bb8b21f12e1abb7dbdc894ee1c08369efee2ddc3fab950f6aaafbae1d0613b\"" May 15 12:43:11.387686 kubelet[2363]: W0515 12:43:11.387010 2363 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.214.203:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-214-203&limit=500&resourceVersion=0": dial tcp 172.234.214.203:6443: connect: connection refused May 15 12:43:11.387686 kubelet[2363]: E0515 12:43:11.387626 2363 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.214.203:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-214-203&limit=500&resourceVersion=0\": dial tcp 172.234.214.203:6443: connect: connection refused" logger="UnhandledError" May 15 12:43:11.388484 kubelet[2363]: E0515 12:43:11.388312 2363 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:11.391236 containerd[1571]: time="2025-05-15T12:43:11.391208887Z" level=info msg="CreateContainer within sandbox \"50bb8b21f12e1abb7dbdc894ee1c08369efee2ddc3fab950f6aaafbae1d0613b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 12:43:11.391909 containerd[1571]: time="2025-05-15T12:43:11.391866916Z" level=info msg="Container 5173c32aecb62c95fb200a27b9846ff054bf3b1eb8244e1889ec442779b1f194: CDI devices from CRI Config.CDIDevices: []" May 15 12:43:11.398856 containerd[1571]: time="2025-05-15T12:43:11.398818345Z" level=info msg="CreateContainer within sandbox \"cf03fcd870b3941ba9fbe32bfd8625169175252d09e635ecbc0eeca29c225d62\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5173c32aecb62c95fb200a27b9846ff054bf3b1eb8244e1889ec442779b1f194\"" May 15 12:43:11.399678 containerd[1571]: time="2025-05-15T12:43:11.399654594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-214-203,Uid:0b9f3de1a82ee7e817b0a0de29e85194,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c662c4f104098f9818db9a841e991c25b4b512de64d8966d836929a5c51c867\"" May 15 12:43:11.400324 containerd[1571]: time="2025-05-15T12:43:11.400092363Z" level=info msg="StartContainer for \"5173c32aecb62c95fb200a27b9846ff054bf3b1eb8244e1889ec442779b1f194\"" May 15 12:43:11.401712 containerd[1571]: time="2025-05-15T12:43:11.401685261Z" level=info msg="connecting to shim 5173c32aecb62c95fb200a27b9846ff054bf3b1eb8244e1889ec442779b1f194" address="unix:///run/containerd/s/6154a2ea0f7e4cb8a184c604fb87953054e15b399eb49bda1a845ba9197a2d6f" protocol=ttrpc version=3 May 15 12:43:11.402361 kubelet[2363]: E0515 12:43:11.402340 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:11.404747 containerd[1571]: time="2025-05-15T12:43:11.404718466Z" level=info msg="Container 3ba05e54c9ce9bb93142ef80ebfda876e22505509e9bb6973db301e3cbda59f8: CDI devices from CRI Config.CDIDevices: []" May 15 12:43:11.406103 containerd[1571]: time="2025-05-15T12:43:11.406064524Z" level=info msg="CreateContainer within sandbox \"0c662c4f104098f9818db9a841e991c25b4b512de64d8966d836929a5c51c867\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 12:43:11.420128 containerd[1571]: time="2025-05-15T12:43:11.420067353Z" level=info msg="CreateContainer within sandbox \"50bb8b21f12e1abb7dbdc894ee1c08369efee2ddc3fab950f6aaafbae1d0613b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3ba05e54c9ce9bb93142ef80ebfda876e22505509e9bb6973db301e3cbda59f8\"" May 15 12:43:11.421764 containerd[1571]: time="2025-05-15T12:43:11.421724961Z" level=info msg="StartContainer for \"3ba05e54c9ce9bb93142ef80ebfda876e22505509e9bb6973db301e3cbda59f8\"" May 15 12:43:11.425115 containerd[1571]: time="2025-05-15T12:43:11.424229197Z" level=info msg="connecting to shim 3ba05e54c9ce9bb93142ef80ebfda876e22505509e9bb6973db301e3cbda59f8" address="unix:///run/containerd/s/0b76b02e919f69a4d7de4676361a1ef1d362e1392d3bfc6ba704bcfe29fb69d6" protocol=ttrpc version=3 May 15 12:43:11.426876 containerd[1571]: time="2025-05-15T12:43:11.426821093Z" level=info msg="Container 8d57b33631ff9462fd0203de3128461d5d2bf6cefb0145f69274ac61324630e8: CDI devices from CRI Config.CDIDevices: []" May 15 12:43:11.427581 systemd[1]: Started cri-containerd-5173c32aecb62c95fb200a27b9846ff054bf3b1eb8244e1889ec442779b1f194.scope - libcontainer container 5173c32aecb62c95fb200a27b9846ff054bf3b1eb8244e1889ec442779b1f194. 
May 15 12:43:11.438253 containerd[1571]: time="2025-05-15T12:43:11.438208246Z" level=info msg="CreateContainer within sandbox \"0c662c4f104098f9818db9a841e991c25b4b512de64d8966d836929a5c51c867\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8d57b33631ff9462fd0203de3128461d5d2bf6cefb0145f69274ac61324630e8\"" May 15 12:43:11.438665 containerd[1571]: time="2025-05-15T12:43:11.438591205Z" level=info msg="StartContainer for \"8d57b33631ff9462fd0203de3128461d5d2bf6cefb0145f69274ac61324630e8\"" May 15 12:43:11.440799 containerd[1571]: time="2025-05-15T12:43:11.439987183Z" level=info msg="connecting to shim 8d57b33631ff9462fd0203de3128461d5d2bf6cefb0145f69274ac61324630e8" address="unix:///run/containerd/s/ac380b792c9bf7d35bddd6e093264067d2a5be437500fa76e59f7f9ee05869da" protocol=ttrpc version=3 May 15 12:43:11.471728 systemd[1]: Started cri-containerd-3ba05e54c9ce9bb93142ef80ebfda876e22505509e9bb6973db301e3cbda59f8.scope - libcontainer container 3ba05e54c9ce9bb93142ef80ebfda876e22505509e9bb6973db301e3cbda59f8. May 15 12:43:11.481442 systemd[1]: Started cri-containerd-8d57b33631ff9462fd0203de3128461d5d2bf6cefb0145f69274ac61324630e8.scope - libcontainer container 8d57b33631ff9462fd0203de3128461d5d2bf6cefb0145f69274ac61324630e8. 
May 15 12:43:11.532122 kubelet[2363]: W0515 12:43:11.532057 2363 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.214.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.214.203:6443: connect: connection refused May 15 12:43:11.532672 kubelet[2363]: E0515 12:43:11.532243 2363 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.214.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.214.203:6443: connect: connection refused" logger="UnhandledError" May 15 12:43:11.537042 containerd[1571]: time="2025-05-15T12:43:11.537005658Z" level=info msg="StartContainer for \"5173c32aecb62c95fb200a27b9846ff054bf3b1eb8244e1889ec442779b1f194\" returns successfully" May 15 12:43:11.613636 containerd[1571]: time="2025-05-15T12:43:11.613591673Z" level=info msg="StartContainer for \"8d57b33631ff9462fd0203de3128461d5d2bf6cefb0145f69274ac61324630e8\" returns successfully" May 15 12:43:11.623921 containerd[1571]: time="2025-05-15T12:43:11.623870988Z" level=info msg="StartContainer for \"3ba05e54c9ce9bb93142ef80ebfda876e22505509e9bb6973db301e3cbda59f8\" returns successfully" May 15 12:43:12.049842 kubelet[2363]: I0515 12:43:12.049799 2363 kubelet_node_status.go:76] "Attempting to register node" node="172-234-214-203" May 15 12:43:12.515213 kubelet[2363]: E0515 12:43:12.514845 2363 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-214-203\" not found" node="172-234-214-203" May 15 12:43:12.515213 kubelet[2363]: E0515 12:43:12.515011 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:12.516860 kubelet[2363]: E0515 
12:43:12.516581 2363 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-214-203\" not found" node="172-234-214-203" May 15 12:43:12.516860 kubelet[2363]: E0515 12:43:12.516667 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:12.518907 kubelet[2363]: E0515 12:43:12.518714 2363 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-214-203\" not found" node="172-234-214-203" May 15 12:43:12.518907 kubelet[2363]: E0515 12:43:12.518806 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:13.521247 kubelet[2363]: E0515 12:43:13.521210 2363 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-214-203\" not found" node="172-234-214-203" May 15 12:43:13.521710 kubelet[2363]: E0515 12:43:13.521356 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:13.521710 kubelet[2363]: E0515 12:43:13.521571 2363 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-214-203\" not found" node="172-234-214-203" May 15 12:43:13.521710 kubelet[2363]: E0515 12:43:13.521656 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:13.524906 kubelet[2363]: E0515 12:43:13.524878 2363 kubelet.go:3196] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"172-234-214-203\" not found" node="172-234-214-203" May 15 12:43:13.524989 kubelet[2363]: E0515 12:43:13.524966 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:13.638642 kubelet[2363]: E0515 12:43:13.638590 2363 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-214-203\" not found" node="172-234-214-203" May 15 12:43:13.705207 kubelet[2363]: I0515 12:43:13.704124 2363 kubelet_node_status.go:79] "Successfully registered node" node="172-234-214-203" May 15 12:43:13.705207 kubelet[2363]: E0515 12:43:13.704370 2363 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172-234-214-203\": node \"172-234-214-203\" not found" May 15 12:43:13.715947 kubelet[2363]: E0515 12:43:13.715910 2363 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-234-214-203\" not found" May 15 12:43:13.817688 kubelet[2363]: E0515 12:43:13.816607 2363 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-234-214-203\" not found" May 15 12:43:13.917119 kubelet[2363]: E0515 12:43:13.917058 2363 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-234-214-203\" not found" May 15 12:43:14.017496 kubelet[2363]: E0515 12:43:14.017443 2363 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-234-214-203\" not found" May 15 12:43:14.118122 kubelet[2363]: E0515 12:43:14.117996 2363 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-234-214-203\" not found" May 15 12:43:14.218961 kubelet[2363]: E0515 12:43:14.218894 2363 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-234-214-203\" not found" 
May 15 12:43:14.319655 kubelet[2363]: E0515 12:43:14.319592 2363 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-234-214-203\" not found" May 15 12:43:14.420620 kubelet[2363]: E0515 12:43:14.420498 2363 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-234-214-203\" not found" May 15 12:43:14.521163 kubelet[2363]: I0515 12:43:14.521112 2363 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-214-203" May 15 12:43:14.523411 kubelet[2363]: I0515 12:43:14.523353 2363 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-214-203" May 15 12:43:14.527867 kubelet[2363]: E0515 12:43:14.527827 2363 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-214-203\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-214-203" May 15 12:43:14.528209 kubelet[2363]: E0515 12:43:14.528088 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:14.528209 kubelet[2363]: E0515 12:43:14.528095 2363 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-214-203\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-214-203" May 15 12:43:14.528843 kubelet[2363]: E0515 12:43:14.528348 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:14.560042 kubelet[2363]: I0515 12:43:14.559588 2363 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-214-203" May 15 12:43:14.567998 kubelet[2363]: I0515 12:43:14.567829 2363 
kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-214-203" May 15 12:43:14.573765 kubelet[2363]: I0515 12:43:14.573731 2363 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-214-203" May 15 12:43:15.437211 kubelet[2363]: I0515 12:43:15.436863 2363 apiserver.go:52] "Watching apiserver" May 15 12:43:15.440978 kubelet[2363]: E0515 12:43:15.440950 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:15.459585 kubelet[2363]: I0515 12:43:15.459556 2363 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 12:43:15.523726 kubelet[2363]: E0515 12:43:15.523375 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:15.523726 kubelet[2363]: I0515 12:43:15.523594 2363 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-214-203" May 15 12:43:15.529301 kubelet[2363]: E0515 12:43:15.529273 2363 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-214-203\" already exists" pod="kube-system/kube-scheduler-172-234-214-203" May 15 12:43:15.529493 kubelet[2363]: E0515 12:43:15.529471 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:16.078735 systemd[1]: Reload requested from client PID 2634 ('systemctl') (unit session-7.scope)... May 15 12:43:16.078750 systemd[1]: Reloading... May 15 12:43:16.244960 zram_generator::config[2695]: No configuration found. 
May 15 12:43:16.318990 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:43:16.450474 systemd[1]: Reloading finished in 371 ms. May 15 12:43:16.489434 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:43:16.510754 systemd[1]: kubelet.service: Deactivated successfully. May 15 12:43:16.511075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:43:16.511136 systemd[1]: kubelet.service: Consumed 777ms CPU time, 122.7M memory peak. May 15 12:43:16.514760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:43:16.770067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:43:16.779695 (kubelet)[2728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 12:43:16.840978 kubelet[2728]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:43:16.841813 kubelet[2728]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 12:43:16.841813 kubelet[2728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 12:43:16.841813 kubelet[2728]: I0515 12:43:16.841409 2728 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 12:43:16.851686 kubelet[2728]: I0515 12:43:16.850996 2728 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 12:43:16.851686 kubelet[2728]: I0515 12:43:16.851015 2728 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 12:43:16.851889 kubelet[2728]: I0515 12:43:16.851876 2728 server.go:954] "Client rotation is on, will bootstrap in background" May 15 12:43:16.859659 kubelet[2728]: I0515 12:43:16.859641 2728 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 12:43:16.863479 kubelet[2728]: I0515 12:43:16.863463 2728 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 12:43:16.869627 kubelet[2728]: I0515 12:43:16.869602 2728 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 12:43:16.877206 kubelet[2728]: I0515 12:43:16.876936 2728 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 12:43:16.877922 kubelet[2728]: I0515 12:43:16.877800 2728 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 12:43:16.878685 kubelet[2728]: I0515 12:43:16.877826 2728 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-214-203","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 12:43:16.878970 kubelet[2728]: I0515 12:43:16.878849 2728 topology_manager.go:138] "Creating topology manager with none 
policy" May 15 12:43:16.878970 kubelet[2728]: I0515 12:43:16.878865 2728 container_manager_linux.go:304] "Creating device plugin manager" May 15 12:43:16.878970 kubelet[2728]: I0515 12:43:16.878906 2728 state_mem.go:36] "Initialized new in-memory state store" May 15 12:43:16.879231 kubelet[2728]: I0515 12:43:16.879219 2728 kubelet.go:446] "Attempting to sync node with API server" May 15 12:43:16.880210 kubelet[2728]: I0515 12:43:16.879655 2728 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 12:43:16.880210 kubelet[2728]: I0515 12:43:16.879685 2728 kubelet.go:352] "Adding apiserver pod source" May 15 12:43:16.880210 kubelet[2728]: I0515 12:43:16.879699 2728 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 12:43:16.882519 kubelet[2728]: I0515 12:43:16.881991 2728 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 12:43:16.885312 kubelet[2728]: I0515 12:43:16.885177 2728 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 12:43:16.885752 kubelet[2728]: I0515 12:43:16.885732 2728 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 12:43:16.885794 kubelet[2728]: I0515 12:43:16.885769 2728 server.go:1287] "Started kubelet" May 15 12:43:16.892613 kubelet[2728]: I0515 12:43:16.892584 2728 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 12:43:16.895585 kubelet[2728]: I0515 12:43:16.894826 2728 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 12:43:16.896767 kubelet[2728]: I0515 12:43:16.896177 2728 server.go:490] "Adding debug handlers to kubelet server" May 15 12:43:16.898667 kubelet[2728]: I0515 12:43:16.898614 2728 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 12:43:16.898857 kubelet[2728]: I0515 12:43:16.898832 2728 server.go:243] "Starting to serve the podresources 
API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 12:43:16.906409 kubelet[2728]: I0515 12:43:16.906351 2728 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 12:43:16.909051 kubelet[2728]: I0515 12:43:16.907936 2728 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 12:43:16.909051 kubelet[2728]: E0515 12:43:16.908073 2728 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-234-214-203\" not found" May 15 12:43:16.912713 kubelet[2728]: I0515 12:43:16.911147 2728 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 12:43:16.915480 kubelet[2728]: I0515 12:43:16.913590 2728 reconciler.go:26] "Reconciler: start to sync state" May 15 12:43:16.917070 kubelet[2728]: I0515 12:43:16.917047 2728 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 12:43:16.921748 kubelet[2728]: I0515 12:43:16.921637 2728 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 12:43:16.924011 kubelet[2728]: I0515 12:43:16.923981 2728 factory.go:221] Registration of the containerd container factory successfully May 15 12:43:16.924011 kubelet[2728]: I0515 12:43:16.924004 2728 factory.go:221] Registration of the systemd container factory successfully May 15 12:43:16.924349 kubelet[2728]: I0515 12:43:16.924298 2728 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 12:43:16.924396 kubelet[2728]: I0515 12:43:16.924353 2728 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 12:43:16.924396 kubelet[2728]: I0515 12:43:16.924375 2728 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 15 12:43:16.924396 kubelet[2728]: I0515 12:43:16.924383 2728 kubelet.go:2388] "Starting kubelet main sync loop" May 15 12:43:16.924475 kubelet[2728]: E0515 12:43:16.924454 2728 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 12:43:16.944304 kubelet[2728]: E0515 12:43:16.944239 2728 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 12:43:16.981481 kubelet[2728]: I0515 12:43:16.981444 2728 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 12:43:16.981481 kubelet[2728]: I0515 12:43:16.981464 2728 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 12:43:16.981481 kubelet[2728]: I0515 12:43:16.981485 2728 state_mem.go:36] "Initialized new in-memory state store" May 15 12:43:16.981721 kubelet[2728]: I0515 12:43:16.981702 2728 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 12:43:16.981749 kubelet[2728]: I0515 12:43:16.981725 2728 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 12:43:16.981749 kubelet[2728]: I0515 12:43:16.981746 2728 policy_none.go:49] "None policy: Start" May 15 12:43:16.981806 kubelet[2728]: I0515 12:43:16.981755 2728 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 12:43:16.981806 kubelet[2728]: I0515 12:43:16.981767 2728 state_mem.go:35] "Initializing new in-memory state store" May 15 12:43:16.981880 kubelet[2728]: I0515 12:43:16.981865 2728 state_mem.go:75] "Updated machine memory state" May 15 12:43:16.992265 kubelet[2728]: I0515 12:43:16.991847 2728 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 12:43:16.992265 kubelet[2728]: I0515 12:43:16.992055 2728 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 12:43:16.992265 kubelet[2728]: 
I0515 12:43:16.992067 2728 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 12:43:16.993443 kubelet[2728]: I0515 12:43:16.993428 2728 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 12:43:17.005255 kubelet[2728]: E0515 12:43:17.003538 2728 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 12:43:17.026788 kubelet[2728]: I0515 12:43:17.026043 2728 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-214-203" May 15 12:43:17.027033 kubelet[2728]: I0515 12:43:17.026086 2728 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-214-203" May 15 12:43:17.027789 kubelet[2728]: I0515 12:43:17.026217 2728 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-214-203" May 15 12:43:17.036887 kubelet[2728]: E0515 12:43:17.036863 2728 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-214-203\" already exists" pod="kube-system/kube-apiserver-172-234-214-203" May 15 12:43:17.037151 kubelet[2728]: E0515 12:43:17.037137 2728 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-214-203\" already exists" pod="kube-system/kube-scheduler-172-234-214-203" May 15 12:43:17.037359 kubelet[2728]: E0515 12:43:17.037290 2728 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-214-203\" already exists" pod="kube-system/kube-controller-manager-172-234-214-203" May 15 12:43:17.080529 sudo[2763]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 12:43:17.081075 sudo[2763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 12:43:17.113488 kubelet[2728]: I0515 12:43:17.113452 2728 
kubelet_node_status.go:76] "Attempting to register node" node="172-234-214-203" May 15 12:43:17.116816 kubelet[2728]: I0515 12:43:17.116789 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6bdef6b9dc027cfb419376d4a07ae9bb-kubeconfig\") pod \"kube-scheduler-172-234-214-203\" (UID: \"6bdef6b9dc027cfb419376d4a07ae9bb\") " pod="kube-system/kube-scheduler-172-234-214-203" May 15 12:43:17.116890 kubelet[2728]: I0515 12:43:17.116820 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2afa848f6dad28d7b6db4154135614a7-k8s-certs\") pod \"kube-apiserver-172-234-214-203\" (UID: \"2afa848f6dad28d7b6db4154135614a7\") " pod="kube-system/kube-apiserver-172-234-214-203" May 15 12:43:17.116890 kubelet[2728]: I0515 12:43:17.116841 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2afa848f6dad28d7b6db4154135614a7-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-214-203\" (UID: \"2afa848f6dad28d7b6db4154135614a7\") " pod="kube-system/kube-apiserver-172-234-214-203" May 15 12:43:17.116890 kubelet[2728]: I0515 12:43:17.116869 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b9f3de1a82ee7e817b0a0de29e85194-ca-certs\") pod \"kube-controller-manager-172-234-214-203\" (UID: \"0b9f3de1a82ee7e817b0a0de29e85194\") " pod="kube-system/kube-controller-manager-172-234-214-203" May 15 12:43:17.116890 kubelet[2728]: I0515 12:43:17.116888 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b9f3de1a82ee7e817b0a0de29e85194-kubeconfig\") pod 
\"kube-controller-manager-172-234-214-203\" (UID: \"0b9f3de1a82ee7e817b0a0de29e85194\") " pod="kube-system/kube-controller-manager-172-234-214-203" May 15 12:43:17.117038 kubelet[2728]: I0515 12:43:17.116907 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b9f3de1a82ee7e817b0a0de29e85194-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-214-203\" (UID: \"0b9f3de1a82ee7e817b0a0de29e85194\") " pod="kube-system/kube-controller-manager-172-234-214-203" May 15 12:43:17.117038 kubelet[2728]: I0515 12:43:17.116928 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2afa848f6dad28d7b6db4154135614a7-ca-certs\") pod \"kube-apiserver-172-234-214-203\" (UID: \"2afa848f6dad28d7b6db4154135614a7\") " pod="kube-system/kube-apiserver-172-234-214-203" May 15 12:43:17.117038 kubelet[2728]: I0515 12:43:17.116949 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b9f3de1a82ee7e817b0a0de29e85194-flexvolume-dir\") pod \"kube-controller-manager-172-234-214-203\" (UID: \"0b9f3de1a82ee7e817b0a0de29e85194\") " pod="kube-system/kube-controller-manager-172-234-214-203" May 15 12:43:17.117038 kubelet[2728]: I0515 12:43:17.116978 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b9f3de1a82ee7e817b0a0de29e85194-k8s-certs\") pod \"kube-controller-manager-172-234-214-203\" (UID: \"0b9f3de1a82ee7e817b0a0de29e85194\") " pod="kube-system/kube-controller-manager-172-234-214-203" May 15 12:43:17.124237 kubelet[2728]: I0515 12:43:17.124210 2728 kubelet_node_status.go:125] "Node was previously registered" node="172-234-214-203" May 15 12:43:17.124334 
kubelet[2728]: I0515 12:43:17.124315 2728 kubelet_node_status.go:79] "Successfully registered node" node="172-234-214-203" May 15 12:43:17.457658 kubelet[2728]: E0515 12:43:17.455793 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:17.457658 kubelet[2728]: E0515 12:43:17.456294 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:17.457658 kubelet[2728]: E0515 12:43:17.456406 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:17.881786 kubelet[2728]: I0515 12:43:17.881652 2728 apiserver.go:52] "Watching apiserver" May 15 12:43:17.913755 kubelet[2728]: I0515 12:43:17.913707 2728 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 12:43:17.963922 sudo[2763]: pam_unix(sudo:session): session closed for user root May 15 12:43:17.964955 kubelet[2728]: E0515 12:43:17.964920 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:17.966416 kubelet[2728]: E0515 12:43:17.966387 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:17.966654 kubelet[2728]: I0515 12:43:17.966533 2728 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-214-203" May 15 12:43:17.974286 kubelet[2728]: I0515 12:43:17.974177 2728 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-214-203" podStartSLOduration=3.974150212 podStartE2EDuration="3.974150212s" podCreationTimestamp="2025-05-15 12:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:43:17.973950628 +0000 UTC m=+1.187522085" watchObservedRunningTime="2025-05-15 12:43:17.974150212 +0000 UTC m=+1.187721669" May 15 12:43:17.975365 kubelet[2728]: E0515 12:43:17.974559 2728 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-214-203\" already exists" pod="kube-system/kube-apiserver-172-234-214-203" May 15 12:43:17.975365 kubelet[2728]: E0515 12:43:17.974672 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:18.022251 kubelet[2728]: I0515 12:43:18.002110 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-214-203" podStartSLOduration=4.002092247 podStartE2EDuration="4.002092247s" podCreationTimestamp="2025-05-15 12:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:43:17.99112226 +0000 UTC m=+1.204693717" watchObservedRunningTime="2025-05-15 12:43:18.002092247 +0000 UTC m=+1.215663704" May 15 12:43:18.022251 kubelet[2728]: I0515 12:43:18.015031 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-214-203" podStartSLOduration=4.015012189 podStartE2EDuration="4.015012189s" podCreationTimestamp="2025-05-15 12:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:43:18.002555336 +0000 UTC m=+1.216126813" 
watchObservedRunningTime="2025-05-15 12:43:18.015012189 +0000 UTC m=+1.228583646" May 15 12:43:18.967209 kubelet[2728]: E0515 12:43:18.967083 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:18.968164 kubelet[2728]: E0515 12:43:18.967943 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:20.011265 kubelet[2728]: E0515 12:43:20.011204 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:20.175999 sudo[1808]: pam_unix(sudo:session): session closed for user root May 15 12:43:20.225974 sshd[1807]: Connection closed by 139.178.89.65 port 46384 May 15 12:43:20.226806 sshd-session[1805]: pam_unix(sshd:session): session closed for user core May 15 12:43:20.233149 systemd[1]: sshd@6-172.234.214.203:22-139.178.89.65:46384.service: Deactivated successfully. May 15 12:43:20.235623 systemd[1]: session-7.scope: Deactivated successfully. May 15 12:43:20.235973 systemd[1]: session-7.scope: Consumed 7.692s CPU time, 270.5M memory peak. May 15 12:43:20.237744 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit. May 15 12:43:20.240520 systemd-logind[1545]: Removed session 7. 
May 15 12:43:21.465358 kubelet[2728]: I0515 12:43:21.465316 2728 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 12:43:21.466509 kubelet[2728]: I0515 12:43:21.465943 2728 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 12:43:21.466570 containerd[1571]: time="2025-05-15T12:43:21.465619451Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 12:43:21.516946 kubelet[2728]: E0515 12:43:21.516639 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:22.013701 kubelet[2728]: E0515 12:43:22.013651 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:22.123613 kubelet[2728]: I0515 12:43:22.123571 2728 status_manager.go:890] "Failed to get status for pod" podUID="5b801c27-9e8b-4738-8a00-153768d1aff8" pod="kube-system/kube-proxy-74v7l" err="pods \"kube-proxy-74v7l\" is forbidden: User \"system:node:172-234-214-203\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-214-203' and this object" May 15 12:43:22.124098 kubelet[2728]: W0515 12:43:22.124077 2728 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:172-234-214-203" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-234-214-203' and this object May 15 12:43:22.125269 kubelet[2728]: E0515 12:43:22.125238 2728 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed 
to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:172-234-214-203\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-214-203' and this object" logger="UnhandledError" May 15 12:43:22.125409 kubelet[2728]: W0515 12:43:22.124305 2728 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-234-214-203" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-234-214-203' and this object May 15 12:43:22.125658 kubelet[2728]: E0515 12:43:22.125386 2728 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-234-214-203\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-214-203' and this object" logger="UnhandledError" May 15 12:43:22.127276 systemd[1]: Created slice kubepods-besteffort-pod5b801c27_9e8b_4738_8a00_153768d1aff8.slice - libcontainer container kubepods-besteffort-pod5b801c27_9e8b_4738_8a00_153768d1aff8.slice. May 15 12:43:22.161827 systemd[1]: Created slice kubepods-burstable-pode6afcb57_1d8c_489b_9677_6ae0c469ccfa.slice - libcontainer container kubepods-burstable-pode6afcb57_1d8c_489b_9677_6ae0c469ccfa.slice. 
May 15 12:43:22.218649 kubelet[2728]: I0515 12:43:22.218599 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b801c27-9e8b-4738-8a00-153768d1aff8-kube-proxy\") pod \"kube-proxy-74v7l\" (UID: \"5b801c27-9e8b-4738-8a00-153768d1aff8\") " pod="kube-system/kube-proxy-74v7l" May 15 12:43:22.218649 kubelet[2728]: I0515 12:43:22.218643 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b801c27-9e8b-4738-8a00-153768d1aff8-xtables-lock\") pod \"kube-proxy-74v7l\" (UID: \"5b801c27-9e8b-4738-8a00-153768d1aff8\") " pod="kube-system/kube-proxy-74v7l" May 15 12:43:22.218649 kubelet[2728]: I0515 12:43:22.218658 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b801c27-9e8b-4738-8a00-153768d1aff8-lib-modules\") pod \"kube-proxy-74v7l\" (UID: \"5b801c27-9e8b-4738-8a00-153768d1aff8\") " pod="kube-system/kube-proxy-74v7l" May 15 12:43:22.218649 kubelet[2728]: I0515 12:43:22.218673 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db6rw\" (UniqueName: \"kubernetes.io/projected/5b801c27-9e8b-4738-8a00-153768d1aff8-kube-api-access-db6rw\") pod \"kube-proxy-74v7l\" (UID: \"5b801c27-9e8b-4738-8a00-153768d1aff8\") " pod="kube-system/kube-proxy-74v7l" May 15 12:43:22.319216 kubelet[2728]: I0515 12:43:22.319076 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cilium-config-path\") pod \"cilium-vjh87\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.319216 kubelet[2728]: I0515 12:43:22.319132 2728 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-lib-modules\") pod \"cilium-vjh87\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.319216 kubelet[2728]: I0515 12:43:22.319149 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-clustermesh-secrets\") pod \"cilium-vjh87\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.319216 kubelet[2728]: I0515 12:43:22.319166 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-host-proc-sys-net\") pod \"cilium-vjh87\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.319216 kubelet[2728]: I0515 12:43:22.319205 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cilium-cgroup\") pod \"cilium-vjh87\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.319216 kubelet[2728]: I0515 12:43:22.319222 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cni-path\") pod \"cilium-vjh87\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.319460 kubelet[2728]: I0515 12:43:22.319255 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-hostproc\") pod \"cilium-vjh87\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.319460 kubelet[2728]: I0515 12:43:22.319271 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-xtables-lock\") pod \"cilium-vjh87\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.319460 kubelet[2728]: I0515 12:43:22.319296 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-etc-cni-netd\") pod \"cilium-vjh87\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.319460 kubelet[2728]: I0515 12:43:22.319313 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lfff\" (UniqueName: \"kubernetes.io/projected/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-kube-api-access-7lfff\") pod \"cilium-vjh87\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.319460 kubelet[2728]: I0515 12:43:22.319328 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cilium-run\") pod \"cilium-vjh87\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.319460 kubelet[2728]: I0515 12:43:22.319346 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-bpf-maps\") pod \"cilium-vjh87\" (UID: 
\"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.319650 kubelet[2728]: I0515 12:43:22.319361 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-host-proc-sys-kernel\") pod \"cilium-vjh87\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.319650 kubelet[2728]: I0515 12:43:22.319380 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-hubble-tls\") pod \"cilium-vjh87\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " pod="kube-system/cilium-vjh87" May 15 12:43:22.554486 kubelet[2728]: I0515 12:43:22.554407 2728 status_manager.go:890] "Failed to get status for pod" podUID="a0946246-280f-4b9d-b6f6-6ca9a197ea84" pod="kube-system/cilium-operator-6c4d7847fc-54gxr" err="pods \"cilium-operator-6c4d7847fc-54gxr\" is forbidden: User \"system:node:172-234-214-203\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-214-203' and this object" May 15 12:43:22.562316 systemd[1]: Created slice kubepods-besteffort-poda0946246_280f_4b9d_b6f6_6ca9a197ea84.slice - libcontainer container kubepods-besteffort-poda0946246_280f_4b9d_b6f6_6ca9a197ea84.slice. 
May 15 12:43:22.722011 kubelet[2728]: I0515 12:43:22.721874 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a0946246-280f-4b9d-b6f6-6ca9a197ea84-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-54gxr\" (UID: \"a0946246-280f-4b9d-b6f6-6ca9a197ea84\") " pod="kube-system/cilium-operator-6c4d7847fc-54gxr" May 15 12:43:22.722011 kubelet[2728]: I0515 12:43:22.721924 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9dmp\" (UniqueName: \"kubernetes.io/projected/a0946246-280f-4b9d-b6f6-6ca9a197ea84-kube-api-access-q9dmp\") pod \"cilium-operator-6c4d7847fc-54gxr\" (UID: \"a0946246-280f-4b9d-b6f6-6ca9a197ea84\") " pod="kube-system/cilium-operator-6c4d7847fc-54gxr" May 15 12:43:23.334014 kubelet[2728]: E0515 12:43:23.333975 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:23.334614 containerd[1571]: time="2025-05-15T12:43:23.334578627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-74v7l,Uid:5b801c27-9e8b-4738-8a00-153768d1aff8,Namespace:kube-system,Attempt:0,}" May 15 12:43:23.360235 containerd[1571]: time="2025-05-15T12:43:23.358898493Z" level=info msg="connecting to shim 73f2344495c9c8990aaf4083d2e9a30c5ab3c290d4ad4525839dc27a48257a64" address="unix:///run/containerd/s/eca64f2b4ee2e7a7159261cbf4ff2a0fe86f08f7bbb18c6999ade2a564a54a39" namespace=k8s.io protocol=ttrpc version=3 May 15 12:43:23.367339 kubelet[2728]: E0515 12:43:23.367247 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:23.368845 containerd[1571]: time="2025-05-15T12:43:23.368802964Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vjh87,Uid:e6afcb57-1d8c-489b-9677-6ae0c469ccfa,Namespace:kube-system,Attempt:0,}" May 15 12:43:23.400976 containerd[1571]: time="2025-05-15T12:43:23.400934942Z" level=info msg="connecting to shim 069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939" address="unix:///run/containerd/s/a9b8bcf5b2d751adf4d6bc4f8f8f20153d6ee51ad7ff18a8d4cd63d27089e3d9" namespace=k8s.io protocol=ttrpc version=3 May 15 12:43:23.414335 systemd[1]: Started cri-containerd-73f2344495c9c8990aaf4083d2e9a30c5ab3c290d4ad4525839dc27a48257a64.scope - libcontainer container 73f2344495c9c8990aaf4083d2e9a30c5ab3c290d4ad4525839dc27a48257a64. May 15 12:43:23.624691 kubelet[2728]: E0515 12:43:23.619613 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:23.625990 containerd[1571]: time="2025-05-15T12:43:23.625961408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-54gxr,Uid:a0946246-280f-4b9d-b6f6-6ca9a197ea84,Namespace:kube-system,Attempt:0,}" May 15 12:43:23.674745 containerd[1571]: time="2025-05-15T12:43:23.674713092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-74v7l,Uid:5b801c27-9e8b-4738-8a00-153768d1aff8,Namespace:kube-system,Attempt:0,} returns sandbox id \"73f2344495c9c8990aaf4083d2e9a30c5ab3c290d4ad4525839dc27a48257a64\"" May 15 12:43:23.676374 kubelet[2728]: E0515 12:43:23.675794 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:23.679658 containerd[1571]: time="2025-05-15T12:43:23.679636013Z" level=info msg="CreateContainer within sandbox \"73f2344495c9c8990aaf4083d2e9a30c5ab3c290d4ad4525839dc27a48257a64\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 12:43:23.696879 containerd[1571]: time="2025-05-15T12:43:23.696842628Z" level=info msg="Container 00d561a562333d7ea999e798d3b3840b1176a2ef669a2304d0acb866c1ded5a4: CDI devices from CRI Config.CDIDevices: []" May 15 12:43:23.699059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1240066539.mount: Deactivated successfully. May 15 12:43:23.702945 containerd[1571]: time="2025-05-15T12:43:23.702897854Z" level=info msg="connecting to shim 905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9" address="unix:///run/containerd/s/06b680b413d6461bdd9da6c242efc5cc8b4824c37884783c35fad058d04f32bb" namespace=k8s.io protocol=ttrpc version=3 May 15 12:43:23.709697 systemd[1]: Started cri-containerd-069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939.scope - libcontainer container 069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939. May 15 12:43:23.718028 containerd[1571]: time="2025-05-15T12:43:23.717984359Z" level=info msg="CreateContainer within sandbox \"73f2344495c9c8990aaf4083d2e9a30c5ab3c290d4ad4525839dc27a48257a64\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00d561a562333d7ea999e798d3b3840b1176a2ef669a2304d0acb866c1ded5a4\"" May 15 12:43:23.719053 containerd[1571]: time="2025-05-15T12:43:23.718942313Z" level=info msg="StartContainer for \"00d561a562333d7ea999e798d3b3840b1176a2ef669a2304d0acb866c1ded5a4\"" May 15 12:43:23.723349 containerd[1571]: time="2025-05-15T12:43:23.723325155Z" level=info msg="connecting to shim 00d561a562333d7ea999e798d3b3840b1176a2ef669a2304d0acb866c1ded5a4" address="unix:///run/containerd/s/eca64f2b4ee2e7a7159261cbf4ff2a0fe86f08f7bbb18c6999ade2a564a54a39" protocol=ttrpc version=3 May 15 12:43:23.760348 systemd[1]: Started cri-containerd-905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9.scope - libcontainer container 905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9. 
May 15 12:43:23.794578 systemd[1]: Started cri-containerd-00d561a562333d7ea999e798d3b3840b1176a2ef669a2304d0acb866c1ded5a4.scope - libcontainer container 00d561a562333d7ea999e798d3b3840b1176a2ef669a2304d0acb866c1ded5a4. May 15 12:43:23.836237 containerd[1571]: time="2025-05-15T12:43:23.810523097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vjh87,Uid:e6afcb57-1d8c-489b-9677-6ae0c469ccfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\"" May 15 12:43:23.837383 kubelet[2728]: E0515 12:43:23.837364 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:23.842140 containerd[1571]: time="2025-05-15T12:43:23.842102547Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 12:43:23.882985 containerd[1571]: time="2025-05-15T12:43:23.882874878Z" level=info msg="StartContainer for \"00d561a562333d7ea999e798d3b3840b1176a2ef669a2304d0acb866c1ded5a4\" returns successfully" May 15 12:43:23.917013 containerd[1571]: time="2025-05-15T12:43:23.916761681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-54gxr,Uid:a0946246-280f-4b9d-b6f6-6ca9a197ea84,Namespace:kube-system,Attempt:0,} returns sandbox id \"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\"" May 15 12:43:23.918018 kubelet[2728]: E0515 12:43:23.917987 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:24.027098 kubelet[2728]: E0515 12:43:24.026697 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:24.051224 kubelet[2728]: I0515 12:43:24.051132 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-74v7l" podStartSLOduration=2.051114125 podStartE2EDuration="2.051114125s" podCreationTimestamp="2025-05-15 12:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:43:24.048956787 +0000 UTC m=+7.262528244" watchObservedRunningTime="2025-05-15 12:43:24.051114125 +0000 UTC m=+7.264685582" May 15 12:43:25.174994 update_engine[1548]: I20250515 12:43:25.170249 1548 update_attempter.cc:509] Updating boot flags... May 15 12:43:27.961227 kubelet[2728]: E0515 12:43:27.952852 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:28.187800 kubelet[2728]: E0515 12:43:28.186359 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:29.290642 kubelet[2728]: E0515 12:43:29.290606 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:30.004355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403099401.mount: Deactivated successfully. 
May 15 12:43:33.248060 containerd[1571]: time="2025-05-15T12:43:33.248019700Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:43:33.249015 containerd[1571]: time="2025-05-15T12:43:33.248992137Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 15 12:43:33.249570 containerd[1571]: time="2025-05-15T12:43:33.249513501Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:43:33.250852 containerd[1571]: time="2025-05-15T12:43:33.250830310Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.408436788s"
May 15 12:43:33.251011 containerd[1571]: time="2025-05-15T12:43:33.250920230Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 15 12:43:33.252329 containerd[1571]: time="2025-05-15T12:43:33.252309820Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 15 12:43:33.254311 containerd[1571]: time="2025-05-15T12:43:33.254283983Z" level=info msg="CreateContainer within sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 12:43:33.262295 containerd[1571]: time="2025-05-15T12:43:33.261795813Z" level=info msg="Container fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884: CDI devices from CRI Config.CDIDevices: []"
May 15 12:43:33.276771 containerd[1571]: time="2025-05-15T12:43:33.276748385Z" level=info msg="CreateContainer within sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\""
May 15 12:43:33.277491 containerd[1571]: time="2025-05-15T12:43:33.277444409Z" level=info msg="StartContainer for \"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\""
May 15 12:43:33.278630 containerd[1571]: time="2025-05-15T12:43:33.278612307Z" level=info msg="connecting to shim fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884" address="unix:///run/containerd/s/a9b8bcf5b2d751adf4d6bc4f8f8f20153d6ee51ad7ff18a8d4cd63d27089e3d9" protocol=ttrpc version=3
May 15 12:43:33.320316 systemd[1]: Started cri-containerd-fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884.scope - libcontainer container fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884.
May 15 12:43:33.363398 containerd[1571]: time="2025-05-15T12:43:33.363362510Z" level=info msg="StartContainer for \"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\" returns successfully"
May 15 12:43:33.386262 systemd[1]: cri-containerd-fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884.scope: Deactivated successfully.
May 15 12:43:33.389913 containerd[1571]: time="2025-05-15T12:43:33.389870379Z" level=info msg="received exit event container_id:\"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\" id:\"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\" pid:3162 exited_at:{seconds:1747313013 nanos:389116244}"
May 15 12:43:33.390152 containerd[1571]: time="2025-05-15T12:43:33.390095561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\" id:\"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\" pid:3162 exited_at:{seconds:1747313013 nanos:389116244}"
May 15 12:43:33.418439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884-rootfs.mount: Deactivated successfully.
May 15 12:43:33.556051 kubelet[2728]: E0515 12:43:33.555702    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:33.562326 containerd[1571]: time="2025-05-15T12:43:33.562092703Z" level=info msg="CreateContainer within sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 12:43:33.571823 containerd[1571]: time="2025-05-15T12:43:33.571595008Z" level=info msg="Container ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150: CDI devices from CRI Config.CDIDevices: []"
May 15 12:43:33.576355 containerd[1571]: time="2025-05-15T12:43:33.576064267Z" level=info msg="CreateContainer within sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\""
May 15 12:43:33.577396 containerd[1571]: time="2025-05-15T12:43:33.577361136Z" level=info msg="StartContainer for \"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\""
May 15 12:43:33.580025 containerd[1571]: time="2025-05-15T12:43:33.579954404Z" level=info msg="connecting to shim ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150" address="unix:///run/containerd/s/a9b8bcf5b2d751adf4d6bc4f8f8f20153d6ee51ad7ff18a8d4cd63d27089e3d9" protocol=ttrpc version=3
May 15 12:43:33.605328 systemd[1]: Started cri-containerd-ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150.scope - libcontainer container ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150.
May 15 12:43:33.642545 containerd[1571]: time="2025-05-15T12:43:33.642418256Z" level=info msg="StartContainer for \"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\" returns successfully"
May 15 12:43:33.661624 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 12:43:33.661938 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 12:43:33.662309 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 15 12:43:33.666589 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 12:43:33.670309 systemd[1]: cri-containerd-ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150.scope: Deactivated successfully.
May 15 12:43:33.672963 containerd[1571]: time="2025-05-15T12:43:33.672926092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\" id:\"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\" pid:3206 exited_at:{seconds:1747313013 nanos:672297338}"
May 15 12:43:33.673341 containerd[1571]: time="2025-05-15T12:43:33.673051923Z" level=info msg="received exit event container_id:\"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\" id:\"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\" pid:3206 exited_at:{seconds:1747313013 nanos:672297338}"
May 15 12:43:33.700335 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 12:43:34.288921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3750209973.mount: Deactivated successfully.
May 15 12:43:34.703275 kubelet[2728]: E0515 12:43:34.702855    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:34.707305 containerd[1571]: time="2025-05-15T12:43:34.707212377Z" level=info msg="CreateContainer within sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 12:43:34.726824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869433549.mount: Deactivated successfully.
May 15 12:43:34.729342 containerd[1571]: time="2025-05-15T12:43:34.729303105Z" level=info msg="Container ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009: CDI devices from CRI Config.CDIDevices: []"
May 15 12:43:34.738984 containerd[1571]: time="2025-05-15T12:43:34.738939985Z" level=info msg="CreateContainer within sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\""
May 15 12:43:34.740213 containerd[1571]: time="2025-05-15T12:43:34.740138753Z" level=info msg="StartContainer for \"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\""
May 15 12:43:34.743902 containerd[1571]: time="2025-05-15T12:43:34.743879486Z" level=info msg="connecting to shim ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009" address="unix:///run/containerd/s/a9b8bcf5b2d751adf4d6bc4f8f8f20153d6ee51ad7ff18a8d4cd63d27089e3d9" protocol=ttrpc version=3
May 15 12:43:34.810480 systemd[1]: Started cri-containerd-ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009.scope - libcontainer container ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009.
May 15 12:43:34.887760 systemd[1]: cri-containerd-ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009.scope: Deactivated successfully.
May 15 12:43:34.894810 containerd[1571]: time="2025-05-15T12:43:34.894770149Z" level=info msg="received exit event container_id:\"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\" id:\"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\" pid:3265 exited_at:{seconds:1747313014 nanos:892621335}"
May 15 12:43:34.895357 containerd[1571]: time="2025-05-15T12:43:34.894781309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\" id:\"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\" pid:3265 exited_at:{seconds:1747313014 nanos:892621335}"
May 15 12:43:34.895357 containerd[1571]: time="2025-05-15T12:43:34.894859029Z" level=info msg="StartContainer for \"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\" returns successfully"
May 15 12:43:35.533770 containerd[1571]: time="2025-05-15T12:43:35.533721089Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:43:35.534314 containerd[1571]: time="2025-05-15T12:43:35.534287192Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 15 12:43:35.534711 containerd[1571]: time="2025-05-15T12:43:35.534684335Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:43:35.535778 containerd[1571]: time="2025-05-15T12:43:35.535747671Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.283331391s"
May 15 12:43:35.535778 containerd[1571]: time="2025-05-15T12:43:35.535777701Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 15 12:43:35.539752 containerd[1571]: time="2025-05-15T12:43:35.539702124Z" level=info msg="CreateContainer within sandbox \"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 15 12:43:35.546887 containerd[1571]: time="2025-05-15T12:43:35.546441143Z" level=info msg="Container c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50: CDI devices from CRI Config.CDIDevices: []"
May 15 12:43:35.556942 containerd[1571]: time="2025-05-15T12:43:35.556899982Z" level=info msg="CreateContainer within sandbox \"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\""
May 15 12:43:35.557523 containerd[1571]: time="2025-05-15T12:43:35.557485937Z" level=info msg="StartContainer for \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\""
May 15 12:43:35.557523 containerd[1571]: time="2025-05-15T12:43:35.558460802Z" level=info msg="connecting to shim c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50" address="unix:///run/containerd/s/06b680b413d6461bdd9da6c242efc5cc8b4824c37884783c35fad058d04f32bb" protocol=ttrpc version=3
May 15 12:43:35.601588 systemd[1]: Started cri-containerd-c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50.scope - libcontainer container c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50.
May 15 12:43:35.638527 containerd[1571]: time="2025-05-15T12:43:35.638483833Z" level=info msg="StartContainer for \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\" returns successfully"
May 15 12:43:35.708694 kubelet[2728]: E0515 12:43:35.708650    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:35.718015 kubelet[2728]: E0515 12:43:35.717985    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:35.722382 containerd[1571]: time="2025-05-15T12:43:35.722325415Z" level=info msg="CreateContainer within sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 12:43:35.741946 containerd[1571]: time="2025-05-15T12:43:35.741399116Z" level=info msg="Container e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07: CDI devices from CRI Config.CDIDevices: []"
May 15 12:43:35.748854 containerd[1571]: time="2025-05-15T12:43:35.748826708Z" level=info msg="CreateContainer within sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\""
May 15 12:43:35.750954 containerd[1571]: time="2025-05-15T12:43:35.750910290Z" level=info msg="StartContainer for \"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\""
May 15 12:43:35.752637 containerd[1571]: time="2025-05-15T12:43:35.752596170Z" level=info msg="connecting to shim e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07" address="unix:///run/containerd/s/a9b8bcf5b2d751adf4d6bc4f8f8f20153d6ee51ad7ff18a8d4cd63d27089e3d9" protocol=ttrpc version=3
May 15 12:43:35.762374 kubelet[2728]: I0515 12:43:35.760600    2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-54gxr" podStartSLOduration=2.141940735 podStartE2EDuration="13.759158768s" podCreationTimestamp="2025-05-15 12:43:22 +0000 UTC" firstStartedPulling="2025-05-15 12:43:23.920022127 +0000 UTC m=+7.133593584" lastFinishedPulling="2025-05-15 12:43:35.53724016 +0000 UTC m=+18.750811617" observedRunningTime="2025-05-15 12:43:35.726412819 +0000 UTC m=+18.939984276" watchObservedRunningTime="2025-05-15 12:43:35.759158768 +0000 UTC m=+18.972730225"
May 15 12:43:35.793331 systemd[1]: Started cri-containerd-e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07.scope - libcontainer container e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07.
May 15 12:43:35.975018 containerd[1571]: time="2025-05-15T12:43:35.974974220Z" level=info msg="StartContainer for \"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\" returns successfully"
May 15 12:43:35.979221 systemd[1]: cri-containerd-e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07.scope: Deactivated successfully.
May 15 12:43:35.980501 containerd[1571]: time="2025-05-15T12:43:35.980473242Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\" id:\"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\" pid:3341 exited_at:{seconds:1747313015 nanos:979321496}"
May 15 12:43:35.980666 containerd[1571]: time="2025-05-15T12:43:35.980640963Z" level=info msg="received exit event container_id:\"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\" id:\"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\" pid:3341 exited_at:{seconds:1747313015 nanos:979321496}"
May 15 12:43:36.724895 kubelet[2728]: E0515 12:43:36.723610    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:36.725685 kubelet[2728]: E0515 12:43:36.725669    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:36.728277 containerd[1571]: time="2025-05-15T12:43:36.727987137Z" level=info msg="CreateContainer within sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 12:43:36.742235 containerd[1571]: time="2025-05-15T12:43:36.742201832Z" level=info msg="Container 07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20: CDI devices from CRI Config.CDIDevices: []"
May 15 12:43:36.747105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2847692935.mount: Deactivated successfully.
May 15 12:43:36.754609 containerd[1571]: time="2025-05-15T12:43:36.754568818Z" level=info msg="CreateContainer within sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\""
May 15 12:43:36.755201 containerd[1571]: time="2025-05-15T12:43:36.755166611Z" level=info msg="StartContainer for \"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\""
May 15 12:43:36.757740 containerd[1571]: time="2025-05-15T12:43:36.757720405Z" level=info msg="connecting to shim 07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20" address="unix:///run/containerd/s/a9b8bcf5b2d751adf4d6bc4f8f8f20153d6ee51ad7ff18a8d4cd63d27089e3d9" protocol=ttrpc version=3
May 15 12:43:36.829317 systemd[1]: Started cri-containerd-07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20.scope - libcontainer container 07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20.
May 15 12:43:36.948938 containerd[1571]: time="2025-05-15T12:43:36.948886309Z" level=info msg="StartContainer for \"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\" returns successfully"
May 15 12:43:37.282081 containerd[1571]: time="2025-05-15T12:43:37.282046202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\" id:\"0de4d8b8749d47090fb19b0672a3fa02d1714552d895783c286f183564b1873a\" pid:3407 exited_at:{seconds:1747313017 nanos:281773901}"
May 15 12:43:37.325095 kubelet[2728]: I0515 12:43:37.325040    2728 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
May 15 12:43:37.359799 systemd[1]: Created slice kubepods-burstable-pode6e8b911_e081_4c5e_9a8c_74062517b218.slice - libcontainer container kubepods-burstable-pode6e8b911_e081_4c5e_9a8c_74062517b218.slice.
May 15 12:43:37.374312 systemd[1]: Created slice kubepods-burstable-poda24cdeac_4dcc_4115_9fd5_6b5a62363f5b.slice - libcontainer container kubepods-burstable-poda24cdeac_4dcc_4115_9fd5_6b5a62363f5b.slice.
May 15 12:43:37.516627 kubelet[2728]: I0515 12:43:37.516579    2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv5qj\" (UniqueName: \"kubernetes.io/projected/e6e8b911-e081-4c5e-9a8c-74062517b218-kube-api-access-nv5qj\") pod \"coredns-668d6bf9bc-rtwk2\" (UID: \"e6e8b911-e081-4c5e-9a8c-74062517b218\") " pod="kube-system/coredns-668d6bf9bc-rtwk2"
May 15 12:43:37.516752 kubelet[2728]: I0515 12:43:37.516671    2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24cdeac-4dcc-4115-9fd5-6b5a62363f5b-config-volume\") pod \"coredns-668d6bf9bc-c84pp\" (UID: \"a24cdeac-4dcc-4115-9fd5-6b5a62363f5b\") " pod="kube-system/coredns-668d6bf9bc-c84pp"
May 15 12:43:37.516752 kubelet[2728]: I0515 12:43:37.516696    2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j6lp\" (UniqueName: \"kubernetes.io/projected/a24cdeac-4dcc-4115-9fd5-6b5a62363f5b-kube-api-access-6j6lp\") pod \"coredns-668d6bf9bc-c84pp\" (UID: \"a24cdeac-4dcc-4115-9fd5-6b5a62363f5b\") " pod="kube-system/coredns-668d6bf9bc-c84pp"
May 15 12:43:37.516752 kubelet[2728]: I0515 12:43:37.516725    2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6e8b911-e081-4c5e-9a8c-74062517b218-config-volume\") pod \"coredns-668d6bf9bc-rtwk2\" (UID: \"e6e8b911-e081-4c5e-9a8c-74062517b218\") " pod="kube-system/coredns-668d6bf9bc-rtwk2"
May 15 12:43:37.672586 kubelet[2728]: E0515 12:43:37.671941    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:37.674421 containerd[1571]: time="2025-05-15T12:43:37.674390477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rtwk2,Uid:e6e8b911-e081-4c5e-9a8c-74062517b218,Namespace:kube-system,Attempt:0,}"
May 15 12:43:37.680089 kubelet[2728]: E0515 12:43:37.679428    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:37.680172 containerd[1571]: time="2025-05-15T12:43:37.679897154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c84pp,Uid:a24cdeac-4dcc-4115-9fd5-6b5a62363f5b,Namespace:kube-system,Attempt:0,}"
May 15 12:43:37.734460 kubelet[2728]: E0515 12:43:37.734428    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:38.738114 kubelet[2728]: E0515 12:43:38.738022    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:39.740816 kubelet[2728]: E0515 12:43:39.740534    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:39.785939 systemd-networkd[1466]: cilium_host: Link UP
May 15 12:43:39.786413 systemd-networkd[1466]: cilium_net: Link UP
May 15 12:43:39.786608 systemd-networkd[1466]: cilium_net: Gained carrier
May 15 12:43:39.786785 systemd-networkd[1466]: cilium_host: Gained carrier
May 15 12:43:39.818333 systemd-networkd[1466]: cilium_net: Gained IPv6LL
May 15 12:43:39.927288 systemd-networkd[1466]: cilium_vxlan: Link UP
May 15 12:43:39.927298 systemd-networkd[1466]: cilium_vxlan: Gained carrier
May 15 12:43:40.369237 kernel: NET: Registered PF_ALG protocol family
May 15 12:43:40.683770 systemd-networkd[1466]: cilium_host: Gained IPv6LL
May 15 12:43:41.093764 systemd-networkd[1466]: lxc_health: Link UP
May 15 12:43:41.104094 systemd-networkd[1466]: lxc_health: Gained carrier
May 15 12:43:41.269705 systemd-networkd[1466]: lxca9be8af68d0d: Link UP
May 15 12:43:41.277211 kernel: eth0: renamed from tmp5f96d
May 15 12:43:41.279563 systemd-networkd[1466]: lxca9be8af68d0d: Gained carrier
May 15 12:43:41.362302 systemd-networkd[1466]: lxc1eed120e618c: Link UP
May 15 12:43:41.366217 kernel: eth0: renamed from tmpa1089
May 15 12:43:41.369655 systemd-networkd[1466]: lxc1eed120e618c: Gained carrier
May 15 12:43:41.374753 kubelet[2728]: E0515 12:43:41.374730    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:41.386300 systemd-networkd[1466]: cilium_vxlan: Gained IPv6LL
May 15 12:43:41.416508 kubelet[2728]: I0515 12:43:41.416446    2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vjh87" podStartSLOduration=10.003148819 podStartE2EDuration="19.416426207s" podCreationTimestamp="2025-05-15 12:43:22 +0000 UTC" firstStartedPulling="2025-05-15 12:43:23.838705569 +0000 UTC m=+7.052277026" lastFinishedPulling="2025-05-15 12:43:33.251982957 +0000 UTC m=+16.465554414" observedRunningTime="2025-05-15 12:43:37.753875055 +0000 UTC m=+20.967446522" watchObservedRunningTime="2025-05-15 12:43:41.416426207 +0000 UTC m=+24.629997664"
May 15 12:43:41.745358 kubelet[2728]: E0515 12:43:41.745256    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:42.280419 systemd-networkd[1466]: lxc_health: Gained IPv6LL
May 15 12:43:42.747705 kubelet[2728]: E0515 12:43:42.747130    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:42.920342 systemd-networkd[1466]: lxca9be8af68d0d: Gained IPv6LL
May 15 12:43:43.049410 systemd-networkd[1466]: lxc1eed120e618c: Gained IPv6LL
May 15 12:43:45.186752 containerd[1571]: time="2025-05-15T12:43:45.186684323Z" level=info msg="connecting to shim 5f96d4f18f638438558e0bd82f637a47e9754983e4705847f5deb8bd398065b7" address="unix:///run/containerd/s/c0fb93e3c272f62bc6ff8cf613b30777681c93ea936ad6938599f4e99a523c6f" namespace=k8s.io protocol=ttrpc version=3
May 15 12:43:45.208107 containerd[1571]: time="2025-05-15T12:43:45.208048713Z" level=info msg="connecting to shim a1089fd032fa0385312c35fe313a75c6b7038c72716b35612fc4f2921c8f98e9" address="unix:///run/containerd/s/f7608023f63391466ebfcde90bb474b72f5802370df9613643ae7ef7a717d397" namespace=k8s.io protocol=ttrpc version=3
May 15 12:43:45.239329 systemd[1]: Started cri-containerd-5f96d4f18f638438558e0bd82f637a47e9754983e4705847f5deb8bd398065b7.scope - libcontainer container 5f96d4f18f638438558e0bd82f637a47e9754983e4705847f5deb8bd398065b7.
May 15 12:43:45.252453 systemd[1]: Started cri-containerd-a1089fd032fa0385312c35fe313a75c6b7038c72716b35612fc4f2921c8f98e9.scope - libcontainer container a1089fd032fa0385312c35fe313a75c6b7038c72716b35612fc4f2921c8f98e9.
May 15 12:43:45.337037 containerd[1571]: time="2025-05-15T12:43:45.336017148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c84pp,Uid:a24cdeac-4dcc-4115-9fd5-6b5a62363f5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f96d4f18f638438558e0bd82f637a47e9754983e4705847f5deb8bd398065b7\""
May 15 12:43:45.339076 kubelet[2728]: E0515 12:43:45.339053    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:45.342178 containerd[1571]: time="2025-05-15T12:43:45.342122211Z" level=info msg="CreateContainer within sandbox \"5f96d4f18f638438558e0bd82f637a47e9754983e4705847f5deb8bd398065b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 12:43:45.352417 containerd[1571]: time="2025-05-15T12:43:45.352364785Z" level=info msg="Container 93922f32c8722685798f1d940a2ebc10d01371531ac7c7e5a566bec5ac1890c2: CDI devices from CRI Config.CDIDevices: []"
May 15 12:43:45.358866 containerd[1571]: time="2025-05-15T12:43:45.358838620Z" level=info msg="CreateContainer within sandbox \"5f96d4f18f638438558e0bd82f637a47e9754983e4705847f5deb8bd398065b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93922f32c8722685798f1d940a2ebc10d01371531ac7c7e5a566bec5ac1890c2\""
May 15 12:43:45.359842 containerd[1571]: time="2025-05-15T12:43:45.359823822Z" level=info msg="StartContainer for \"93922f32c8722685798f1d940a2ebc10d01371531ac7c7e5a566bec5ac1890c2\""
May 15 12:43:45.361062 containerd[1571]: time="2025-05-15T12:43:45.361018325Z" level=info msg="connecting to shim 93922f32c8722685798f1d940a2ebc10d01371531ac7c7e5a566bec5ac1890c2" address="unix:///run/containerd/s/c0fb93e3c272f62bc6ff8cf613b30777681c93ea936ad6938599f4e99a523c6f" protocol=ttrpc version=3
May 15 12:43:45.396317 systemd[1]: Started cri-containerd-93922f32c8722685798f1d940a2ebc10d01371531ac7c7e5a566bec5ac1890c2.scope - libcontainer container 93922f32c8722685798f1d940a2ebc10d01371531ac7c7e5a566bec5ac1890c2.
May 15 12:43:45.400822 containerd[1571]: time="2025-05-15T12:43:45.400314275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rtwk2,Uid:e6e8b911-e081-4c5e-9a8c-74062517b218,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1089fd032fa0385312c35fe313a75c6b7038c72716b35612fc4f2921c8f98e9\""
May 15 12:43:45.404496 kubelet[2728]: E0515 12:43:45.401291    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:45.405319 containerd[1571]: time="2025-05-15T12:43:45.405291407Z" level=info msg="CreateContainer within sandbox \"a1089fd032fa0385312c35fe313a75c6b7038c72716b35612fc4f2921c8f98e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 12:43:45.420294 containerd[1571]: time="2025-05-15T12:43:45.419572320Z" level=info msg="Container a09ccf901623aa12af92cca6972dbb4ee74f411009cec3d4d65d2e8ff6340804: CDI devices from CRI Config.CDIDevices: []"
May 15 12:43:45.427072 containerd[1571]: time="2025-05-15T12:43:45.426838557Z" level=info msg="CreateContainer within sandbox \"a1089fd032fa0385312c35fe313a75c6b7038c72716b35612fc4f2921c8f98e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a09ccf901623aa12af92cca6972dbb4ee74f411009cec3d4d65d2e8ff6340804\""
May 15 12:43:45.428202 containerd[1571]: time="2025-05-15T12:43:45.428098219Z" level=info msg="StartContainer for \"a09ccf901623aa12af92cca6972dbb4ee74f411009cec3d4d65d2e8ff6340804\""
May 15 12:43:45.428844 containerd[1571]: time="2025-05-15T12:43:45.428805931Z" level=info msg="connecting to shim a09ccf901623aa12af92cca6972dbb4ee74f411009cec3d4d65d2e8ff6340804" address="unix:///run/containerd/s/f7608023f63391466ebfcde90bb474b72f5802370df9613643ae7ef7a717d397" protocol=ttrpc version=3
May 15 12:43:45.456474 systemd[1]: Started cri-containerd-a09ccf901623aa12af92cca6972dbb4ee74f411009cec3d4d65d2e8ff6340804.scope - libcontainer container a09ccf901623aa12af92cca6972dbb4ee74f411009cec3d4d65d2e8ff6340804.
May 15 12:43:45.462015 containerd[1571]: time="2025-05-15T12:43:45.461776627Z" level=info msg="StartContainer for \"93922f32c8722685798f1d940a2ebc10d01371531ac7c7e5a566bec5ac1890c2\" returns successfully"
May 15 12:43:45.582713 containerd[1571]: time="2025-05-15T12:43:45.582670807Z" level=info msg="StartContainer for \"a09ccf901623aa12af92cca6972dbb4ee74f411009cec3d4d65d2e8ff6340804\" returns successfully"
May 15 12:43:45.755652 kubelet[2728]: E0515 12:43:45.755626    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:45.759725 kubelet[2728]: E0515 12:43:45.759466    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:43:45.790856 kubelet[2728]: I0515 12:43:45.790577    2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rtwk2" podStartSLOduration=23.790556556 podStartE2EDuration="23.790556556s" podCreationTimestamp="2025-05-15 12:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:43:45.772165833 +0000 UTC m=+28.985737290" watchObservedRunningTime="2025-05-15 12:43:45.790556556 +0000 UTC m=+29.004128013"
May 15 12:43:46.167721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3124579908.mount: Deactivated successfully.
May 15 12:43:46.762003 kubelet[2728]: E0515 12:43:46.761814 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:46.762003 kubelet[2728]: E0515 12:43:46.761932 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:47.766042 kubelet[2728]: E0515 12:43:47.763659 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:43:47.766042 kubelet[2728]: E0515 12:43:47.763702 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:44:25.926423 kubelet[2728]: E0515 12:44:25.926297 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:44:29.925335 kubelet[2728]: E0515 12:44:29.925291 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:44:31.926042 kubelet[2728]: E0515 12:44:31.926000 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:44:48.926212 kubelet[2728]: E0515 12:44:48.925726 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 
172.232.0.9" May 15 12:44:48.928670 kubelet[2728]: E0515 12:44:48.928650 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:44:50.926577 kubelet[2728]: E0515 12:44:50.925766 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:44:50.927282 kubelet[2728]: E0515 12:44:50.927229 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:45:12.583425 systemd[1]: Started sshd@7-172.234.214.203:22-139.178.89.65:35866.service - OpenSSH per-connection server daemon (139.178.89.65:35866). May 15 12:45:12.945021 sshd[4044]: Accepted publickey for core from 139.178.89.65 port 35866 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:12.947009 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:12.954234 systemd-logind[1545]: New session 8 of user core. May 15 12:45:12.964312 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 12:45:13.405883 sshd[4046]: Connection closed by 139.178.89.65 port 35866 May 15 12:45:13.406558 sshd-session[4044]: pam_unix(sshd:session): session closed for user core May 15 12:45:13.412065 systemd[1]: sshd@7-172.234.214.203:22-139.178.89.65:35866.service: Deactivated successfully. May 15 12:45:13.414165 systemd[1]: session-8.scope: Deactivated successfully. May 15 12:45:13.415625 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit. May 15 12:45:13.417722 systemd-logind[1545]: Removed session 8. 
May 15 12:45:16.925950 kubelet[2728]: E0515 12:45:16.925503 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:45:18.474713 systemd[1]: Started sshd@8-172.234.214.203:22-139.178.89.65:38140.service - OpenSSH per-connection server daemon (139.178.89.65:38140). May 15 12:45:18.824302 sshd[4061]: Accepted publickey for core from 139.178.89.65 port 38140 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:18.825959 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:18.831656 systemd-logind[1545]: New session 9 of user core. May 15 12:45:18.834330 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 12:45:19.151243 sshd[4063]: Connection closed by 139.178.89.65 port 38140 May 15 12:45:19.152423 sshd-session[4061]: pam_unix(sshd:session): session closed for user core May 15 12:45:19.157454 systemd[1]: sshd@8-172.234.214.203:22-139.178.89.65:38140.service: Deactivated successfully. May 15 12:45:19.159870 systemd[1]: session-9.scope: Deactivated successfully. May 15 12:45:19.162633 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit. May 15 12:45:19.164124 systemd-logind[1545]: Removed session 9. May 15 12:45:24.226915 systemd[1]: Started sshd@9-172.234.214.203:22-139.178.89.65:38146.service - OpenSSH per-connection server daemon (139.178.89.65:38146). May 15 12:45:24.577477 sshd[4078]: Accepted publickey for core from 139.178.89.65 port 38146 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:24.579367 sshd-session[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:24.585936 systemd-logind[1545]: New session 10 of user core. May 15 12:45:24.591432 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 15 12:45:24.889517 sshd[4080]: Connection closed by 139.178.89.65 port 38146 May 15 12:45:24.890327 sshd-session[4078]: pam_unix(sshd:session): session closed for user core May 15 12:45:24.894768 systemd[1]: sshd@9-172.234.214.203:22-139.178.89.65:38146.service: Deactivated successfully. May 15 12:45:24.897170 systemd[1]: session-10.scope: Deactivated successfully. May 15 12:45:24.899124 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit. May 15 12:45:24.901025 systemd-logind[1545]: Removed session 10. May 15 12:45:24.954281 systemd[1]: Started sshd@10-172.234.214.203:22-139.178.89.65:38152.service - OpenSSH per-connection server daemon (139.178.89.65:38152). May 15 12:45:25.312515 sshd[4093]: Accepted publickey for core from 139.178.89.65 port 38152 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:25.313940 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:25.319256 systemd-logind[1545]: New session 11 of user core. May 15 12:45:25.323307 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 12:45:25.688025 sshd[4095]: Connection closed by 139.178.89.65 port 38152 May 15 12:45:25.688629 sshd-session[4093]: pam_unix(sshd:session): session closed for user core May 15 12:45:25.693035 systemd[1]: sshd@10-172.234.214.203:22-139.178.89.65:38152.service: Deactivated successfully. May 15 12:45:25.695113 systemd[1]: session-11.scope: Deactivated successfully. May 15 12:45:25.698363 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit. May 15 12:45:25.699631 systemd-logind[1545]: Removed session 11. May 15 12:45:25.747582 systemd[1]: Started sshd@11-172.234.214.203:22-139.178.89.65:38156.service - OpenSSH per-connection server daemon (139.178.89.65:38156). 
May 15 12:45:26.091890 sshd[4105]: Accepted publickey for core from 139.178.89.65 port 38156 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:26.093542 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:26.098516 systemd-logind[1545]: New session 12 of user core. May 15 12:45:26.106316 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 12:45:26.422444 sshd[4107]: Connection closed by 139.178.89.65 port 38156 May 15 12:45:26.423448 sshd-session[4105]: pam_unix(sshd:session): session closed for user core May 15 12:45:26.427790 systemd[1]: sshd@11-172.234.214.203:22-139.178.89.65:38156.service: Deactivated successfully. May 15 12:45:26.430585 systemd[1]: session-12.scope: Deactivated successfully. May 15 12:45:26.431881 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit. May 15 12:45:26.433795 systemd-logind[1545]: Removed session 12. May 15 12:45:31.489020 systemd[1]: Started sshd@12-172.234.214.203:22-139.178.89.65:54818.service - OpenSSH per-connection server daemon (139.178.89.65:54818). May 15 12:45:31.839956 sshd[4119]: Accepted publickey for core from 139.178.89.65 port 54818 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:31.842018 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:31.847617 systemd-logind[1545]: New session 13 of user core. May 15 12:45:31.855345 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 12:45:32.141042 sshd[4121]: Connection closed by 139.178.89.65 port 54818 May 15 12:45:32.141879 sshd-session[4119]: pam_unix(sshd:session): session closed for user core May 15 12:45:32.145594 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit. May 15 12:45:32.146291 systemd[1]: sshd@12-172.234.214.203:22-139.178.89.65:54818.service: Deactivated successfully. 
May 15 12:45:32.148400 systemd[1]: session-13.scope: Deactivated successfully. May 15 12:45:32.150257 systemd-logind[1545]: Removed session 13. May 15 12:45:37.209273 systemd[1]: Started sshd@13-172.234.214.203:22-139.178.89.65:52238.service - OpenSSH per-connection server daemon (139.178.89.65:52238). May 15 12:45:37.569849 sshd[4132]: Accepted publickey for core from 139.178.89.65 port 52238 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:37.571736 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:37.577130 systemd-logind[1545]: New session 14 of user core. May 15 12:45:37.586335 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 12:45:37.880831 sshd[4134]: Connection closed by 139.178.89.65 port 52238 May 15 12:45:37.881639 sshd-session[4132]: pam_unix(sshd:session): session closed for user core May 15 12:45:37.885361 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit. May 15 12:45:37.886272 systemd[1]: sshd@13-172.234.214.203:22-139.178.89.65:52238.service: Deactivated successfully. May 15 12:45:37.888079 systemd[1]: session-14.scope: Deactivated successfully. May 15 12:45:37.889612 systemd-logind[1545]: Removed session 14. May 15 12:45:37.950270 systemd[1]: Started sshd@14-172.234.214.203:22-139.178.89.65:52244.service - OpenSSH per-connection server daemon (139.178.89.65:52244). May 15 12:45:38.291716 sshd[4146]: Accepted publickey for core from 139.178.89.65 port 52244 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:38.292997 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:38.298744 systemd-logind[1545]: New session 15 of user core. May 15 12:45:38.304326 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 15 12:45:38.669981 sshd[4148]: Connection closed by 139.178.89.65 port 52244 May 15 12:45:38.670818 sshd-session[4146]: pam_unix(sshd:session): session closed for user core May 15 12:45:38.675086 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit. May 15 12:45:38.675488 systemd[1]: sshd@14-172.234.214.203:22-139.178.89.65:52244.service: Deactivated successfully. May 15 12:45:38.677821 systemd[1]: session-15.scope: Deactivated successfully. May 15 12:45:38.679645 systemd-logind[1545]: Removed session 15. May 15 12:45:38.733462 systemd[1]: Started sshd@15-172.234.214.203:22-139.178.89.65:52256.service - OpenSSH per-connection server daemon (139.178.89.65:52256). May 15 12:45:39.064480 sshd[4158]: Accepted publickey for core from 139.178.89.65 port 52256 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:39.065915 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:39.070246 systemd-logind[1545]: New session 16 of user core. May 15 12:45:39.075340 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 12:45:40.018288 sshd[4160]: Connection closed by 139.178.89.65 port 52256 May 15 12:45:40.019224 sshd-session[4158]: pam_unix(sshd:session): session closed for user core May 15 12:45:40.025022 systemd-logind[1545]: Session 16 logged out. Waiting for processes to exit. May 15 12:45:40.027699 systemd[1]: sshd@15-172.234.214.203:22-139.178.89.65:52256.service: Deactivated successfully. May 15 12:45:40.030751 systemd[1]: session-16.scope: Deactivated successfully. May 15 12:45:40.035770 systemd-logind[1545]: Removed session 16. May 15 12:45:40.078613 systemd[1]: Started sshd@16-172.234.214.203:22-139.178.89.65:52266.service - OpenSSH per-connection server daemon (139.178.89.65:52266). 
May 15 12:45:40.420947 sshd[4177]: Accepted publickey for core from 139.178.89.65 port 52266 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:40.422093 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:40.427330 systemd-logind[1545]: New session 17 of user core. May 15 12:45:40.439522 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 12:45:40.829328 sshd[4179]: Connection closed by 139.178.89.65 port 52266 May 15 12:45:40.829905 sshd-session[4177]: pam_unix(sshd:session): session closed for user core May 15 12:45:40.833962 systemd[1]: sshd@16-172.234.214.203:22-139.178.89.65:52266.service: Deactivated successfully. May 15 12:45:40.836312 systemd[1]: session-17.scope: Deactivated successfully. May 15 12:45:40.837798 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit. May 15 12:45:40.839937 systemd-logind[1545]: Removed session 17. May 15 12:45:40.891939 systemd[1]: Started sshd@17-172.234.214.203:22-139.178.89.65:52274.service - OpenSSH per-connection server daemon (139.178.89.65:52274). May 15 12:45:41.234289 sshd[4189]: Accepted publickey for core from 139.178.89.65 port 52274 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:41.236866 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:41.243172 systemd-logind[1545]: New session 18 of user core. May 15 12:45:41.250319 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 12:45:41.536015 sshd[4192]: Connection closed by 139.178.89.65 port 52274 May 15 12:45:41.536586 sshd-session[4189]: pam_unix(sshd:session): session closed for user core May 15 12:45:41.540848 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit. May 15 12:45:41.541524 systemd[1]: sshd@17-172.234.214.203:22-139.178.89.65:52274.service: Deactivated successfully. 
May 15 12:45:41.544071 systemd[1]: session-18.scope: Deactivated successfully. May 15 12:45:41.545731 systemd-logind[1545]: Removed session 18. May 15 12:45:42.927209 kubelet[2728]: E0515 12:45:42.926537 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:45:46.597621 systemd[1]: Started sshd@18-172.234.214.203:22-139.178.89.65:35664.service - OpenSSH per-connection server daemon (139.178.89.65:35664). May 15 12:45:46.931917 sshd[4205]: Accepted publickey for core from 139.178.89.65 port 35664 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:46.934221 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:46.940000 systemd-logind[1545]: New session 19 of user core. May 15 12:45:46.948342 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 12:45:47.232648 sshd[4207]: Connection closed by 139.178.89.65 port 35664 May 15 12:45:47.233612 sshd-session[4205]: pam_unix(sshd:session): session closed for user core May 15 12:45:47.237557 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit. May 15 12:45:47.238270 systemd[1]: sshd@18-172.234.214.203:22-139.178.89.65:35664.service: Deactivated successfully. May 15 12:45:47.241081 systemd[1]: session-19.scope: Deactivated successfully. May 15 12:45:47.243584 systemd-logind[1545]: Removed session 19. May 15 12:45:49.925620 kubelet[2728]: E0515 12:45:49.925501 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:45:52.298597 systemd[1]: Started sshd@19-172.234.214.203:22-139.178.89.65:35678.service - OpenSSH per-connection server daemon (139.178.89.65:35678). 
May 15 12:45:52.655829 sshd[4219]: Accepted publickey for core from 139.178.89.65 port 35678 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:52.657456 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:52.662770 systemd-logind[1545]: New session 20 of user core. May 15 12:45:52.675566 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 12:45:52.926470 kubelet[2728]: E0515 12:45:52.925883 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:45:52.970550 sshd[4221]: Connection closed by 139.178.89.65 port 35678 May 15 12:45:52.971439 sshd-session[4219]: pam_unix(sshd:session): session closed for user core May 15 12:45:52.977323 systemd[1]: sshd@19-172.234.214.203:22-139.178.89.65:35678.service: Deactivated successfully. May 15 12:45:52.980256 systemd[1]: session-20.scope: Deactivated successfully. May 15 12:45:52.981306 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit. May 15 12:45:52.983875 systemd-logind[1545]: Removed session 20. May 15 12:45:55.925957 kubelet[2728]: E0515 12:45:55.925912 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 15 12:45:58.038566 systemd[1]: Started sshd@20-172.234.214.203:22-139.178.89.65:49494.service - OpenSSH per-connection server daemon (139.178.89.65:49494). May 15 12:45:58.395811 sshd[4235]: Accepted publickey for core from 139.178.89.65 port 49494 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:58.397314 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:58.401881 systemd-logind[1545]: New session 21 of user core. 
May 15 12:45:58.407344 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 12:45:58.704165 sshd[4237]: Connection closed by 139.178.89.65 port 49494 May 15 12:45:58.704499 sshd-session[4235]: pam_unix(sshd:session): session closed for user core May 15 12:45:58.709116 systemd[1]: sshd@20-172.234.214.203:22-139.178.89.65:49494.service: Deactivated successfully. May 15 12:45:58.711548 systemd[1]: session-21.scope: Deactivated successfully. May 15 12:45:58.712427 systemd-logind[1545]: Session 21 logged out. Waiting for processes to exit. May 15 12:45:58.714520 systemd-logind[1545]: Removed session 21. May 15 12:45:58.764713 systemd[1]: Started sshd@21-172.234.214.203:22-139.178.89.65:49500.service - OpenSSH per-connection server daemon (139.178.89.65:49500). May 15 12:45:59.104913 sshd[4249]: Accepted publickey for core from 139.178.89.65 port 49500 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:45:59.106380 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:45:59.111415 systemd-logind[1545]: New session 22 of user core. May 15 12:45:59.119310 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 15 12:46:00.570213 kubelet[2728]: I0515 12:46:00.569782 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c84pp" podStartSLOduration=158.569717906 podStartE2EDuration="2m38.569717906s" podCreationTimestamp="2025-05-15 12:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:43:45.809301829 +0000 UTC m=+29.022873286" watchObservedRunningTime="2025-05-15 12:46:00.569717906 +0000 UTC m=+163.783289363" May 15 12:46:00.593759 containerd[1571]: time="2025-05-15T12:46:00.593672022Z" level=info msg="StopContainer for \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\" with timeout 30 (s)" May 15 12:46:00.595388 containerd[1571]: time="2025-05-15T12:46:00.595364418Z" level=info msg="Stop container \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\" with signal terminated" May 15 12:46:00.625521 containerd[1571]: time="2025-05-15T12:46:00.620739479Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 12:46:00.658333 containerd[1571]: time="2025-05-15T12:46:00.658283587Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\" id:\"22719df3220ae358eda63df0e6710d149b39e880bb61ebe79d8deb96fef89de7\" pid:4271 exited_at:{seconds:1747313160 nanos:656839243}" May 15 12:46:00.661688 containerd[1571]: time="2025-05-15T12:46:00.661572728Z" level=info msg="StopContainer for \"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\" with timeout 2 (s)" May 15 12:46:00.662720 containerd[1571]: time="2025-05-15T12:46:00.662649032Z" level=info msg="Stop container 
\"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\" with signal terminated" May 15 12:46:00.667857 systemd[1]: cri-containerd-c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50.scope: Deactivated successfully. May 15 12:46:00.670139 containerd[1571]: time="2025-05-15T12:46:00.670104625Z" level=info msg="received exit event container_id:\"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\" id:\"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\" pid:3309 exited_at:{seconds:1747313160 nanos:669253922}" May 15 12:46:00.670524 containerd[1571]: time="2025-05-15T12:46:00.670357946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\" id:\"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\" pid:3309 exited_at:{seconds:1747313160 nanos:669253922}" May 15 12:46:00.679204 systemd-networkd[1466]: lxc_health: Link DOWN May 15 12:46:00.679377 systemd-networkd[1466]: lxc_health: Lost carrier May 15 12:46:00.694897 systemd[1]: cri-containerd-07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20.scope: Deactivated successfully. May 15 12:46:00.695559 systemd[1]: cri-containerd-07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20.scope: Consumed 7.881s CPU time, 130.4M memory peak, 152K read from disk, 13.3M written to disk. 
May 15 12:46:00.697160 containerd[1571]: time="2025-05-15T12:46:00.697125361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\" id:\"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\" pid:3378 exited_at:{seconds:1747313160 nanos:696498309}" May 15 12:46:00.697448 containerd[1571]: time="2025-05-15T12:46:00.697267341Z" level=info msg="received exit event container_id:\"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\" id:\"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\" pid:3378 exited_at:{seconds:1747313160 nanos:696498309}" May 15 12:46:00.715083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50-rootfs.mount: Deactivated successfully. May 15 12:46:00.731658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20-rootfs.mount: Deactivated successfully. 
May 15 12:46:00.740982 containerd[1571]: time="2025-05-15T12:46:00.740716789Z" level=info msg="StopContainer for \"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\" returns successfully" May 15 12:46:00.743827 containerd[1571]: time="2025-05-15T12:46:00.743776019Z" level=info msg="StopContainer for \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\" returns successfully" May 15 12:46:00.744239 containerd[1571]: time="2025-05-15T12:46:00.744072870Z" level=info msg="StopPodSandbox for \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\"" May 15 12:46:00.744382 containerd[1571]: time="2025-05-15T12:46:00.744358010Z" level=info msg="Container to stop \"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:46:00.744443 containerd[1571]: time="2025-05-15T12:46:00.744429611Z" level=info msg="Container to stop \"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:46:00.744489 containerd[1571]: time="2025-05-15T12:46:00.744478871Z" level=info msg="Container to stop \"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:46:00.744535 containerd[1571]: time="2025-05-15T12:46:00.744522581Z" level=info msg="Container to stop \"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:46:00.744578 containerd[1571]: time="2025-05-15T12:46:00.744567461Z" level=info msg="Container to stop \"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:46:00.745377 containerd[1571]: time="2025-05-15T12:46:00.745160843Z" level=info msg="StopPodSandbox for 
\"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\"" May 15 12:46:00.745377 containerd[1571]: time="2025-05-15T12:46:00.745220703Z" level=info msg="Container to stop \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:46:00.752789 systemd[1]: cri-containerd-069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939.scope: Deactivated successfully. May 15 12:46:00.758337 containerd[1571]: time="2025-05-15T12:46:00.758292945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" id:\"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" pid:2900 exit_status:137 exited_at:{seconds:1747313160 nanos:757900754}" May 15 12:46:00.759465 systemd[1]: cri-containerd-905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9.scope: Deactivated successfully. May 15 12:46:00.802172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9-rootfs.mount: Deactivated successfully. May 15 12:46:00.813207 containerd[1571]: time="2025-05-15T12:46:00.811490624Z" level=info msg="shim disconnected" id=905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9 namespace=k8s.io May 15 12:46:00.813207 containerd[1571]: time="2025-05-15T12:46:00.811931375Z" level=warning msg="cleaning up after shim disconnected" id=905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9 namespace=k8s.io May 15 12:46:00.813207 containerd[1571]: time="2025-05-15T12:46:00.811941455Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 12:46:00.813695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939-rootfs.mount: Deactivated successfully. 
May 15 12:46:00.819729 containerd[1571]: time="2025-05-15T12:46:00.819688000Z" level=info msg="shim disconnected" id=069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939 namespace=k8s.io May 15 12:46:00.819729 containerd[1571]: time="2025-05-15T12:46:00.819716370Z" level=warning msg="cleaning up after shim disconnected" id=069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939 namespace=k8s.io May 15 12:46:00.820043 containerd[1571]: time="2025-05-15T12:46:00.819724580Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 12:46:00.834266 containerd[1571]: time="2025-05-15T12:46:00.834067766Z" level=info msg="received exit event sandbox_id:\"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\" exit_status:137 exited_at:{seconds:1747313160 nanos:764545545}" May 15 12:46:00.837767 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9-shm.mount: Deactivated successfully. May 15 12:46:00.838829 containerd[1571]: time="2025-05-15T12:46:00.836453153Z" level=info msg="TearDown network for sandbox \"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\" successfully" May 15 12:46:00.839112 containerd[1571]: time="2025-05-15T12:46:00.838946681Z" level=info msg="StopPodSandbox for \"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\" returns successfully" May 15 12:46:00.845252 containerd[1571]: time="2025-05-15T12:46:00.845227691Z" level=info msg="received exit event sandbox_id:\"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" exit_status:137 exited_at:{seconds:1747313160 nanos:757900754}" May 15 12:46:00.845941 containerd[1571]: time="2025-05-15T12:46:00.845918953Z" level=info msg="TaskExit event in podsandbox handler container_id:\"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\" id:\"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\" pid:2926 exit_status:137 
exited_at:{seconds:1747313160 nanos:764545545}" May 15 12:46:00.846717 containerd[1571]: time="2025-05-15T12:46:00.846684636Z" level=info msg="TearDown network for sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" successfully" May 15 12:46:00.847207 containerd[1571]: time="2025-05-15T12:46:00.846713486Z" level=info msg="StopPodSandbox for \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" returns successfully" May 15 12:46:00.930277 kubelet[2728]: I0515 12:46:00.929334 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cilium-cgroup\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.930277 kubelet[2728]: I0515 12:46:00.929374 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-xtables-lock\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.930277 kubelet[2728]: I0515 12:46:00.929397 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cilium-config-path\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.930277 kubelet[2728]: I0515 12:46:00.929416 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lfff\" (UniqueName: \"kubernetes.io/projected/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-kube-api-access-7lfff\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.930277 kubelet[2728]: I0515 12:46:00.929436 2728 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-clustermesh-secrets\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.930277 kubelet[2728]: I0515 12:46:00.929457 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a0946246-280f-4b9d-b6f6-6ca9a197ea84-cilium-config-path\") pod \"a0946246-280f-4b9d-b6f6-6ca9a197ea84\" (UID: \"a0946246-280f-4b9d-b6f6-6ca9a197ea84\") " May 15 12:46:00.931484 kubelet[2728]: I0515 12:46:00.929472 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-host-proc-sys-net\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.931484 kubelet[2728]: I0515 12:46:00.929492 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-bpf-maps\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.931484 kubelet[2728]: I0515 12:46:00.929506 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cni-path\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.931484 kubelet[2728]: I0515 12:46:00.929525 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9dmp\" (UniqueName: \"kubernetes.io/projected/a0946246-280f-4b9d-b6f6-6ca9a197ea84-kube-api-access-q9dmp\") pod \"a0946246-280f-4b9d-b6f6-6ca9a197ea84\" (UID: 
\"a0946246-280f-4b9d-b6f6-6ca9a197ea84\") " May 15 12:46:00.931484 kubelet[2728]: I0515 12:46:00.929550 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-host-proc-sys-kernel\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.931484 kubelet[2728]: I0515 12:46:00.929572 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-hostproc\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.931610 kubelet[2728]: I0515 12:46:00.929596 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-hubble-tls\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.931610 kubelet[2728]: I0515 12:46:00.929615 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-lib-modules\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.931610 kubelet[2728]: I0515 12:46:00.929635 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-etc-cni-netd\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.931610 kubelet[2728]: I0515 12:46:00.929655 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cilium-run\") pod \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\" (UID: \"e6afcb57-1d8c-489b-9677-6ae0c469ccfa\") " May 15 12:46:00.931610 kubelet[2728]: I0515 12:46:00.929761 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:46:00.931610 kubelet[2728]: I0515 12:46:00.929821 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:46:00.931765 kubelet[2728]: I0515 12:46:00.929843 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:46:00.931765 kubelet[2728]: I0515 12:46:00.929899 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cni-path" (OuterVolumeSpecName: "cni-path") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:46:00.934697 kubelet[2728]: I0515 12:46:00.934532 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:46:00.934762 kubelet[2728]: I0515 12:46:00.934725 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-hostproc" (OuterVolumeSpecName: "hostproc") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:46:00.935873 kubelet[2728]: I0515 12:46:00.935828 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:46:00.935873 kubelet[2728]: I0515 12:46:00.935869 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:46:00.938686 kubelet[2728]: I0515 12:46:00.938653 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-kube-api-access-7lfff" (OuterVolumeSpecName: "kube-api-access-7lfff") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "kube-api-access-7lfff". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 12:46:00.938928 kubelet[2728]: I0515 12:46:00.938778 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:46:00.939329 kubelet[2728]: I0515 12:46:00.938864 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:46:00.940002 kubelet[2728]: I0515 12:46:00.939969 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 12:46:00.943485 kubelet[2728]: I0515 12:46:00.943217 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0946246-280f-4b9d-b6f6-6ca9a197ea84-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a0946246-280f-4b9d-b6f6-6ca9a197ea84" (UID: "a0946246-280f-4b9d-b6f6-6ca9a197ea84"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 12:46:00.943657 kubelet[2728]: I0515 12:46:00.943634 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0946246-280f-4b9d-b6f6-6ca9a197ea84-kube-api-access-q9dmp" (OuterVolumeSpecName: "kube-api-access-q9dmp") pod "a0946246-280f-4b9d-b6f6-6ca9a197ea84" (UID: "a0946246-280f-4b9d-b6f6-6ca9a197ea84"). InnerVolumeSpecName "kube-api-access-q9dmp". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 12:46:00.944512 kubelet[2728]: I0515 12:46:00.944490 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 12:46:00.945096 kubelet[2728]: I0515 12:46:00.945067 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e6afcb57-1d8c-489b-9677-6ae0c469ccfa" (UID: "e6afcb57-1d8c-489b-9677-6ae0c469ccfa"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 12:46:01.029966 kubelet[2728]: I0515 12:46:01.029920 2728 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cilium-cgroup\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.029966 kubelet[2728]: I0515 12:46:01.029948 2728 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-xtables-lock\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.029966 kubelet[2728]: I0515 12:46:01.029958 2728 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cilium-config-path\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.029966 kubelet[2728]: I0515 12:46:01.029969 2728 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7lfff\" (UniqueName: \"kubernetes.io/projected/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-kube-api-access-7lfff\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.029966 kubelet[2728]: I0515 12:46:01.029978 2728 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-clustermesh-secrets\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.029966 kubelet[2728]: I0515 12:46:01.029987 2728 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a0946246-280f-4b9d-b6f6-6ca9a197ea84-cilium-config-path\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.030264 kubelet[2728]: I0515 12:46:01.029996 2728 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-host-proc-sys-net\") on node 
\"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.030264 kubelet[2728]: I0515 12:46:01.030004 2728 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-bpf-maps\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.030264 kubelet[2728]: I0515 12:46:01.030011 2728 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cni-path\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.030264 kubelet[2728]: I0515 12:46:01.030020 2728 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q9dmp\" (UniqueName: \"kubernetes.io/projected/a0946246-280f-4b9d-b6f6-6ca9a197ea84-kube-api-access-q9dmp\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.030264 kubelet[2728]: I0515 12:46:01.030028 2728 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-host-proc-sys-kernel\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.030264 kubelet[2728]: I0515 12:46:01.030035 2728 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-hostproc\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.030264 kubelet[2728]: I0515 12:46:01.030043 2728 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-lib-modules\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.030264 kubelet[2728]: I0515 12:46:01.030052 2728 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-etc-cni-netd\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.030547 kubelet[2728]: I0515 
12:46:01.030060 2728 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-cilium-run\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.030547 kubelet[2728]: I0515 12:46:01.030071 2728 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6afcb57-1d8c-489b-9677-6ae0c469ccfa-hubble-tls\") on node \"172-234-214-203\" DevicePath \"\"" May 15 12:46:01.034202 kubelet[2728]: I0515 12:46:01.033238 2728 scope.go:117] "RemoveContainer" containerID="c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50" May 15 12:46:01.038063 containerd[1571]: time="2025-05-15T12:46:01.037918881Z" level=info msg="RemoveContainer for \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\"" May 15 12:46:01.044711 containerd[1571]: time="2025-05-15T12:46:01.044682143Z" level=info msg="RemoveContainer for \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\" returns successfully" May 15 12:46:01.045575 systemd[1]: Removed slice kubepods-besteffort-poda0946246_280f_4b9d_b6f6_6ca9a197ea84.slice - libcontainer container kubepods-besteffort-poda0946246_280f_4b9d_b6f6_6ca9a197ea84.slice. 
May 15 12:46:01.046386 kubelet[2728]: I0515 12:46:01.046353 2728 scope.go:117] "RemoveContainer" containerID="c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50" May 15 12:46:01.050477 containerd[1571]: time="2025-05-15T12:46:01.050419970Z" level=error msg="ContainerStatus for \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\": not found" May 15 12:46:01.050923 kubelet[2728]: E0515 12:46:01.050585 2728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\": not found" containerID="c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50" May 15 12:46:01.050923 kubelet[2728]: I0515 12:46:01.050635 2728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50"} err="failed to get container status \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\": rpc error: code = NotFound desc = an error occurred when try to find container \"c32188257e54de6413cbdf03b62cfe4414fb9c7b57cf3d514c6c57558ad29b50\": not found" May 15 12:46:01.050923 kubelet[2728]: I0515 12:46:01.050735 2728 scope.go:117] "RemoveContainer" containerID="07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20" May 15 12:46:01.052950 systemd[1]: Removed slice kubepods-burstable-pode6afcb57_1d8c_489b_9677_6ae0c469ccfa.slice - libcontainer container kubepods-burstable-pode6afcb57_1d8c_489b_9677_6ae0c469ccfa.slice. May 15 12:46:01.053039 systemd[1]: kubepods-burstable-pode6afcb57_1d8c_489b_9677_6ae0c469ccfa.slice: Consumed 8.024s CPU time, 130.8M memory peak, 152K read from disk, 13.3M written to disk. 
May 15 12:46:01.056601 containerd[1571]: time="2025-05-15T12:46:01.055917028Z" level=info msg="RemoveContainer for \"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\"" May 15 12:46:01.062048 containerd[1571]: time="2025-05-15T12:46:01.062010887Z" level=info msg="RemoveContainer for \"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\" returns successfully" May 15 12:46:01.062224 kubelet[2728]: I0515 12:46:01.062167 2728 scope.go:117] "RemoveContainer" containerID="e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07" May 15 12:46:01.064691 containerd[1571]: time="2025-05-15T12:46:01.064245974Z" level=info msg="RemoveContainer for \"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\"" May 15 12:46:01.068017 containerd[1571]: time="2025-05-15T12:46:01.067974856Z" level=info msg="RemoveContainer for \"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\" returns successfully" May 15 12:46:01.068729 kubelet[2728]: I0515 12:46:01.068710 2728 scope.go:117] "RemoveContainer" containerID="ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009" May 15 12:46:01.074983 containerd[1571]: time="2025-05-15T12:46:01.074793418Z" level=info msg="RemoveContainer for \"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\"" May 15 12:46:01.079934 containerd[1571]: time="2025-05-15T12:46:01.079900763Z" level=info msg="RemoveContainer for \"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\" returns successfully" May 15 12:46:01.080141 kubelet[2728]: I0515 12:46:01.080089 2728 scope.go:117] "RemoveContainer" containerID="ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150" May 15 12:46:01.081389 containerd[1571]: time="2025-05-15T12:46:01.081363528Z" level=info msg="RemoveContainer for \"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\"" May 15 12:46:01.088495 containerd[1571]: time="2025-05-15T12:46:01.087398787Z" level=info msg="RemoveContainer 
for \"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\" returns successfully" May 15 12:46:01.088606 kubelet[2728]: I0515 12:46:01.087597 2728 scope.go:117] "RemoveContainer" containerID="fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884" May 15 12:46:01.091447 containerd[1571]: time="2025-05-15T12:46:01.091424140Z" level=info msg="RemoveContainer for \"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\"" May 15 12:46:01.094891 containerd[1571]: time="2025-05-15T12:46:01.094808280Z" level=info msg="RemoveContainer for \"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\" returns successfully" May 15 12:46:01.096308 kubelet[2728]: I0515 12:46:01.096280 2728 scope.go:117] "RemoveContainer" containerID="07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20" May 15 12:46:01.096968 containerd[1571]: time="2025-05-15T12:46:01.096854847Z" level=error msg="ContainerStatus for \"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\": not found" May 15 12:46:01.097105 kubelet[2728]: E0515 12:46:01.096980 2728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\": not found" containerID="07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20" May 15 12:46:01.097105 kubelet[2728]: I0515 12:46:01.097035 2728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20"} err="failed to get container status \"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"07db95294a9285080ccd0abf553685d8015c2517afcf31c91543f583f11ffa20\": not found" May 15 12:46:01.097105 kubelet[2728]: I0515 12:46:01.097059 2728 scope.go:117] "RemoveContainer" containerID="e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07" May 15 12:46:01.097745 containerd[1571]: time="2025-05-15T12:46:01.097707099Z" level=error msg="ContainerStatus for \"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\": not found" May 15 12:46:01.099292 kubelet[2728]: E0515 12:46:01.099237 2728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\": not found" containerID="e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07" May 15 12:46:01.099373 kubelet[2728]: I0515 12:46:01.099294 2728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07"} err="failed to get container status \"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9c62ea1c8c6b3605e32ecdc4a0806dcfd3a9ed5273a2bcf21a610a7f35b2c07\": not found" May 15 12:46:01.099373 kubelet[2728]: I0515 12:46:01.099312 2728 scope.go:117] "RemoveContainer" containerID="ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009" May 15 12:46:01.100114 containerd[1571]: time="2025-05-15T12:46:01.100083177Z" level=error msg="ContainerStatus for \"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\": not found" May 15 12:46:01.100242 kubelet[2728]: E0515 12:46:01.100204 2728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\": not found" containerID="ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009" May 15 12:46:01.100242 kubelet[2728]: I0515 12:46:01.100222 2728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009"} err="failed to get container status \"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca593653652586da40d6d8dbe26eb698b9965ba9c470da5b001732bf07e59009\": not found" May 15 12:46:01.100242 kubelet[2728]: I0515 12:46:01.100236 2728 scope.go:117] "RemoveContainer" containerID="ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150" May 15 12:46:01.100688 containerd[1571]: time="2025-05-15T12:46:01.100656889Z" level=error msg="ContainerStatus for \"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\": not found" May 15 12:46:01.101343 kubelet[2728]: E0515 12:46:01.101309 2728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\": not found" containerID="ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150" May 15 12:46:01.101343 kubelet[2728]: I0515 12:46:01.101332 2728 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150"} err="failed to get container status \"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce24f25e1fcb072425d56ecca02525408fe43313378e0bf21ad922a5d2aab150\": not found" May 15 12:46:01.101343 kubelet[2728]: I0515 12:46:01.101346 2728 scope.go:117] "RemoveContainer" containerID="fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884" May 15 12:46:01.103710 containerd[1571]: time="2025-05-15T12:46:01.103600917Z" level=error msg="ContainerStatus for \"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\": not found" May 15 12:46:01.103802 kubelet[2728]: E0515 12:46:01.103774 2728 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\": not found" containerID="fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884" May 15 12:46:01.103882 kubelet[2728]: I0515 12:46:01.103795 2728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884"} err="failed to get container status \"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\": rpc error: code = NotFound desc = an error occurred when try to find container \"fbe04f777ed86d895c86fa865084cf87fc2d969c938acf6c95a852a533c0e884\": not found" May 15 12:46:01.715007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939-shm.mount: Deactivated successfully. 
May 15 12:46:01.715764 systemd[1]: var-lib-kubelet-pods-e6afcb57\x2d1d8c\x2d489b\x2d9677\x2d6ae0c469ccfa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7lfff.mount: Deactivated successfully. May 15 12:46:01.715848 systemd[1]: var-lib-kubelet-pods-a0946246\x2d280f\x2d4b9d\x2db6f6\x2d6ca9a197ea84-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq9dmp.mount: Deactivated successfully. May 15 12:46:01.715918 systemd[1]: var-lib-kubelet-pods-e6afcb57\x2d1d8c\x2d489b\x2d9677\x2d6ae0c469ccfa-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 12:46:01.715989 systemd[1]: var-lib-kubelet-pods-e6afcb57\x2d1d8c\x2d489b\x2d9677\x2d6ae0c469ccfa-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 12:46:02.161357 kubelet[2728]: E0515 12:46:02.161290 2728 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 12:46:02.590002 sshd[4251]: Connection closed by 139.178.89.65 port 49500 May 15 12:46:02.590363 sshd-session[4249]: pam_unix(sshd:session): session closed for user core May 15 12:46:02.594876 systemd[1]: sshd@21-172.234.214.203:22-139.178.89.65:49500.service: Deactivated successfully. May 15 12:46:02.596998 systemd[1]: session-22.scope: Deactivated successfully. May 15 12:46:02.600076 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit. May 15 12:46:02.601373 systemd-logind[1545]: Removed session 22. May 15 12:46:02.655499 systemd[1]: Started sshd@22-172.234.214.203:22-139.178.89.65:49510.service - OpenSSH per-connection server daemon (139.178.89.65:49510). 
May 15 12:46:02.927628 kubelet[2728]: I0515 12:46:02.927520 2728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0946246-280f-4b9d-b6f6-6ca9a197ea84" path="/var/lib/kubelet/pods/a0946246-280f-4b9d-b6f6-6ca9a197ea84/volumes" May 15 12:46:02.928135 kubelet[2728]: I0515 12:46:02.928093 2728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6afcb57-1d8c-489b-9677-6ae0c469ccfa" path="/var/lib/kubelet/pods/e6afcb57-1d8c-489b-9677-6ae0c469ccfa/volumes" May 15 12:46:03.004855 sshd[4403]: Accepted publickey for core from 139.178.89.65 port 49510 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk May 15 12:46:03.006424 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:46:03.012259 systemd-logind[1545]: New session 23 of user core. May 15 12:46:03.017320 systemd[1]: Started session-23.scope - Session 23 of User core. May 15 12:46:03.676427 kubelet[2728]: I0515 12:46:03.676365 2728 memory_manager.go:355] "RemoveStaleState removing state" podUID="e6afcb57-1d8c-489b-9677-6ae0c469ccfa" containerName="cilium-agent" May 15 12:46:03.676427 kubelet[2728]: I0515 12:46:03.676396 2728 memory_manager.go:355] "RemoveStaleState removing state" podUID="a0946246-280f-4b9d-b6f6-6ca9a197ea84" containerName="cilium-operator" May 15 12:46:03.688813 systemd[1]: Created slice kubepods-burstable-podfef49dad_d094_42d3_a7be_c88f837fdbde.slice - libcontainer container kubepods-burstable-podfef49dad_d094_42d3_a7be_c88f837fdbde.slice. May 15 12:46:03.705406 sshd[4405]: Connection closed by 139.178.89.65 port 49510 May 15 12:46:03.707087 sshd-session[4403]: pam_unix(sshd:session): session closed for user core May 15 12:46:03.714020 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit. May 15 12:46:03.716000 systemd[1]: sshd@22-172.234.214.203:22-139.178.89.65:49510.service: Deactivated successfully. 
May 15 12:46:03.719154 systemd[1]: session-23.scope: Deactivated successfully.
May 15 12:46:03.723456 systemd-logind[1545]: Removed session 23.
May 15 12:46:03.745917 kubelet[2728]: I0515 12:46:03.745861 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fef49dad-d094-42d3-a7be-c88f837fdbde-clustermesh-secrets\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746135 kubelet[2728]: I0515 12:46:03.746067 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fef49dad-d094-42d3-a7be-c88f837fdbde-host-proc-sys-net\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746135 kubelet[2728]: I0515 12:46:03.746131 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fef49dad-d094-42d3-a7be-c88f837fdbde-hostproc\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746135 kubelet[2728]: I0515 12:46:03.746151 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fef49dad-d094-42d3-a7be-c88f837fdbde-cilium-cgroup\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746435 kubelet[2728]: I0515 12:46:03.746173 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fef49dad-d094-42d3-a7be-c88f837fdbde-etc-cni-netd\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746435 kubelet[2728]: I0515 12:46:03.746215 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fef49dad-d094-42d3-a7be-c88f837fdbde-xtables-lock\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746435 kubelet[2728]: I0515 12:46:03.746231 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fef49dad-d094-42d3-a7be-c88f837fdbde-cilium-ipsec-secrets\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746435 kubelet[2728]: I0515 12:46:03.746248 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fef49dad-d094-42d3-a7be-c88f837fdbde-hubble-tls\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746435 kubelet[2728]: I0515 12:46:03.746264 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fef49dad-d094-42d3-a7be-c88f837fdbde-cilium-run\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746435 kubelet[2728]: I0515 12:46:03.746284 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fef49dad-d094-42d3-a7be-c88f837fdbde-bpf-maps\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746677 kubelet[2728]: I0515 12:46:03.746299 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5lps\" (UniqueName: \"kubernetes.io/projected/fef49dad-d094-42d3-a7be-c88f837fdbde-kube-api-access-m5lps\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746677 kubelet[2728]: I0515 12:46:03.746315 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fef49dad-d094-42d3-a7be-c88f837fdbde-cilium-config-path\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746677 kubelet[2728]: I0515 12:46:03.746337 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fef49dad-d094-42d3-a7be-c88f837fdbde-lib-modules\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746677 kubelet[2728]: I0515 12:46:03.746354 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fef49dad-d094-42d3-a7be-c88f837fdbde-cni-path\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.746677 kubelet[2728]: I0515 12:46:03.746371 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fef49dad-d094-42d3-a7be-c88f837fdbde-host-proc-sys-kernel\") pod \"cilium-7lpt9\" (UID: \"fef49dad-d094-42d3-a7be-c88f837fdbde\") " pod="kube-system/cilium-7lpt9"
May 15 12:46:03.775582 systemd[1]: Started sshd@23-172.234.214.203:22-139.178.89.65:49520.service - OpenSSH per-connection server daemon (139.178.89.65:49520).
May 15 12:46:03.995285 kubelet[2728]: E0515 12:46:03.995151 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:46:03.995969 containerd[1571]: time="2025-05-15T12:46:03.995931383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7lpt9,Uid:fef49dad-d094-42d3-a7be-c88f837fdbde,Namespace:kube-system,Attempt:0,}"
May 15 12:46:04.015989 containerd[1571]: time="2025-05-15T12:46:04.015732403Z" level=info msg="connecting to shim f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672" address="unix:///run/containerd/s/490c4693089ecd994141b0557304d937c1e777c1d4050a4786cf56cf56abb304" namespace=k8s.io protocol=ttrpc version=3
May 15 12:46:04.044309 systemd[1]: Started cri-containerd-f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672.scope - libcontainer container f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672.
May 15 12:46:04.074352 containerd[1571]: time="2025-05-15T12:46:04.074309891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7lpt9,Uid:fef49dad-d094-42d3-a7be-c88f837fdbde,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672\""
May 15 12:46:04.076504 kubelet[2728]: E0515 12:46:04.076419 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:46:04.079675 containerd[1571]: time="2025-05-15T12:46:04.078702564Z" level=info msg="CreateContainer within sandbox \"f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 12:46:04.085041 containerd[1571]: time="2025-05-15T12:46:04.085020224Z" level=info msg="Container 222e3b2a99d907b0cf5d51cfa18bcdf672660458b7c3cb9e77e86a37f5205003: CDI devices from CRI Config.CDIDevices: []"
May 15 12:46:04.089728 containerd[1571]: time="2025-05-15T12:46:04.089707298Z" level=info msg="CreateContainer within sandbox \"f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"222e3b2a99d907b0cf5d51cfa18bcdf672660458b7c3cb9e77e86a37f5205003\""
May 15 12:46:04.090141 containerd[1571]: time="2025-05-15T12:46:04.090103409Z" level=info msg="StartContainer for \"222e3b2a99d907b0cf5d51cfa18bcdf672660458b7c3cb9e77e86a37f5205003\""
May 15 12:46:04.091670 containerd[1571]: time="2025-05-15T12:46:04.091629774Z" level=info msg="connecting to shim 222e3b2a99d907b0cf5d51cfa18bcdf672660458b7c3cb9e77e86a37f5205003" address="unix:///run/containerd/s/490c4693089ecd994141b0557304d937c1e777c1d4050a4786cf56cf56abb304" protocol=ttrpc version=3
May 15 12:46:04.113304 systemd[1]: Started cri-containerd-222e3b2a99d907b0cf5d51cfa18bcdf672660458b7c3cb9e77e86a37f5205003.scope - libcontainer container 222e3b2a99d907b0cf5d51cfa18bcdf672660458b7c3cb9e77e86a37f5205003.
May 15 12:46:04.121172 sshd[4415]: Accepted publickey for core from 139.178.89.65 port 49520 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk
May 15 12:46:04.122986 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:46:04.130664 systemd-logind[1545]: New session 24 of user core.
May 15 12:46:04.133444 systemd[1]: Started session-24.scope - Session 24 of User core.
May 15 12:46:04.148935 containerd[1571]: time="2025-05-15T12:46:04.148851888Z" level=info msg="StartContainer for \"222e3b2a99d907b0cf5d51cfa18bcdf672660458b7c3cb9e77e86a37f5205003\" returns successfully"
May 15 12:46:04.158640 systemd[1]: cri-containerd-222e3b2a99d907b0cf5d51cfa18bcdf672660458b7c3cb9e77e86a37f5205003.scope: Deactivated successfully.
May 15 12:46:04.160490 containerd[1571]: time="2025-05-15T12:46:04.160454354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"222e3b2a99d907b0cf5d51cfa18bcdf672660458b7c3cb9e77e86a37f5205003\" id:\"222e3b2a99d907b0cf5d51cfa18bcdf672660458b7c3cb9e77e86a37f5205003\" pid:4478 exited_at:{seconds:1747313164 nanos:160094052}"
May 15 12:46:04.160552 containerd[1571]: time="2025-05-15T12:46:04.160513854Z" level=info msg="received exit event container_id:\"222e3b2a99d907b0cf5d51cfa18bcdf672660458b7c3cb9e77e86a37f5205003\" id:\"222e3b2a99d907b0cf5d51cfa18bcdf672660458b7c3cb9e77e86a37f5205003\" pid:4478 exited_at:{seconds:1747313164 nanos:160094052}"
May 15 12:46:04.360944 sshd[4485]: Connection closed by 139.178.89.65 port 49520
May 15 12:46:04.361921 sshd-session[4415]: pam_unix(sshd:session): session closed for user core
May 15 12:46:04.366930 systemd[1]: sshd@23-172.234.214.203:22-139.178.89.65:49520.service: Deactivated successfully.
May 15 12:46:04.366976 systemd-logind[1545]: Session 24 logged out. Waiting for processes to exit.
May 15 12:46:04.370109 systemd[1]: session-24.scope: Deactivated successfully.
May 15 12:46:04.373133 systemd-logind[1545]: Removed session 24.
May 15 12:46:04.426080 systemd[1]: Started sshd@24-172.234.214.203:22-139.178.89.65:49536.service - OpenSSH per-connection server daemon (139.178.89.65:49536).
May 15 12:46:04.773418 sshd[4516]: Accepted publickey for core from 139.178.89.65 port 49536 ssh2: RSA SHA256:gyeIyP7CTSF398gDeXUDBL3yfhdqSHwOrE2zyc7w3tk
May 15 12:46:04.775122 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:46:04.780814 systemd-logind[1545]: New session 25 of user core.
May 15 12:46:04.788350 systemd[1]: Started session-25.scope - Session 25 of User core.
May 15 12:46:05.057237 kubelet[2728]: E0515 12:46:05.057002 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:46:05.061960 containerd[1571]: time="2025-05-15T12:46:05.061763256Z" level=info msg="CreateContainer within sandbox \"f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 12:46:05.072202 containerd[1571]: time="2025-05-15T12:46:05.072146667Z" level=info msg="Container 9d395d0955ab8966eafe480c0c114b8e83798816fe7082365950763f8c7c4895: CDI devices from CRI Config.CDIDevices: []"
May 15 12:46:05.076115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2362654919.mount: Deactivated successfully.
May 15 12:46:05.080643 containerd[1571]: time="2025-05-15T12:46:05.080609442Z" level=info msg="CreateContainer within sandbox \"f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9d395d0955ab8966eafe480c0c114b8e83798816fe7082365950763f8c7c4895\""
May 15 12:46:05.081849 containerd[1571]: time="2025-05-15T12:46:05.081827846Z" level=info msg="StartContainer for \"9d395d0955ab8966eafe480c0c114b8e83798816fe7082365950763f8c7c4895\""
May 15 12:46:05.083777 containerd[1571]: time="2025-05-15T12:46:05.083731342Z" level=info msg="connecting to shim 9d395d0955ab8966eafe480c0c114b8e83798816fe7082365950763f8c7c4895" address="unix:///run/containerd/s/490c4693089ecd994141b0557304d937c1e777c1d4050a4786cf56cf56abb304" protocol=ttrpc version=3
May 15 12:46:05.109304 systemd[1]: Started cri-containerd-9d395d0955ab8966eafe480c0c114b8e83798816fe7082365950763f8c7c4895.scope - libcontainer container 9d395d0955ab8966eafe480c0c114b8e83798816fe7082365950763f8c7c4895.
May 15 12:46:05.142381 containerd[1571]: time="2025-05-15T12:46:05.142288658Z" level=info msg="StartContainer for \"9d395d0955ab8966eafe480c0c114b8e83798816fe7082365950763f8c7c4895\" returns successfully"
May 15 12:46:05.153529 systemd[1]: cri-containerd-9d395d0955ab8966eafe480c0c114b8e83798816fe7082365950763f8c7c4895.scope: Deactivated successfully.
May 15 12:46:05.154925 containerd[1571]: time="2025-05-15T12:46:05.154549056Z" level=info msg="received exit event container_id:\"9d395d0955ab8966eafe480c0c114b8e83798816fe7082365950763f8c7c4895\" id:\"9d395d0955ab8966eafe480c0c114b8e83798816fe7082365950763f8c7c4895\" pid:4538 exited_at:{seconds:1747313165 nanos:154165794}"
May 15 12:46:05.154998 containerd[1571]: time="2025-05-15T12:46:05.154563636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d395d0955ab8966eafe480c0c114b8e83798816fe7082365950763f8c7c4895\" id:\"9d395d0955ab8966eafe480c0c114b8e83798816fe7082365950763f8c7c4895\" pid:4538 exited_at:{seconds:1747313165 nanos:154165794}"
May 15 12:46:05.185812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d395d0955ab8966eafe480c0c114b8e83798816fe7082365950763f8c7c4895-rootfs.mount: Deactivated successfully.
May 15 12:46:06.060676 kubelet[2728]: E0515 12:46:06.060630 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:46:06.066211 containerd[1571]: time="2025-05-15T12:46:06.064058215Z" level=info msg="CreateContainer within sandbox \"f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 12:46:06.075129 containerd[1571]: time="2025-05-15T12:46:06.075090588Z" level=info msg="Container 94cd4f9912fcf2ca0c4fe0cdd957f6013630e2b04ab049fc88f9acad05f92fba: CDI devices from CRI Config.CDIDevices: []"
May 15 12:46:06.090202 containerd[1571]: time="2025-05-15T12:46:06.090143103Z" level=info msg="CreateContainer within sandbox \"f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"94cd4f9912fcf2ca0c4fe0cdd957f6013630e2b04ab049fc88f9acad05f92fba\""
May 15 12:46:06.093009 containerd[1571]: time="2025-05-15T12:46:06.092967471Z" level=info msg="StartContainer for \"94cd4f9912fcf2ca0c4fe0cdd957f6013630e2b04ab049fc88f9acad05f92fba\""
May 15 12:46:06.094717 containerd[1571]: time="2025-05-15T12:46:06.094685846Z" level=info msg="connecting to shim 94cd4f9912fcf2ca0c4fe0cdd957f6013630e2b04ab049fc88f9acad05f92fba" address="unix:///run/containerd/s/490c4693089ecd994141b0557304d937c1e777c1d4050a4786cf56cf56abb304" protocol=ttrpc version=3
May 15 12:46:06.142319 systemd[1]: Started cri-containerd-94cd4f9912fcf2ca0c4fe0cdd957f6013630e2b04ab049fc88f9acad05f92fba.scope - libcontainer container 94cd4f9912fcf2ca0c4fe0cdd957f6013630e2b04ab049fc88f9acad05f92fba.
May 15 12:46:06.185562 containerd[1571]: time="2025-05-15T12:46:06.185486998Z" level=info msg="StartContainer for \"94cd4f9912fcf2ca0c4fe0cdd957f6013630e2b04ab049fc88f9acad05f92fba\" returns successfully"
May 15 12:46:06.187688 systemd[1]: cri-containerd-94cd4f9912fcf2ca0c4fe0cdd957f6013630e2b04ab049fc88f9acad05f92fba.scope: Deactivated successfully.
May 15 12:46:06.193418 containerd[1571]: time="2025-05-15T12:46:06.193274420Z" level=info msg="received exit event container_id:\"94cd4f9912fcf2ca0c4fe0cdd957f6013630e2b04ab049fc88f9acad05f92fba\" id:\"94cd4f9912fcf2ca0c4fe0cdd957f6013630e2b04ab049fc88f9acad05f92fba\" pid:4581 exited_at:{seconds:1747313166 nanos:192811319}"
May 15 12:46:06.193500 containerd[1571]: time="2025-05-15T12:46:06.193460001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94cd4f9912fcf2ca0c4fe0cdd957f6013630e2b04ab049fc88f9acad05f92fba\" id:\"94cd4f9912fcf2ca0c4fe0cdd957f6013630e2b04ab049fc88f9acad05f92fba\" pid:4581 exited_at:{seconds:1747313166 nanos:192811319}"
May 15 12:46:06.216964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94cd4f9912fcf2ca0c4fe0cdd957f6013630e2b04ab049fc88f9acad05f92fba-rootfs.mount: Deactivated successfully.
May 15 12:46:07.066971 kubelet[2728]: E0515 12:46:07.065475 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:46:07.067913 containerd[1571]: time="2025-05-15T12:46:07.067874109Z" level=info msg="CreateContainer within sandbox \"f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 12:46:07.078209 containerd[1571]: time="2025-05-15T12:46:07.077560918Z" level=info msg="Container fa0dbc2bc56b594e6c0c32b659d78e16c2918636311789fe0d7e0a6184d9fe40: CDI devices from CRI Config.CDIDevices: []"
May 15 12:46:07.082495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1543812095.mount: Deactivated successfully.
May 15 12:46:07.086321 containerd[1571]: time="2025-05-15T12:46:07.086287984Z" level=info msg="CreateContainer within sandbox \"f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fa0dbc2bc56b594e6c0c32b659d78e16c2918636311789fe0d7e0a6184d9fe40\""
May 15 12:46:07.086890 containerd[1571]: time="2025-05-15T12:46:07.086869216Z" level=info msg="StartContainer for \"fa0dbc2bc56b594e6c0c32b659d78e16c2918636311789fe0d7e0a6184d9fe40\""
May 15 12:46:07.087959 containerd[1571]: time="2025-05-15T12:46:07.087932219Z" level=info msg="connecting to shim fa0dbc2bc56b594e6c0c32b659d78e16c2918636311789fe0d7e0a6184d9fe40" address="unix:///run/containerd/s/490c4693089ecd994141b0557304d937c1e777c1d4050a4786cf56cf56abb304" protocol=ttrpc version=3
May 15 12:46:07.114354 systemd[1]: Started cri-containerd-fa0dbc2bc56b594e6c0c32b659d78e16c2918636311789fe0d7e0a6184d9fe40.scope - libcontainer container fa0dbc2bc56b594e6c0c32b659d78e16c2918636311789fe0d7e0a6184d9fe40.
May 15 12:46:07.146472 systemd[1]: cri-containerd-fa0dbc2bc56b594e6c0c32b659d78e16c2918636311789fe0d7e0a6184d9fe40.scope: Deactivated successfully.
May 15 12:46:07.148921 containerd[1571]: time="2025-05-15T12:46:07.148846609Z" level=info msg="received exit event container_id:\"fa0dbc2bc56b594e6c0c32b659d78e16c2918636311789fe0d7e0a6184d9fe40\" id:\"fa0dbc2bc56b594e6c0c32b659d78e16c2918636311789fe0d7e0a6184d9fe40\" pid:4621 exited_at:{seconds:1747313167 nanos:148680528}"
May 15 12:46:07.148921 containerd[1571]: time="2025-05-15T12:46:07.148870759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa0dbc2bc56b594e6c0c32b659d78e16c2918636311789fe0d7e0a6184d9fe40\" id:\"fa0dbc2bc56b594e6c0c32b659d78e16c2918636311789fe0d7e0a6184d9fe40\" pid:4621 exited_at:{seconds:1747313167 nanos:148680528}"
May 15 12:46:07.149584 containerd[1571]: time="2025-05-15T12:46:07.149549371Z" level=info msg="StartContainer for \"fa0dbc2bc56b594e6c0c32b659d78e16c2918636311789fe0d7e0a6184d9fe40\" returns successfully"
May 15 12:46:07.162235 kubelet[2728]: E0515 12:46:07.162145 2728 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 12:46:07.174029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa0dbc2bc56b594e6c0c32b659d78e16c2918636311789fe0d7e0a6184d9fe40-rootfs.mount: Deactivated successfully.
May 15 12:46:08.070951 kubelet[2728]: E0515 12:46:08.070908 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:46:08.073113 containerd[1571]: time="2025-05-15T12:46:08.073049620Z" level=info msg="CreateContainer within sandbox \"f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 12:46:08.087956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1051989803.mount: Deactivated successfully.
May 15 12:46:08.088830 containerd[1571]: time="2025-05-15T12:46:08.088803186Z" level=info msg="Container 58f1b06c3ac169567f92fbe8d546186c035a81e430eccc80c5120349f7ad4629: CDI devices from CRI Config.CDIDevices: []"
May 15 12:46:08.098013 containerd[1571]: time="2025-05-15T12:46:08.096560729Z" level=info msg="CreateContainer within sandbox \"f8388871724a5af7e52930b45c7940b7a46702f9d153f62d9b74503ad9abe672\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"58f1b06c3ac169567f92fbe8d546186c035a81e430eccc80c5120349f7ad4629\""
May 15 12:46:08.098617 containerd[1571]: time="2025-05-15T12:46:08.098594495Z" level=info msg="StartContainer for \"58f1b06c3ac169567f92fbe8d546186c035a81e430eccc80c5120349f7ad4629\""
May 15 12:46:08.099923 containerd[1571]: time="2025-05-15T12:46:08.099875958Z" level=info msg="connecting to shim 58f1b06c3ac169567f92fbe8d546186c035a81e430eccc80c5120349f7ad4629" address="unix:///run/containerd/s/490c4693089ecd994141b0557304d937c1e777c1d4050a4786cf56cf56abb304" protocol=ttrpc version=3
May 15 12:46:08.124649 systemd[1]: Started cri-containerd-58f1b06c3ac169567f92fbe8d546186c035a81e430eccc80c5120349f7ad4629.scope - libcontainer container 58f1b06c3ac169567f92fbe8d546186c035a81e430eccc80c5120349f7ad4629.
May 15 12:46:08.165487 containerd[1571]: time="2025-05-15T12:46:08.165437581Z" level=info msg="StartContainer for \"58f1b06c3ac169567f92fbe8d546186c035a81e430eccc80c5120349f7ad4629\" returns successfully"
May 15 12:46:08.238274 containerd[1571]: time="2025-05-15T12:46:08.238011423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58f1b06c3ac169567f92fbe8d546186c035a81e430eccc80c5120349f7ad4629\" id:\"38cf4f7a1c25379b62e6b30c8a87d27551b7ef1fc4580d704a3d53346b7c6fbd\" pid:4687 exited_at:{seconds:1747313168 nanos:237739662}"
May 15 12:46:08.653228 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 15 12:46:08.926064 kubelet[2728]: E0515 12:46:08.925449 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:46:09.085469 kubelet[2728]: E0515 12:46:09.085416 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:46:10.089406 kubelet[2728]: E0515 12:46:10.089321 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:46:11.218217 containerd[1571]: time="2025-05-15T12:46:11.218095899Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58f1b06c3ac169567f92fbe8d546186c035a81e430eccc80c5120349f7ad4629\" id:\"c04d880b80d290d0af55681f4bffe5a12c361ff60c068479dab206019528ed52\" pid:5077 exit_status:1 exited_at:{seconds:1747313171 nanos:216688006}"
May 15 12:46:11.276607 kubelet[2728]: I0515 12:46:11.275053 2728 setters.go:602] "Node became not ready" node="172-234-214-203" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T12:46:11Z","lastTransitionTime":"2025-05-15T12:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 12:46:11.576266 systemd-networkd[1466]: lxc_health: Link UP
May 15 12:46:11.596355 systemd-networkd[1466]: lxc_health: Gained carrier
May 15 12:46:11.997269 kubelet[2728]: E0515 12:46:11.997112 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:46:12.022075 kubelet[2728]: I0515 12:46:12.022016 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7lpt9" podStartSLOduration=9.022001903 podStartE2EDuration="9.022001903s" podCreationTimestamp="2025-05-15 12:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:46:09.101480731 +0000 UTC m=+172.315052188" watchObservedRunningTime="2025-05-15 12:46:12.022001903 +0000 UTC m=+175.235573360"
May 15 12:46:12.093023 kubelet[2728]: E0515 12:46:12.092985 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:46:13.066133 systemd-networkd[1466]: lxc_health: Gained IPv6LL
May 15 12:46:13.095127 kubelet[2728]: E0515 12:46:13.095103 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:46:13.370205 containerd[1571]: time="2025-05-15T12:46:13.370058918Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58f1b06c3ac169567f92fbe8d546186c035a81e430eccc80c5120349f7ad4629\" id:\"1072e8e5ca0a83a113e18f8e0b52f1dfb1e2729879b7137fe7beb2b83813ee90\" pid:5213 exited_at:{seconds:1747313173 nanos:369608647}"
May 15 12:46:15.468899 containerd[1571]: time="2025-05-15T12:46:15.468855714Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58f1b06c3ac169567f92fbe8d546186c035a81e430eccc80c5120349f7ad4629\" id:\"091164e5ae1719581183da47f02cdee8d7cbf3ec5911222747b71f3601330ade\" pid:5245 exited_at:{seconds:1747313175 nanos:466386257}"
May 15 12:46:16.938588 containerd[1571]: time="2025-05-15T12:46:16.938555143Z" level=info msg="StopPodSandbox for \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\""
May 15 12:46:16.939084 containerd[1571]: time="2025-05-15T12:46:16.939060464Z" level=info msg="TearDown network for sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" successfully"
May 15 12:46:16.939084 containerd[1571]: time="2025-05-15T12:46:16.939080914Z" level=info msg="StopPodSandbox for \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" returns successfully"
May 15 12:46:16.939667 containerd[1571]: time="2025-05-15T12:46:16.939646635Z" level=info msg="RemovePodSandbox for \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\""
May 15 12:46:16.939748 containerd[1571]: time="2025-05-15T12:46:16.939729286Z" level=info msg="Forcibly stopping sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\""
May 15 12:46:16.939815 containerd[1571]: time="2025-05-15T12:46:16.939792886Z" level=info msg="TearDown network for sandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" successfully"
May 15 12:46:16.942200 containerd[1571]: time="2025-05-15T12:46:16.942132793Z" level=info msg="Ensure that sandbox 069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939 in task-service has been cleanup successfully"
May 15 12:46:16.945355 containerd[1571]: time="2025-05-15T12:46:16.945333761Z" level=info msg="RemovePodSandbox \"069c83d21a4bcbd2fff59d854881e3b13f216ba54e9ba24ddb82cb115d6f7939\" returns successfully"
May 15 12:46:16.946668 containerd[1571]: time="2025-05-15T12:46:16.946633405Z" level=info msg="StopPodSandbox for \"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\""
May 15 12:46:16.946762 containerd[1571]: time="2025-05-15T12:46:16.946745225Z" level=info msg="TearDown network for sandbox \"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\" successfully"
May 15 12:46:16.946762 containerd[1571]: time="2025-05-15T12:46:16.946760605Z" level=info msg="StopPodSandbox for \"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\" returns successfully"
May 15 12:46:16.947300 containerd[1571]: time="2025-05-15T12:46:16.947277326Z" level=info msg="RemovePodSandbox for \"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\""
May 15 12:46:16.947345 containerd[1571]: time="2025-05-15T12:46:16.947303197Z" level=info msg="Forcibly stopping sandbox \"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\""
May 15 12:46:16.947446 containerd[1571]: time="2025-05-15T12:46:16.947422787Z" level=info msg="TearDown network for sandbox \"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\" successfully"
May 15 12:46:16.948922 containerd[1571]: time="2025-05-15T12:46:16.948893191Z" level=info msg="Ensure that sandbox 905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9 in task-service has been cleanup successfully"
May 15 12:46:16.950919 containerd[1571]: time="2025-05-15T12:46:16.950896257Z" level=info msg="RemovePodSandbox \"905da6a79d8c5ff4f80da71537f3bfc8e3f72619dd5bcf7b969c4f493b787fe9\" returns successfully"
May 15 12:46:17.596496 containerd[1571]: time="2025-05-15T12:46:17.596458823Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58f1b06c3ac169567f92fbe8d546186c035a81e430eccc80c5120349f7ad4629\" id:\"a0e9d28a58ec5240d41154dc8d0614e887104592e1350140f8d80a08368b3eb6\" pid:5271 exited_at:{seconds:1747313177 nanos:595653971}"
May 15 12:46:17.925943 kubelet[2728]: E0515 12:46:17.925834 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 15 12:46:19.704805 containerd[1571]: time="2025-05-15T12:46:19.704748570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58f1b06c3ac169567f92fbe8d546186c035a81e430eccc80c5120349f7ad4629\" id:\"bc17375efc8188ae3e6f7ad2e2c067eadfeddc9326986098dddf68e7034b5de5\" pid:5295 exited_at:{seconds:1747313179 nanos:703587177}"
May 15 12:46:19.760330 sshd[4518]: Connection closed by 139.178.89.65 port 49536
May 15 12:46:19.761066 sshd-session[4516]: pam_unix(sshd:session): session closed for user core
May 15 12:46:19.766031 systemd[1]: sshd@24-172.234.214.203:22-139.178.89.65:49536.service: Deactivated successfully.
May 15 12:46:19.768702 systemd[1]: session-25.scope: Deactivated successfully.
May 15 12:46:19.769723 systemd-logind[1545]: Session 25 logged out. Waiting for processes to exit.
May 15 12:46:19.772345 systemd-logind[1545]: Removed session 25.
May 15 12:46:24.926975 kubelet[2728]: E0515 12:46:24.926893 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"