Sep 6 00:19:49.991787 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025 Sep 6 00:19:49.991819 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:19:49.991838 kernel: BIOS-provided physical RAM map: Sep 6 00:19:49.991850 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 6 00:19:49.991860 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Sep 6 00:19:49.991871 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 6 00:19:49.991885 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 6 00:19:49.991897 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 6 00:19:49.991912 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 6 00:19:49.991923 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 6 00:19:49.991935 kernel: NX (Execute Disable) protection: active Sep 6 00:19:49.991947 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable Sep 6 00:19:49.991959 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable Sep 6 00:19:49.991971 kernel: extended physical RAM map: Sep 6 00:19:49.991989 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 6 00:19:49.992002 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable Sep 6 00:19:49.992015 kernel: reserve setup_data: [mem 
0x0000000076813018-0x000000007681be57] usable Sep 6 00:19:49.992028 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable Sep 6 00:19:49.992040 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 6 00:19:49.992053 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 6 00:19:49.992066 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 6 00:19:49.992078 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 6 00:19:49.992091 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 6 00:19:49.992103 kernel: efi: EFI v2.70 by EDK II Sep 6 00:19:49.992119 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77004a98 Sep 6 00:19:49.992132 kernel: SMBIOS 2.7 present. Sep 6 00:19:49.992145 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Sep 6 00:19:49.992157 kernel: Hypervisor detected: KVM Sep 6 00:19:49.992170 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 6 00:19:49.992182 kernel: kvm-clock: cpu 0, msr 2c19f001, primary cpu clock Sep 6 00:19:49.992195 kernel: kvm-clock: using sched offset of 4145434097 cycles Sep 6 00:19:49.992208 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 6 00:19:49.992221 kernel: tsc: Detected 2499.998 MHz processor Sep 6 00:19:49.992234 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 6 00:19:49.992247 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 6 00:19:49.992262 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Sep 6 00:19:49.992275 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 6 00:19:49.992288 kernel: Using GB pages for direct mapping Sep 6 00:19:49.992301 kernel: Secure boot disabled Sep 6 00:19:49.992314 kernel: ACPI: Early table checksum verification disabled Sep 6 00:19:49.992332 kernel: 
ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Sep 6 00:19:49.992346 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Sep 6 00:19:49.992363 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 6 00:19:49.992377 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Sep 6 00:19:49.992390 kernel: ACPI: FACS 0x00000000789D0000 000040 Sep 6 00:19:49.992405 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Sep 6 00:19:49.992419 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 6 00:19:49.992433 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 6 00:19:49.992446 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Sep 6 00:19:49.992463 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Sep 6 00:19:49.992477 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 6 00:19:49.992491 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 6 00:19:49.992505 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Sep 6 00:19:49.992519 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Sep 6 00:19:49.992533 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Sep 6 00:19:49.995427 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Sep 6 00:19:49.995445 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Sep 6 00:19:49.995459 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Sep 6 00:19:49.995481 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Sep 6 00:19:49.995494 kernel: ACPI: Reserving SRAT table memory at [mem 
0x78958000-0x7895809f] Sep 6 00:19:49.995507 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Sep 6 00:19:49.995521 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Sep 6 00:19:49.995536 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Sep 6 00:19:49.995565 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Sep 6 00:19:49.995581 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 6 00:19:49.995595 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 6 00:19:49.995610 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Sep 6 00:19:49.995628 kernel: NUMA: Initialized distance table, cnt=1 Sep 6 00:19:49.995643 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] Sep 6 00:19:49.995658 kernel: Zone ranges: Sep 6 00:19:49.995674 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 6 00:19:49.995687 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Sep 6 00:19:49.995698 kernel: Normal empty Sep 6 00:19:49.995710 kernel: Movable zone start for each node Sep 6 00:19:49.995720 kernel: Early memory node ranges Sep 6 00:19:49.995732 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 6 00:19:49.995747 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Sep 6 00:19:49.995759 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Sep 6 00:19:49.995771 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Sep 6 00:19:49.995783 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 6 00:19:49.995796 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 6 00:19:49.995809 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 6 00:19:49.995823 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Sep 6 00:19:49.995837 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 6 00:19:49.995850 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 6 00:19:49.995867 kernel: IOAPIC[0]: apic_id 0, 
version 32, address 0xfec00000, GSI 0-23 Sep 6 00:19:49.995881 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 6 00:19:49.995895 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 6 00:19:49.995909 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 6 00:19:49.995923 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 6 00:19:49.995936 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 6 00:19:49.995951 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 6 00:19:49.995963 kernel: TSC deadline timer available Sep 6 00:19:49.995975 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 6 00:19:49.995990 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Sep 6 00:19:49.996002 kernel: Booting paravirtualized kernel on KVM Sep 6 00:19:49.996017 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 6 00:19:49.996031 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Sep 6 00:19:49.996045 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Sep 6 00:19:49.996059 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Sep 6 00:19:49.996072 kernel: pcpu-alloc: [0] 0 1 Sep 6 00:19:49.996083 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0 Sep 6 00:19:49.996095 kernel: kvm-guest: PV spinlocks enabled Sep 6 00:19:49.996109 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 6 00:19:49.996120 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 Sep 6 00:19:49.996132 kernel: Policy zone: DMA32 Sep 6 00:19:49.996150 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:19:49.996164 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 00:19:49.996176 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 00:19:49.996189 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 6 00:19:49.996204 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 00:19:49.996221 kernel: Memory: 1876640K/2037804K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 160904K reserved, 0K cma-reserved) Sep 6 00:19:49.996236 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 6 00:19:49.996251 kernel: Kernel/User page tables isolation: enabled Sep 6 00:19:49.996264 kernel: ftrace: allocating 34612 entries in 136 pages Sep 6 00:19:49.996279 kernel: ftrace: allocated 136 pages with 2 groups Sep 6 00:19:49.996293 kernel: rcu: Hierarchical RCU implementation. Sep 6 00:19:49.996309 kernel: rcu: RCU event tracing is enabled. Sep 6 00:19:49.996338 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 6 00:19:49.996353 kernel: Rude variant of Tasks RCU enabled. Sep 6 00:19:49.996368 kernel: Tracing variant of Tasks RCU enabled. Sep 6 00:19:49.996384 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 6 00:19:49.996399 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 6 00:19:49.996416 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 6 00:19:49.996431 kernel: random: crng init done Sep 6 00:19:49.996446 kernel: Console: colour dummy device 80x25 Sep 6 00:19:49.996460 kernel: printk: console [tty0] enabled Sep 6 00:19:49.996475 kernel: printk: console [ttyS0] enabled Sep 6 00:19:49.996490 kernel: ACPI: Core revision 20210730 Sep 6 00:19:49.996506 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Sep 6 00:19:49.996523 kernel: APIC: Switch to symmetric I/O mode setup Sep 6 00:19:49.996569 kernel: x2apic enabled Sep 6 00:19:49.996582 kernel: Switched APIC routing to physical x2apic. Sep 6 00:19:49.996595 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Sep 6 00:19:49.996608 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Sep 6 00:19:49.996624 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 6 00:19:49.996638 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 6 00:19:49.996657 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 6 00:19:49.996671 kernel: Spectre V2 : Mitigation: Retpolines Sep 6 00:19:49.996686 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 6 00:19:49.996700 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 6 00:19:49.996715 kernel: RETBleed: Vulnerable Sep 6 00:19:49.996730 kernel: Speculative Store Bypass: Vulnerable Sep 6 00:19:49.996744 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Sep 6 00:19:49.996759 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 6 00:19:49.996773 kernel: GDS: Unknown: Dependent on hypervisor status Sep 6 00:19:49.996788 kernel: active return thunk: its_return_thunk Sep 6 00:19:49.996802 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 6 00:19:49.996820 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 6 00:19:49.996835 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 6 00:19:49.996850 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 6 00:19:49.996865 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 6 00:19:49.996879 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 6 00:19:49.996894 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 6 00:19:49.996909 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 6 00:19:49.996924 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 6 00:19:49.996939 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Sep 6 00:19:49.996953 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 6 00:19:49.996968 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 6 00:19:49.996986 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 6 00:19:49.997000 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Sep 6 00:19:49.997015 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Sep 6 00:19:49.997030 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Sep 6 00:19:49.997044 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Sep 6 00:19:49.997059 kernel: x86/fpu: Enabled xstate features 0x2ff, 
context size is 2568 bytes, using 'compacted' format. Sep 6 00:19:49.997074 kernel: Freeing SMP alternatives memory: 32K Sep 6 00:19:49.997089 kernel: pid_max: default: 32768 minimum: 301 Sep 6 00:19:49.997104 kernel: LSM: Security Framework initializing Sep 6 00:19:49.997118 kernel: SELinux: Initializing. Sep 6 00:19:49.997134 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 6 00:19:49.997151 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 6 00:19:49.997166 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 6 00:19:49.997181 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 6 00:19:49.997197 kernel: signal: max sigframe size: 3632 Sep 6 00:19:49.997212 kernel: rcu: Hierarchical SRCU implementation. Sep 6 00:19:49.997227 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 6 00:19:49.997242 kernel: smp: Bringing up secondary CPUs ... Sep 6 00:19:49.997256 kernel: x86: Booting SMP configuration: Sep 6 00:19:49.997271 kernel: .... node #0, CPUs: #1 Sep 6 00:19:49.997286 kernel: kvm-clock: cpu 1, msr 2c19f041, secondary cpu clock Sep 6 00:19:49.997304 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0 Sep 6 00:19:49.997319 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 6 00:19:49.997335 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 6 00:19:49.997350 kernel: smp: Brought up 1 node, 2 CPUs Sep 6 00:19:49.997365 kernel: smpboot: Max logical packages: 1 Sep 6 00:19:49.997380 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Sep 6 00:19:49.997395 kernel: devtmpfs: initialized Sep 6 00:19:49.997410 kernel: x86/mm: Memory block size: 128MB Sep 6 00:19:49.997427 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Sep 6 00:19:49.997443 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 00:19:49.997458 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 6 00:19:49.997472 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 00:19:49.997487 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 00:19:49.997502 kernel: audit: initializing netlink subsys (disabled) Sep 6 00:19:49.997516 kernel: audit: type=2000 audit(1757117990.184:1): state=initialized audit_enabled=0 res=1 Sep 6 00:19:49.997529 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 00:19:49.997558 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 6 00:19:49.997577 kernel: cpuidle: using governor menu Sep 6 00:19:49.997591 kernel: ACPI: bus type PCI registered Sep 6 00:19:49.997606 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 00:19:49.997620 kernel: dca service started, version 1.12.1 Sep 6 00:19:49.997635 kernel: PCI: Using configuration type 1 for base access Sep 6 00:19:49.997650 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 6 00:19:49.997663 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 00:19:49.997678 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 00:19:49.997693 kernel: ACPI: Added _OSI(Module Device) Sep 6 00:19:49.997711 kernel: ACPI: Added _OSI(Processor Device) Sep 6 00:19:49.997725 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 00:19:49.997737 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 00:19:49.997750 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 00:19:49.997763 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 00:19:49.997776 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 6 00:19:49.997791 kernel: ACPI: Interpreter enabled Sep 6 00:19:49.997808 kernel: ACPI: PM: (supports S0 S5) Sep 6 00:19:49.997825 kernel: ACPI: Using IOAPIC for interrupt routing Sep 6 00:19:49.997847 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 6 00:19:49.997861 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 6 00:19:49.997874 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 6 00:19:49.998087 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 6 00:19:49.998217 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Sep 6 00:19:49.998235 kernel: acpiphp: Slot [3] registered Sep 6 00:19:49.998249 kernel: acpiphp: Slot [4] registered Sep 6 00:19:49.998263 kernel: acpiphp: Slot [5] registered Sep 6 00:19:49.998280 kernel: acpiphp: Slot [6] registered Sep 6 00:19:49.998294 kernel: acpiphp: Slot [7] registered Sep 6 00:19:49.998308 kernel: acpiphp: Slot [8] registered Sep 6 00:19:49.998322 kernel: acpiphp: Slot [9] registered Sep 6 00:19:49.998336 kernel: acpiphp: Slot [10] registered Sep 6 00:19:49.998361 kernel: acpiphp: Slot [11] registered Sep 6 00:19:49.998375 kernel: acpiphp: Slot [12] registered Sep 6 00:19:49.998389 kernel: acpiphp: Slot [13] registered Sep 6 00:19:49.998403 kernel: acpiphp: Slot [14] registered Sep 6 00:19:49.998420 kernel: acpiphp: Slot [15] registered Sep 6 00:19:49.998434 kernel: acpiphp: Slot [16] registered Sep 6 00:19:49.998448 kernel: acpiphp: Slot [17] registered Sep 6 00:19:49.998462 kernel: acpiphp: Slot [18] registered Sep 6 00:19:49.998476 kernel: acpiphp: Slot [19] registered Sep 6 00:19:49.998490 kernel: acpiphp: Slot [20] registered Sep 6 00:19:49.998504 kernel: acpiphp: Slot [21] registered Sep 6 00:19:49.998518 kernel: acpiphp: Slot [22] registered Sep 6 00:19:49.998532 kernel: acpiphp: Slot [23] registered Sep 6 00:19:49.998560 kernel: acpiphp: Slot [24] registered Sep 6 00:19:49.998577 kernel: acpiphp: Slot [25] registered Sep 6 00:19:49.998591 kernel: acpiphp: Slot [26] registered Sep 6 00:19:49.998606 kernel: acpiphp: Slot [27] registered Sep 6 00:19:49.998619 kernel: acpiphp: Slot [28] registered Sep 6 00:19:49.998634 kernel: acpiphp: Slot [29] registered Sep 6 00:19:49.998648 kernel: acpiphp: Slot [30] registered Sep 6 00:19:49.998662 kernel: acpiphp: Slot [31] registered Sep 6 00:19:49.998676 kernel: PCI host bridge to bus 0000:00 Sep 6 00:19:49.998804 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 6 00:19:49.998922 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 6 00:19:49.999034 
kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 6 00:19:49.999145 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 6 00:19:49.999256 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Sep 6 00:19:49.999367 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 6 00:19:49.999507 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 6 00:19:49.999661 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 6 00:19:49.999793 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Sep 6 00:19:49.999917 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 6 00:19:50.000040 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Sep 6 00:19:50.000162 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Sep 6 00:19:50.000284 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Sep 6 00:19:50.000406 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Sep 6 00:19:50.000532 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Sep 6 00:19:50.000670 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Sep 6 00:19:50.000798 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Sep 6 00:19:50.000924 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Sep 6 00:19:50.001049 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 6 00:19:50.001175 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Sep 6 00:19:50.001300 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 6 00:19:50.001441 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 6 00:19:50.006694 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Sep 6 00:19:50.006876 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 6 00:19:50.006998 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Sep 6 
00:19:50.007016 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 6 00:19:50.007030 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 6 00:19:50.007044 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 6 00:19:50.007066 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 6 00:19:50.007079 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 6 00:19:50.007092 kernel: iommu: Default domain type: Translated Sep 6 00:19:50.007104 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 6 00:19:50.007223 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Sep 6 00:19:50.007342 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 6 00:19:50.007456 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Sep 6 00:19:50.007473 kernel: vgaarb: loaded Sep 6 00:19:50.007489 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 00:19:50.007502 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 00:19:50.007515 kernel: PTP clock support registered Sep 6 00:19:50.007529 kernel: Registered efivars operations Sep 6 00:19:50.007552 kernel: PCI: Using ACPI for IRQ routing Sep 6 00:19:50.007565 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 6 00:19:50.007579 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff] Sep 6 00:19:50.007592 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Sep 6 00:19:50.007604 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Sep 6 00:19:50.007620 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 6 00:19:50.007633 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Sep 6 00:19:50.007645 kernel: clocksource: Switched to clocksource kvm-clock Sep 6 00:19:50.012657 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 00:19:50.012686 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 00:19:50.012702 kernel: pnp: PnP ACPI init Sep 6 00:19:50.012716 kernel: pnp: PnP ACPI: found 5 devices Sep 6 00:19:50.012731 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 6 00:19:50.012746 kernel: NET: Registered PF_INET protocol family Sep 6 00:19:50.012766 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 6 00:19:50.012780 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 6 00:19:50.012795 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 00:19:50.012808 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 6 00:19:50.012823 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 6 00:19:50.012837 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 6 00:19:50.012850 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 6 00:19:50.012865 kernel: UDP-Lite hash table entries: 1024 
(order: 3, 32768 bytes, linear) Sep 6 00:19:50.012879 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 00:19:50.012895 kernel: NET: Registered PF_XDP protocol family Sep 6 00:19:50.013063 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 6 00:19:50.013172 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 6 00:19:50.013295 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 6 00:19:50.013403 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 6 00:19:50.013511 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Sep 6 00:19:50.013684 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 6 00:19:50.013815 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Sep 6 00:19:50.013840 kernel: PCI: CLS 0 bytes, default 64 Sep 6 00:19:50.013857 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 6 00:19:50.013872 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Sep 6 00:19:50.013886 kernel: clocksource: Switched to clocksource tsc Sep 6 00:19:50.013901 kernel: Initialise system trusted keyrings Sep 6 00:19:50.013916 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 6 00:19:50.013930 kernel: Key type asymmetric registered Sep 6 00:19:50.013944 kernel: Asymmetric key parser 'x509' registered Sep 6 00:19:50.014614 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 6 00:19:50.014638 kernel: io scheduler mq-deadline registered Sep 6 00:19:50.014653 kernel: io scheduler kyber registered Sep 6 00:19:50.014668 kernel: io scheduler bfq registered Sep 6 00:19:50.014683 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 6 00:19:50.014698 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 00:19:50.014714 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A 
Sep 6 00:19:50.014729 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 6 00:19:50.014743 kernel: i8042: Warning: Keylock active Sep 6 00:19:50.014762 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 6 00:19:50.014777 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 6 00:19:50.014934 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 6 00:19:50.015050 kernel: rtc_cmos 00:00: registered as rtc0 Sep 6 00:19:50.015161 kernel: rtc_cmos 00:00: setting system clock to 2025-09-06T00:19:49 UTC (1757117989) Sep 6 00:19:50.015290 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 6 00:19:50.015309 kernel: intel_pstate: CPU model not supported Sep 6 00:19:50.015324 kernel: efifb: probing for efifb Sep 6 00:19:50.015343 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Sep 6 00:19:50.015354 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Sep 6 00:19:50.015368 kernel: efifb: scrolling: redraw Sep 6 00:19:50.015381 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 6 00:19:50.015396 kernel: Console: switching to colour frame buffer device 100x37 Sep 6 00:19:50.015410 kernel: fb0: EFI VGA frame buffer device Sep 6 00:19:50.015448 kernel: pstore: Registered efi as persistent store backend Sep 6 00:19:50.015469 kernel: NET: Registered PF_INET6 protocol family Sep 6 00:19:50.015487 kernel: Segment Routing with IPv6 Sep 6 00:19:50.015502 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 00:19:50.015514 kernel: NET: Registered PF_PACKET protocol family Sep 6 00:19:50.015526 kernel: Key type dns_resolver registered Sep 6 00:19:50.015551 kernel: IPI shorthand broadcast: enabled Sep 6 00:19:50.015564 kernel: sched_clock: Marking stable (352020191, 131363114)->(566620842, -83237537) Sep 6 00:19:50.015578 kernel: registered taskstats version 1 Sep 6 00:19:50.015590 kernel: Loading compiled-in X.509 certificates Sep 6 00:19:50.015604 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 
5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb' Sep 6 00:19:50.015617 kernel: Key type .fscrypt registered Sep 6 00:19:50.015633 kernel: Key type fscrypt-provisioning registered Sep 6 00:19:50.015646 kernel: pstore: Using crash dump compression: deflate Sep 6 00:19:50.015658 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 6 00:19:50.015673 kernel: ima: Allocated hash algorithm: sha1 Sep 6 00:19:50.015686 kernel: ima: No architecture policies found Sep 6 00:19:50.015701 kernel: clk: Disabling unused clocks Sep 6 00:19:50.015716 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 6 00:19:50.015729 kernel: Write protecting the kernel read-only data: 28672k Sep 6 00:19:50.015742 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 6 00:19:50.015760 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 6 00:19:50.015778 kernel: Run /init as init process Sep 6 00:19:50.015793 kernel: with arguments: Sep 6 00:19:50.015808 kernel: /init Sep 6 00:19:50.015824 kernel: with environment: Sep 6 00:19:50.015840 kernel: HOME=/ Sep 6 00:19:50.015855 kernel: TERM=linux Sep 6 00:19:50.015870 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 00:19:50.015889 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:19:50.015911 systemd[1]: Detected virtualization amazon. Sep 6 00:19:50.015928 systemd[1]: Detected architecture x86-64. Sep 6 00:19:50.015944 systemd[1]: Running in initrd. Sep 6 00:19:50.015959 systemd[1]: No hostname configured, using default hostname. Sep 6 00:19:50.015975 systemd[1]: Hostname set to . Sep 6 00:19:50.015992 systemd[1]: Initializing machine ID from VM UUID. 
Sep 6 00:19:50.016009 systemd[1]: Queued start job for default target initrd.target. Sep 6 00:19:50.016028 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:19:50.016044 systemd[1]: Reached target cryptsetup.target. Sep 6 00:19:50.016061 systemd[1]: Reached target paths.target. Sep 6 00:19:50.016077 systemd[1]: Reached target slices.target. Sep 6 00:19:50.016092 systemd[1]: Reached target swap.target. Sep 6 00:19:50.016110 systemd[1]: Reached target timers.target. Sep 6 00:19:50.016128 systemd[1]: Listening on iscsid.socket. Sep 6 00:19:50.016144 systemd[1]: Listening on iscsiuio.socket. Sep 6 00:19:50.016160 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:19:50.016177 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:19:50.016193 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:19:50.016209 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:19:50.016226 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:19:50.016248 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:19:50.016264 systemd[1]: Reached target sockets.target. Sep 6 00:19:50.016281 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:19:50.016297 systemd[1]: Finished network-cleanup.service. Sep 6 00:19:50.016313 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 00:19:50.016330 systemd[1]: Starting systemd-journald.service... Sep 6 00:19:50.016346 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:19:50.016362 systemd[1]: Starting systemd-resolved.service... Sep 6 00:19:50.016379 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 00:19:50.016397 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:19:50.016414 kernel: audit: type=1130 audit(1757117990.000:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:19:50.016431 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 6 00:19:50.016447 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 00:19:50.016476 systemd-journald[185]: Journal started Sep 6 00:19:50.020605 systemd-journald[185]: Runtime Journal (/run/log/journal/ec29d0c6daf6de0b92bbfc1f16337cb9) is 4.8M, max 38.3M, 33.5M free. Sep 6 00:19:50.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.015523 systemd-modules-load[186]: Inserted module 'overlay' Sep 6 00:19:50.048019 kernel: audit: type=1130 audit(1757117990.021:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.048056 systemd[1]: Started systemd-journald.service. Sep 6 00:19:50.048079 kernel: audit: type=1130 audit(1757117990.034:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.035511 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 00:19:50.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:19:50.052001 systemd-resolved[187]: Positive Trust Anchors: Sep 6 00:19:50.057779 kernel: audit: type=1130 audit(1757117990.049:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.052019 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:19:50.052073 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:19:50.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.055909 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 00:19:50.103631 kernel: audit: type=1130 audit(1757117990.081:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.103666 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 00:19:50.063014 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:19:50.131743 kernel: audit: type=1130 audit(1757117990.104:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:19:50.131780 kernel: audit: type=1130 audit(1757117990.105:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.131801 kernel: Bridge firewalling registered Sep 6 00:19:50.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.067201 systemd-resolved[187]: Defaulting to hostname 'linux'. Sep 6 00:19:50.078440 systemd[1]: Started systemd-resolved.service. Sep 6 00:19:50.143977 dracut-cmdline[203]: dracut-dracut-053 Sep 6 00:19:50.143977 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Sep 6 00:19:50.143977 dracut-cmdline[203]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:19:50.082818 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:19:50.105034 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 00:19:50.106352 systemd[1]: Reached target nss-lookup.target. Sep 6 00:19:50.113862 systemd[1]: Starting dracut-cmdline.service... 
Sep 6 00:19:50.125460 systemd-modules-load[186]: Inserted module 'br_netfilter' Sep 6 00:19:50.175570 kernel: SCSI subsystem initialized Sep 6 00:19:50.196843 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 00:19:50.196916 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:19:50.200559 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 00:19:50.205019 systemd-modules-load[186]: Inserted module 'dm_multipath' Sep 6 00:19:50.205988 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:19:50.209529 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:19:50.220553 kernel: audit: type=1130 audit(1757117990.207:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.220590 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:19:50.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.230608 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:19:50.239824 kernel: audit: type=1130 audit(1757117990.231:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:19:50.246578 kernel: iscsi: registered transport (tcp) Sep 6 00:19:50.271422 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:19:50.271508 kernel: QLogic iSCSI HBA Driver Sep 6 00:19:50.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.303726 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:19:50.305304 systemd[1]: Starting dracut-pre-udev.service... Sep 6 00:19:50.357594 kernel: raid6: avx512x4 gen() 18014 MB/s Sep 6 00:19:50.375590 kernel: raid6: avx512x4 xor() 8025 MB/s Sep 6 00:19:50.393575 kernel: raid6: avx512x2 gen() 17994 MB/s Sep 6 00:19:50.411580 kernel: raid6: avx512x2 xor() 24259 MB/s Sep 6 00:19:50.429573 kernel: raid6: avx512x1 gen() 17953 MB/s Sep 6 00:19:50.447588 kernel: raid6: avx512x1 xor() 21922 MB/s Sep 6 00:19:50.465574 kernel: raid6: avx2x4 gen() 17920 MB/s Sep 6 00:19:50.483586 kernel: raid6: avx2x4 xor() 7393 MB/s Sep 6 00:19:50.501573 kernel: raid6: avx2x2 gen() 17888 MB/s Sep 6 00:19:50.519587 kernel: raid6: avx2x2 xor() 18134 MB/s Sep 6 00:19:50.537575 kernel: raid6: avx2x1 gen() 13744 MB/s Sep 6 00:19:50.555591 kernel: raid6: avx2x1 xor() 15774 MB/s Sep 6 00:19:50.573575 kernel: raid6: sse2x4 gen() 9566 MB/s Sep 6 00:19:50.591587 kernel: raid6: sse2x4 xor() 6032 MB/s Sep 6 00:19:50.609574 kernel: raid6: sse2x2 gen() 10589 MB/s Sep 6 00:19:50.627590 kernel: raid6: sse2x2 xor() 6128 MB/s Sep 6 00:19:50.645584 kernel: raid6: sse2x1 gen() 9465 MB/s Sep 6 00:19:50.663821 kernel: raid6: sse2x1 xor() 4831 MB/s Sep 6 00:19:50.663864 kernel: raid6: using algorithm avx512x4 gen() 18014 MB/s Sep 6 00:19:50.663884 kernel: raid6: .... 
xor() 8025 MB/s, rmw enabled Sep 6 00:19:50.664913 kernel: raid6: using avx512x2 recovery algorithm Sep 6 00:19:50.679570 kernel: xor: automatically using best checksumming function avx Sep 6 00:19:50.781574 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 00:19:50.790217 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:19:50.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.790000 audit: BPF prog-id=7 op=LOAD Sep 6 00:19:50.790000 audit: BPF prog-id=8 op=LOAD Sep 6 00:19:50.791808 systemd[1]: Starting systemd-udevd.service... Sep 6 00:19:50.804993 systemd-udevd[386]: Using default interface naming scheme 'v252'. Sep 6 00:19:50.810321 systemd[1]: Started systemd-udevd.service. Sep 6 00:19:50.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.812134 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:19:50.832565 dracut-pre-trigger[391]: rd.md=0: removing MD RAID activation Sep 6 00:19:50.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:50.865623 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:19:50.866933 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:19:50.909965 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:19:50.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:19:50.969565 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:19:50.996380 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 6 00:19:50.996687 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 6 00:19:51.010564 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 6 00:19:51.016429 kernel: AVX2 version of gcm_enc/dec engaged. Sep 6 00:19:51.016503 kernel: AES CTR mode by8 optimization enabled Sep 6 00:19:51.023763 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 6 00:19:51.040637 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:19:51.040669 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 6 00:19:51.040856 kernel: GPT:9289727 != 16777215 Sep 6 00:19:51.040875 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:19:51.040893 kernel: GPT:9289727 != 16777215 Sep 6 00:19:51.040911 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:19:51.040933 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:19:51.040950 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Sep 6 00:19:51.041096 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:33:5e:e8:ea:ad Sep 6 00:19:51.043047 (udev-worker)[436]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:19:51.116567 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (439) Sep 6 00:19:51.162389 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:19:51.183870 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:19:51.187041 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:19:51.203252 systemd[1]: Starting disk-uuid.service... Sep 6 00:19:51.212629 disk-uuid[593]: Primary Header is updated. Sep 6 00:19:51.212629 disk-uuid[593]: Secondary Entries is updated. 
Sep 6 00:19:51.212629 disk-uuid[593]: Secondary Header is updated. Sep 6 00:19:51.218034 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:19:51.223992 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:19:51.227560 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:19:51.234610 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:19:51.241584 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:19:52.242573 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:19:52.243177 disk-uuid[594]: The operation has completed successfully. Sep 6 00:19:52.363223 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:19:52.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:52.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:52.363341 systemd[1]: Finished disk-uuid.service. Sep 6 00:19:52.365296 systemd[1]: Starting verity-setup.service... Sep 6 00:19:52.384841 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 6 00:19:52.474498 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:19:52.476511 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:19:52.479995 systemd[1]: Finished verity-setup.service. Sep 6 00:19:52.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:52.570567 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:19:52.571664 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:19:52.572494 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Sep 6 00:19:52.573580 systemd[1]: Starting ignition-setup.service... Sep 6 00:19:52.578433 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 00:19:52.603730 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:19:52.603803 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:19:52.603826 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:19:52.640574 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:19:52.654737 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:19:52.657124 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:19:52.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:52.658000 audit: BPF prog-id=9 op=LOAD Sep 6 00:19:52.659564 systemd[1]: Starting systemd-networkd.service... Sep 6 00:19:52.671773 systemd[1]: Finished ignition-setup.service. Sep 6 00:19:52.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:52.674987 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:19:52.686169 systemd-networkd[1103]: lo: Link UP Sep 6 00:19:52.686183 systemd-networkd[1103]: lo: Gained carrier Sep 6 00:19:52.687012 systemd-networkd[1103]: Enumeration completed Sep 6 00:19:52.687296 systemd-networkd[1103]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:19:52.689056 systemd[1]: Started systemd-networkd.service. Sep 6 00:19:52.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:19:52.691804 systemd-networkd[1103]: eth0: Link UP Sep 6 00:19:52.691810 systemd-networkd[1103]: eth0: Gained carrier Sep 6 00:19:52.692141 systemd[1]: Reached target network.target. Sep 6 00:19:52.693460 systemd[1]: Starting iscsiuio.service... Sep 6 00:19:52.701924 systemd[1]: Started iscsiuio.service. Sep 6 00:19:52.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:52.703736 systemd[1]: Starting iscsid.service... Sep 6 00:19:52.704660 systemd-networkd[1103]: eth0: DHCPv4 address 172.31.31.235/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 6 00:19:52.709697 iscsid[1110]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:19:52.709697 iscsid[1110]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 6 00:19:52.709697 iscsid[1110]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 00:19:52.709697 iscsid[1110]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:19:52.709697 iscsid[1110]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 00:19:52.709697 iscsid[1110]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:19:52.709697 iscsid[1110]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:19:52.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:19:52.711384 systemd[1]: Started iscsid.service. Sep 6 00:19:52.713745 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:19:52.729349 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:19:52.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:52.730485 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:19:52.731072 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:19:52.731626 systemd[1]: Reached target remote-fs.target. Sep 6 00:19:52.734737 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:19:52.745646 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:19:52.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:53.217932 ignition[1105]: Ignition 2.14.0 Sep 6 00:19:53.217944 ignition[1105]: Stage: fetch-offline Sep 6 00:19:53.218056 ignition[1105]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:19:53.218089 ignition[1105]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:19:53.231837 ignition[1105]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:19:53.232263 ignition[1105]: Ignition finished successfully Sep 6 00:19:53.234483 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:19:53.236333 systemd[1]: Starting ignition-fetch.service... Sep 6 00:19:53.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:19:53.245215 ignition[1129]: Ignition 2.14.0 Sep 6 00:19:53.245228 ignition[1129]: Stage: fetch Sep 6 00:19:53.245431 ignition[1129]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:19:53.245465 ignition[1129]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:19:53.253216 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:19:53.254269 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:19:53.364012 ignition[1129]: INFO : PUT result: OK Sep 6 00:19:53.376380 ignition[1129]: DEBUG : parsed url from cmdline: "" Sep 6 00:19:53.376380 ignition[1129]: INFO : no config URL provided Sep 6 00:19:53.376380 ignition[1129]: INFO : reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:19:53.376380 ignition[1129]: INFO : no config at "/usr/lib/ignition/user.ign" Sep 6 00:19:53.378560 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:19:53.380651 ignition[1129]: INFO : PUT result: OK Sep 6 00:19:53.381303 ignition[1129]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 6 00:19:53.383081 ignition[1129]: INFO : GET result: OK Sep 6 00:19:53.383911 ignition[1129]: DEBUG : parsing config with SHA512: 730d333d13cd0032f548a861728919bea954553eb6be85641dea96ef3af77f8ca3d1c5abe07a1fbbb415f897c54e3c9e7ac6b7cfee01a177e8405bfd14bc487c Sep 6 00:19:53.390102 unknown[1129]: fetched base config from "system" Sep 6 00:19:53.390117 unknown[1129]: fetched base config from "system" Sep 6 00:19:53.391226 ignition[1129]: fetch: fetch complete Sep 6 00:19:53.390127 unknown[1129]: fetched user config from "aws" Sep 6 00:19:53.391234 ignition[1129]: fetch: fetch passed Sep 6 00:19:53.393242 systemd[1]: Finished ignition-fetch.service. 
Sep 6 00:19:53.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:53.391291 ignition[1129]: Ignition finished successfully Sep 6 00:19:53.395496 systemd[1]: Starting ignition-kargs.service... Sep 6 00:19:53.406571 ignition[1135]: Ignition 2.14.0 Sep 6 00:19:53.406586 ignition[1135]: Stage: kargs Sep 6 00:19:53.406803 ignition[1135]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:19:53.406839 ignition[1135]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:19:53.414295 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:19:53.415166 ignition[1135]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:19:53.416333 ignition[1135]: INFO : PUT result: OK Sep 6 00:19:53.420487 ignition[1135]: kargs: kargs passed Sep 6 00:19:53.420577 ignition[1135]: Ignition finished successfully Sep 6 00:19:53.422744 systemd[1]: Finished ignition-kargs.service. Sep 6 00:19:53.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:53.424453 systemd[1]: Starting ignition-disks.service... 
Sep 6 00:19:53.433495 ignition[1141]: Ignition 2.14.0 Sep 6 00:19:53.433510 ignition[1141]: Stage: disks Sep 6 00:19:53.433737 ignition[1141]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:19:53.433769 ignition[1141]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:19:53.440887 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:19:53.441830 ignition[1141]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:19:53.442574 ignition[1141]: INFO : PUT result: OK Sep 6 00:19:53.444737 ignition[1141]: disks: disks passed Sep 6 00:19:53.444800 ignition[1141]: Ignition finished successfully Sep 6 00:19:53.446128 systemd[1]: Finished ignition-disks.service. Sep 6 00:19:53.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:53.447266 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:19:53.448165 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:19:53.449099 systemd[1]: Reached target local-fs.target. Sep 6 00:19:53.450028 systemd[1]: Reached target sysinit.target. Sep 6 00:19:53.451048 systemd[1]: Reached target basic.target. Sep 6 00:19:53.453153 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:19:53.491855 systemd-fsck[1149]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 6 00:19:53.495231 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:19:53.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:53.497069 systemd[1]: Mounting sysroot.mount... 
Sep 6 00:19:53.515570 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:19:53.516058 systemd[1]: Mounted sysroot.mount. Sep 6 00:19:53.517322 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:19:53.526769 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:19:53.527813 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:19:53.527854 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:19:53.527880 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:19:53.532194 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:19:53.548652 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:19:53.550904 systemd[1]: Starting initrd-setup-root.service... Sep 6 00:19:53.562615 initrd-setup-root[1171]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:19:53.570593 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1166) Sep 6 00:19:53.575185 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:19:53.575249 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:19:53.575262 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:19:53.577340 initrd-setup-root[1179]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:19:53.582149 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:19:53.587388 initrd-setup-root[1211]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:19:53.630568 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:19:53.639273 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:19:53.753388 systemd[1]: Finished initrd-setup-root.service. 
Sep 6 00:19:53.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:53.755402 systemd[1]: Starting ignition-mount.service... Sep 6 00:19:53.758879 systemd[1]: Starting sysroot-boot.service... Sep 6 00:19:53.766832 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 6 00:19:53.766959 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Sep 6 00:19:53.798501 systemd[1]: Finished sysroot-boot.service. Sep 6 00:19:53.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:53.812452 ignition[1234]: INFO : Ignition 2.14.0 Sep 6 00:19:53.812452 ignition[1234]: INFO : Stage: mount Sep 6 00:19:53.813880 ignition[1234]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:19:53.813880 ignition[1234]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:19:53.819774 ignition[1234]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:19:53.820664 ignition[1234]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:19:53.821466 ignition[1234]: INFO : PUT result: OK Sep 6 00:19:53.824141 ignition[1234]: INFO : mount: mount passed Sep 6 00:19:53.824924 ignition[1234]: INFO : Ignition finished successfully Sep 6 00:19:53.826617 systemd[1]: Finished ignition-mount.service. Sep 6 00:19:53.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:19:53.828402 systemd[1]: Starting ignition-files.service... 
Sep 6 00:19:53.837852 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 6 00:19:53.861570 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1241)
Sep 6 00:19:53.865735 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 00:19:53.865809 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 6 00:19:53.865829 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Sep 6 00:19:53.880584 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 6 00:19:53.885028 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 6 00:19:53.895303 ignition[1260]: INFO : Ignition 2.14.0
Sep 6 00:19:53.895303 ignition[1260]: INFO : Stage: files
Sep 6 00:19:53.896693 ignition[1260]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:19:53.896693 ignition[1260]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 6 00:19:53.902307 ignition[1260]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 6 00:19:53.903144 ignition[1260]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 6 00:19:53.903765 ignition[1260]: INFO : PUT result: OK
Sep 6 00:19:53.906731 ignition[1260]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 00:19:53.914108 ignition[1260]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 00:19:53.914108 ignition[1260]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 00:19:53.938107 ignition[1260]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 00:19:53.939737 ignition[1260]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 00:19:53.940872 ignition[1260]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 00:19:53.940872 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 6 00:19:53.940872 ignition[1260]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 6 00:19:53.939759 unknown[1260]: wrote ssh authorized keys file for user: core
Sep 6 00:19:54.006287 ignition[1260]: INFO : GET result: OK
Sep 6 00:19:54.554747 systemd-networkd[1103]: eth0: Gained IPv6LL
Sep 6 00:19:54.840800 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 6 00:19:54.840800 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:19:54.845247 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:19:54.845247 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 6 00:19:54.845247 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:19:54.852383 ignition[1260]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem776694340"
Sep 6 00:19:54.852383 ignition[1260]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem776694340": device or resource busy
Sep 6 00:19:54.852383 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem776694340", trying btrfs: device or resource busy
Sep 6 00:19:54.852383 ignition[1260]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem776694340"
Sep 6 00:19:54.852383 ignition[1260]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem776694340"
Sep 6 00:19:54.866297 ignition[1260]: INFO : op(3): [started] unmounting "/mnt/oem776694340"
Sep 6 00:19:54.866297 ignition[1260]: INFO : op(3): [finished] unmounting "/mnt/oem776694340"
Sep 6 00:19:54.866297 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 6 00:19:54.866297 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:19:54.866297 ignition[1260]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 6 00:19:54.859463 systemd[1]: mnt-oem776694340.mount: Deactivated successfully.
Sep 6 00:19:54.914763 ignition[1260]: INFO : GET result: OK
Sep 6 00:19:55.029737 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:19:55.032053 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:19:55.032053 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:19:55.032053 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:19:55.032053 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:19:55.032053 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:19:55.032053 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:19:55.032053 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:19:55.032053 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:19:55.032053 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 6 00:19:55.032053 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 6 00:19:55.032053 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 00:19:55.032053 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:19:55.066371 ignition[1260]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem552234657"
Sep 6 00:19:55.066371 ignition[1260]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem552234657": device or resource busy
Sep 6 00:19:55.066371 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem552234657", trying btrfs: device or resource busy
Sep 6 00:19:55.066371 ignition[1260]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem552234657"
Sep 6 00:19:55.066371 ignition[1260]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem552234657"
Sep 6 00:19:55.066371 ignition[1260]: INFO : op(6): [started] unmounting "/mnt/oem552234657"
Sep 6 00:19:55.066371 ignition[1260]: INFO : op(6): [finished] unmounting "/mnt/oem552234657"
Sep 6 00:19:55.066371 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 00:19:55.066371 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 6 00:19:55.066371 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:19:55.048793 systemd[1]: mnt-oem552234657.mount: Deactivated successfully.
Sep 6 00:19:55.081779 ignition[1260]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem198864912"
Sep 6 00:19:55.081779 ignition[1260]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem198864912": device or resource busy
Sep 6 00:19:55.081779 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem198864912", trying btrfs: device or resource busy
Sep 6 00:19:55.081779 ignition[1260]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem198864912"
Sep 6 00:19:55.081779 ignition[1260]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem198864912"
Sep 6 00:19:55.081779 ignition[1260]: INFO : op(9): [started] unmounting "/mnt/oem198864912"
Sep 6 00:19:55.081779 ignition[1260]: INFO : op(9): [finished] unmounting "/mnt/oem198864912"
Sep 6 00:19:55.081779 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 6 00:19:55.081779 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 6 00:19:55.081779 ignition[1260]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 6 00:19:55.530636 ignition[1260]: INFO : GET result: OK
Sep 6 00:19:57.373396 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 6 00:19:57.373396 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 6 00:19:57.378446 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:19:57.383764 ignition[1260]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem816734169"
Sep 6 00:19:57.386375 ignition[1260]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem816734169": device or resource busy
Sep 6 00:19:57.386375 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem816734169", trying btrfs: device or resource busy
Sep 6 00:19:57.386375 ignition[1260]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem816734169"
Sep 6 00:19:57.396619 ignition[1260]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem816734169"
Sep 6 00:19:57.396619 ignition[1260]: INFO : op(c): [started] unmounting "/mnt/oem816734169"
Sep 6 00:19:57.396619 ignition[1260]: INFO : op(c): [finished] unmounting "/mnt/oem816734169"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(10): [started] processing unit "amazon-ssm-agent.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(10): op(11): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(10): op(11): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(10): [finished] processing unit "amazon-ssm-agent.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(12): [started] processing unit "nvidia.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(12): [finished] processing unit "nvidia.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(13): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(13): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(16): [started] setting preset to enabled for "amazon-ssm-agent.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(16): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(17): [started] setting preset to enabled for "nvidia.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(17): [finished] setting preset to enabled for "nvidia.service"
Sep 6 00:19:57.396619 ignition[1260]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 6 00:19:57.481033 kernel: kauditd_printk_skb: 26 callbacks suppressed
Sep 6 00:19:57.481066 kernel: audit: type=1130 audit(1757117997.412:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.481085 kernel: audit: type=1130 audit(1757117997.441:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.481102 kernel: audit: type=1131 audit(1757117997.441:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.481119 kernel: audit: type=1130 audit(1757117997.453:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.395141 systemd[1]: mnt-oem816734169.mount: Deactivated successfully.
Sep 6 00:19:57.482682 ignition[1260]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 6 00:19:57.482682 ignition[1260]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 00:19:57.482682 ignition[1260]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 00:19:57.482682 ignition[1260]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:19:57.482682 ignition[1260]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:19:57.482682 ignition[1260]: INFO : files: files passed
Sep 6 00:19:57.482682 ignition[1260]: INFO : Ignition finished successfully
Sep 6 00:19:57.507729 kernel: audit: type=1130 audit(1757117997.486:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.507766 kernel: audit: type=1131 audit(1757117997.486:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.410968 systemd[1]: Finished ignition-files.service.
Sep 6 00:19:57.422157 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 6 00:19:57.510799 initrd-setup-root-after-ignition[1285]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:19:57.429283 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 6 00:19:57.430730 systemd[1]: Starting ignition-quench.service...
Sep 6 00:19:57.436848 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 00:19:57.437262 systemd[1]: Finished ignition-quench.service.
Sep 6 00:19:57.442249 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 6 00:19:57.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.453942 systemd[1]: Reached target ignition-complete.target.
Sep 6 00:19:57.529406 kernel: audit: type=1130 audit(1757117997.520:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.463040 systemd[1]: Starting initrd-parse-etc.service...
Sep 6 00:19:57.484979 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 00:19:57.485107 systemd[1]: Finished initrd-parse-etc.service.
Sep 6 00:19:57.487613 systemd[1]: Reached target initrd-fs.target.
Sep 6 00:19:57.499520 systemd[1]: Reached target initrd.target.
Sep 6 00:19:57.501728 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 6 00:19:57.503129 systemd[1]: Starting dracut-pre-pivot.service...
Sep 6 00:19:57.519864 systemd[1]: Finished dracut-pre-pivot.service.
Sep 6 00:19:57.527144 systemd[1]: Starting initrd-cleanup.service...
Sep 6 00:19:57.542294 systemd[1]: Stopped target nss-lookup.target.
Sep 6 00:19:57.543106 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 6 00:19:57.544422 systemd[1]: Stopped target timers.target.
Sep 6 00:19:57.545616 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 00:19:57.552076 kernel: audit: type=1131 audit(1757117997.546:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.545791 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 6 00:19:57.547163 systemd[1]: Stopped target initrd.target.
Sep 6 00:19:57.552950 systemd[1]: Stopped target basic.target.
Sep 6 00:19:57.554145 systemd[1]: Stopped target ignition-complete.target.
Sep 6 00:19:57.555422 systemd[1]: Stopped target ignition-diskful.target.
Sep 6 00:19:57.556603 systemd[1]: Stopped target initrd-root-device.target.
Sep 6 00:19:57.557762 systemd[1]: Stopped target remote-fs.target.
Sep 6 00:19:57.559038 systemd[1]: Stopped target remote-fs-pre.target.
Sep 6 00:19:57.560185 systemd[1]: Stopped target sysinit.target.
Sep 6 00:19:57.561316 systemd[1]: Stopped target local-fs.target.
Sep 6 00:19:57.562491 systemd[1]: Stopped target local-fs-pre.target.
Sep 6 00:19:57.563682 systemd[1]: Stopped target swap.target.
Sep 6 00:19:57.570968 kernel: audit: type=1131 audit(1757117997.565:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.564755 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 00:19:57.564962 systemd[1]: Stopped dracut-pre-mount.service.
Sep 6 00:19:57.578177 kernel: audit: type=1131 audit(1757117997.572:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.566138 systemd[1]: Stopped target cryptsetup.target.
Sep 6 00:19:57.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.571748 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 00:19:57.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.571958 systemd[1]: Stopped dracut-initqueue.service.
Sep 6 00:19:57.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.573166 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 00:19:57.596461 iscsid[1110]: iscsid shutting down.
Sep 6 00:19:57.573379 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 6 00:19:57.600158 ignition[1298]: INFO : Ignition 2.14.0
Sep 6 00:19:57.600158 ignition[1298]: INFO : Stage: umount
Sep 6 00:19:57.600158 ignition[1298]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:19:57.600158 ignition[1298]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 6 00:19:57.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.579160 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 00:19:57.618747 ignition[1298]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 6 00:19:57.618747 ignition[1298]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 6 00:19:57.579368 systemd[1]: Stopped ignition-files.service.
Sep 6 00:19:57.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.581815 systemd[1]: Stopping ignition-mount.service...
Sep 6 00:19:57.626533 ignition[1298]: INFO : PUT result: OK
Sep 6 00:19:57.583444 systemd[1]: Stopping iscsid.service...
Sep 6 00:19:57.584391 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 6 00:19:57.588649 systemd[1]: Stopped kmod-static-nodes.service.
Sep 6 00:19:57.592739 systemd[1]: Stopping sysroot-boot.service...
Sep 6 00:19:57.634371 ignition[1298]: INFO : umount: umount passed
Sep 6 00:19:57.634371 ignition[1298]: INFO : Ignition finished successfully
Sep 6 00:19:57.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.593713 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 00:19:57.593971 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 6 00:19:57.597768 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 00:19:57.597972 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 6 00:19:57.605880 systemd[1]: iscsid.service: Deactivated successfully.
Sep 6 00:19:57.606600 systemd[1]: Stopped iscsid.service.
Sep 6 00:19:57.613072 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 00:19:57.613204 systemd[1]: Finished initrd-cleanup.service.
Sep 6 00:19:57.621425 systemd[1]: Stopping iscsiuio.service...
Sep 6 00:19:57.623884 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 6 00:19:57.624011 systemd[1]: Stopped iscsiuio.service.
Sep 6 00:19:57.633001 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 00:19:57.634735 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 00:19:57.634873 systemd[1]: Stopped ignition-mount.service.
Sep 6 00:19:57.635920 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 00:19:57.635969 systemd[1]: Stopped ignition-disks.service.
Sep 6 00:19:57.636554 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 00:19:57.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.636606 systemd[1]: Stopped ignition-kargs.service.
Sep 6 00:19:57.637467 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 6 00:19:57.637519 systemd[1]: Stopped ignition-fetch.service.
Sep 6 00:19:57.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.638505 systemd[1]: Stopped target network.target.
Sep 6 00:19:57.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.639367 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 00:19:57.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.639430 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 6 00:19:57.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.640331 systemd[1]: Stopped target paths.target.
Sep 6 00:19:57.663000 audit: BPF prog-id=6 op=UNLOAD
Sep 6 00:19:57.641107 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 00:19:57.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.648074 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 6 00:19:57.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.648568 systemd[1]: Stopped target slices.target.
Sep 6 00:19:57.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.649116 systemd[1]: Stopped target sockets.target.
Sep 6 00:19:57.650047 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 00:19:57.650106 systemd[1]: Closed iscsid.socket.
Sep 6 00:19:57.651068 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 00:19:57.651120 systemd[1]: Closed iscsiuio.socket.
Sep 6 00:19:57.651958 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 00:19:57.652024 systemd[1]: Stopped ignition-setup.service.
Sep 6 00:19:57.653192 systemd[1]: Stopping systemd-networkd.service...
Sep 6 00:19:57.654085 systemd[1]: Stopping systemd-resolved.service...
Sep 6 00:19:57.655339 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 6 00:19:57.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.655467 systemd[1]: Stopped sysroot-boot.service.
Sep 6 00:19:57.656566 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 6 00:19:57.656624 systemd-networkd[1103]: eth0: DHCPv6 lease lost
Sep 6 00:19:57.684000 audit: BPF prog-id=9 op=UNLOAD
Sep 6 00:19:57.656625 systemd[1]: Stopped initrd-setup-root.service.
Sep 6 00:19:57.659536 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 00:19:57.659755 systemd[1]: Stopped systemd-resolved.service.
Sep 6 00:19:57.661048 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 00:19:57.661169 systemd[1]: Stopped systemd-networkd.service.
Sep 6 00:19:57.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.662621 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 00:19:57.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.662665 systemd[1]: Closed systemd-networkd.socket.
Sep 6 00:19:57.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.664462 systemd[1]: Stopping network-cleanup.service...
Sep 6 00:19:57.666476 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 00:19:57.666532 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 6 00:19:57.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.667136 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:19:57.667181 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 00:19:57.667882 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 6 00:19:57.667936 systemd[1]: Stopped systemd-modules-load.service.
Sep 6 00:19:57.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.674427 systemd[1]: Stopping systemd-udevd.service...
Sep 6 00:19:57.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:57.677048 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 6 00:19:57.680019 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 6 00:19:57.680201 systemd[1]: Stopped systemd-udevd.service.
Sep 6 00:19:57.682565 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 6 00:19:57.682624 systemd[1]: Closed systemd-udevd-control.socket.
Sep 6 00:19:57.687547 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 6 00:19:57.687605 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 6 00:19:57.688508 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 6 00:19:57.688598 systemd[1]: Stopped dracut-pre-udev.service.
Sep 6 00:19:57.689616 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 6 00:19:57.689678 systemd[1]: Stopped dracut-cmdline.service.
Sep 6 00:19:57.690785 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 00:19:57.690842 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 6 00:19:57.692902 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 6 00:19:57.695393 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 00:19:57.695464 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 6 00:19:57.699033 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 6 00:19:57.699600 systemd[1]: Stopped network-cleanup.service.
Sep 6 00:19:57.702477 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 6 00:19:57.702596 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 6 00:19:57.704183 systemd[1]: Reached target initrd-switch-root.target.
Sep 6 00:19:57.706174 systemd[1]: Starting initrd-switch-root.service...
Sep 6 00:19:57.723527 systemd[1]: Switching root.
Sep 6 00:19:57.748027 systemd-journald[185]: Journal stopped
Sep 6 00:20:04.326300 systemd-journald[185]: Received SIGTERM from PID 1 (n/a).
Sep 6 00:20:04.326412 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 6 00:20:04.326438 kernel: SELinux: Class anon_inode not defined in policy.
Sep 6 00:20:04.326472 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 6 00:20:04.326499 kernel: SELinux: policy capability network_peer_controls=1
Sep 6 00:20:04.326521 kernel: SELinux: policy capability open_perms=1
Sep 6 00:20:04.326563 kernel: SELinux: policy capability extended_socket_class=1
Sep 6 00:20:04.326581 kernel: SELinux: policy capability always_check_network=0
Sep 6 00:20:04.326600 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 6 00:20:04.326617 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 6 00:20:04.326640 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 6 00:20:04.326662 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 6 00:20:04.326684 systemd[1]: Successfully loaded SELinux policy in 107.020ms.
Sep 6 00:20:04.326728 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.881ms.
Sep 6 00:20:04.326754 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:20:04.326780 systemd[1]: Detected virtualization amazon.
Sep 6 00:20:04.326803 systemd[1]: Detected architecture x86-64.
Sep 6 00:20:04.326825 systemd[1]: Detected first boot.
Sep 6 00:20:04.326849 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:20:04.326877 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 6 00:20:04.326900 systemd[1]: Populated /etc with preset unit settings.
Sep 6 00:20:04.326924 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:20:04.326955 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:20:04.326986 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:20:04.327010 kernel: kauditd_printk_skb: 48 callbacks suppressed
Sep 6 00:20:04.327035 kernel: audit: type=1334 audit(1757118003.992:88): prog-id=12 op=LOAD
Sep 6 00:20:04.327058 kernel: audit: type=1334 audit(1757118003.992:89): prog-id=3 op=UNLOAD
Sep 6 00:20:04.327079 kernel: audit: type=1334 audit(1757118003.994:90): prog-id=13 op=LOAD
Sep 6 00:20:04.327100 kernel: audit: type=1334 audit(1757118003.996:91): prog-id=14 op=LOAD
Sep 6 00:20:04.327122 kernel: audit: type=1334 audit(1757118003.996:92): prog-id=4 op=UNLOAD
Sep 6 00:20:04.327144 kernel: audit: type=1334 audit(1757118003.996:93): prog-id=5 op=UNLOAD
Sep 6 00:20:04.327166 kernel: audit: type=1334 audit(1757118003.997:94): prog-id=15 op=LOAD
Sep 6 00:20:04.327188 kernel: audit: type=1334 audit(1757118003.997:95): prog-id=12 op=UNLOAD
Sep 6 00:20:04.327212 kernel: audit: type=1334 audit(1757118003.999:96): prog-id=16 op=LOAD
Sep 6 00:20:04.327234 kernel: audit: type=1334 audit(1757118004.002:97): prog-id=17 op=LOAD
Sep 6 00:20:04.327255 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 6 00:20:04.327279 systemd[1]: Stopped initrd-switch-root.service.
Sep 6 00:20:04.327303 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 6 00:20:04.327326 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 6 00:20:04.327349 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 6 00:20:04.327374 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Sep 6 00:20:04.327403 systemd[1]: Created slice system-getty.slice.
Sep 6 00:20:04.327426 systemd[1]: Created slice system-modprobe.slice.
Sep 6 00:20:04.327453 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 6 00:20:04.327476 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 6 00:20:04.327500 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 6 00:20:04.327535 systemd[1]: Created slice user.slice.
Sep 6 00:20:04.332415 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:20:04.332442 systemd[1]: Started systemd-ask-password-wall.path.
Sep 6 00:20:04.332462 systemd[1]: Set up automount boot.automount.
Sep 6 00:20:04.332483 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 6 00:20:04.332503 systemd[1]: Stopped target initrd-switch-root.target.
Sep 6 00:20:04.332525 systemd[1]: Stopped target initrd-fs.target.
Sep 6 00:20:04.332562 systemd[1]: Stopped target initrd-root-fs.target.
Sep 6 00:20:04.332586 systemd[1]: Reached target integritysetup.target.
Sep 6 00:20:04.332608 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:20:04.332629 systemd[1]: Reached target remote-fs.target.
Sep 6 00:20:04.332649 systemd[1]: Reached target slices.target.
Sep 6 00:20:04.332670 systemd[1]: Reached target swap.target.
Sep 6 00:20:04.332692 systemd[1]: Reached target torcx.target.
Sep 6 00:20:04.332712 systemd[1]: Reached target veritysetup.target.
Sep 6 00:20:04.332733 systemd[1]: Listening on systemd-coredump.socket.
Sep 6 00:20:04.332753 systemd[1]: Listening on systemd-initctl.socket.
Sep 6 00:20:04.332776 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:20:04.332797 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:20:04.332819 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:20:04.332840 systemd[1]: Listening on systemd-userdbd.socket.
Sep 6 00:20:04.332861 systemd[1]: Mounting dev-hugepages.mount...
Sep 6 00:20:04.332882 systemd[1]: Mounting dev-mqueue.mount...
Sep 6 00:20:04.332903 systemd[1]: Mounting media.mount...
Sep 6 00:20:04.332923 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:20:04.332944 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 6 00:20:04.332967 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 6 00:20:04.332988 systemd[1]: Mounting tmp.mount...
Sep 6 00:20:04.333009 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 6 00:20:04.333029 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:20:04.333051 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:20:04.333072 systemd[1]: Starting modprobe@configfs.service...
Sep 6 00:20:04.333092 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:20:04.333113 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:20:04.333133 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:20:04.333157 systemd[1]: Starting modprobe@fuse.service...
Sep 6 00:20:04.333179 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:20:04.333200 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 00:20:04.333221 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 6 00:20:04.333243 systemd[1]: Stopped systemd-fsck-root.service.
Sep 6 00:20:04.333263 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 6 00:20:04.333285 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 6 00:20:04.333307 systemd[1]: Stopped systemd-journald.service.
Sep 6 00:20:04.333328 systemd[1]: Starting systemd-journald.service...
Sep 6 00:20:04.333352 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:20:04.333372 systemd[1]: Starting systemd-network-generator.service...
Sep 6 00:20:04.333392 systemd[1]: Starting systemd-remount-fs.service...
Sep 6 00:20:04.333409 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:20:04.333426 kernel: fuse: init (API version 7.34)
Sep 6 00:20:04.333447 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 6 00:20:04.333468 systemd[1]: Stopped verity-setup.service.
Sep 6 00:20:04.333490 kernel: loop: module loaded
Sep 6 00:20:04.333512 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:20:04.333534 systemd[1]: Mounted dev-hugepages.mount.
Sep 6 00:20:04.333568 systemd[1]: Mounted dev-mqueue.mount.
Sep 6 00:20:04.333587 systemd[1]: Mounted media.mount.
Sep 6 00:20:04.333606 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 6 00:20:04.333626 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 6 00:20:04.333647 systemd[1]: Mounted tmp.mount.
Sep 6 00:20:04.333673 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:20:04.333692 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 6 00:20:04.333714 systemd[1]: Finished modprobe@configfs.service.
Sep 6 00:20:04.333736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:20:04.333761 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:20:04.333781 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:20:04.333804 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:20:04.333825 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:20:04.333852 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:20:04.333875 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 6 00:20:04.333897 systemd[1]: Finished modprobe@fuse.service.
Sep 6 00:20:04.333919 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:20:04.333941 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:20:04.333963 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:20:04.333989 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 6 00:20:04.334019 systemd-journald[1410]: Journal started
Sep 6 00:20:04.334108 systemd-journald[1410]: Runtime Journal (/run/log/journal/ec29d0c6daf6de0b92bbfc1f16337cb9) is 4.8M, max 38.3M, 33.5M free.
Sep 6 00:19:58.432000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 6 00:19:58.577000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 00:19:58.577000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 00:19:58.577000 audit: BPF prog-id=10 op=LOAD
Sep 6 00:19:58.577000 audit: BPF prog-id=10 op=UNLOAD
Sep 6 00:19:58.577000 audit: BPF prog-id=11 op=LOAD
Sep 6 00:19:58.577000 audit: BPF prog-id=11 op=UNLOAD
Sep 6 00:19:58.718000 audit[1331]: AVC avc: denied { associate } for pid=1331 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 6 00:19:58.718000 audit[1331]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0000240e2 a1=c00002a060 a2=c000028040 a3=32 items=0 ppid=1314 pid=1331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:19:58.718000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 00:19:58.721000 audit[1331]: AVC avc: denied { associate } for pid=1331 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 6 00:19:58.721000 audit[1331]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0000241b9 a2=1ed a3=0 items=2 ppid=1314 pid=1331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:19:58.721000 audit: CWD cwd="/"
Sep 6 00:19:58.721000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:19:58.721000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:19:58.721000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 00:20:03.992000 audit: BPF prog-id=12 op=LOAD
Sep 6 00:20:03.992000 audit: BPF prog-id=3 op=UNLOAD
Sep 6 00:20:03.994000 audit: BPF prog-id=13 op=LOAD
Sep 6 00:20:03.996000 audit: BPF prog-id=14 op=LOAD
Sep 6 00:20:03.996000 audit: BPF prog-id=4 op=UNLOAD
Sep 6 00:20:03.996000 audit: BPF prog-id=5 op=UNLOAD
Sep 6 00:20:03.997000 audit: BPF prog-id=15 op=LOAD
Sep 6 00:20:03.997000 audit: BPF prog-id=12 op=UNLOAD
Sep 6 00:20:03.999000 audit: BPF prog-id=16 op=LOAD
Sep 6 00:20:04.002000 audit: BPF prog-id=17 op=LOAD
Sep 6 00:20:04.002000 audit: BPF prog-id=13 op=UNLOAD
Sep 6 00:20:04.002000 audit: BPF prog-id=14 op=UNLOAD
Sep 6 00:20:04.010000 audit: BPF prog-id=18 op=LOAD
Sep 6 00:20:04.010000 audit: BPF prog-id=15 op=UNLOAD
Sep 6 00:20:04.010000 audit: BPF prog-id=19 op=LOAD
Sep 6 00:20:04.010000 audit: BPF prog-id=20 op=LOAD
Sep 6 00:20:04.010000 audit: BPF prog-id=16 op=UNLOAD
Sep 6 00:20:04.010000 audit: BPF prog-id=17 op=UNLOAD
Sep 6 00:20:04.340708 systemd[1]: Started systemd-journald.service.
Sep 6 00:20:04.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.019000 audit: BPF prog-id=18 op=UNLOAD
Sep 6 00:20:04.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.180000 audit: BPF prog-id=21 op=LOAD
Sep 6 00:20:04.180000 audit: BPF prog-id=22 op=LOAD
Sep 6 00:20:04.180000 audit: BPF prog-id=23 op=LOAD
Sep 6 00:20:04.180000 audit: BPF prog-id=19 op=UNLOAD
Sep 6 00:20:04.180000 audit: BPF prog-id=20 op=UNLOAD
Sep 6 00:20:04.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.321000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 6 00:20:04.321000 audit[1410]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdba7ba970 a2=4000 a3=7ffdba7baa0c items=0 ppid=1 pid=1410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:20:04.321000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 6 00:20:04.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:04.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:03.991112 systemd[1]: Queued start job for default target multi-user.target.
Sep 6 00:20:04.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:19:58.707364 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:20:03.991126 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device.
Sep 6 00:19:58.709279 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 6 00:20:04.011451 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 6 00:19:58.709304 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 6 00:20:04.339593 systemd[1]: Finished systemd-network-generator.service.
Sep 6 00:19:58.709338 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 6 00:20:04.341991 systemd[1]: Finished systemd-remount-fs.service.
Sep 6 00:19:58.709350 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 6 00:20:04.343362 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:19:58.709395 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 6 00:20:04.344921 systemd[1]: Reached target network-pre.target.
Sep 6 00:19:58.709410 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 6 00:20:04.346807 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 6 00:19:58.709622 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 6 00:19:58.709665 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 6 00:19:58.709678 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 6 00:19:58.711068 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 6 00:19:58.711109 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 6 00:19:58.711130 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 6 00:19:58.711146 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 6 00:19:58.711164 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 6 00:19:58.711179 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:19:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 6 00:20:03.393106 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:20:03Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 00:20:03.393367 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:20:03Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 00:20:03.393521 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:20:03Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 00:20:03.393784 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:20:03Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 00:20:03.393832 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:20:03Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 6 00:20:03.393892 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2025-09-06T00:20:03Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 6 00:20:04.352346 systemd[1]: Mounting sys-kernel-config.mount...
Sep 6 00:20:04.353528 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 6 00:20:04.357101 systemd[1]: Starting systemd-hwdb-update.service...
Sep 6 00:20:04.359411 systemd[1]: Starting systemd-journal-flush.service...
Sep 6 00:20:04.360406 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:20:04.361722 systemd[1]: Starting systemd-random-seed.service...
Sep 6 00:20:04.362842 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:20:04.364469 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:20:04.368294 systemd[1]: Starting systemd-sysusers.service...
Sep 6 00:20:04.372034 systemd[1]: Starting systemd-udev-settle.service...
Sep 6 00:20:04.379626 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 6 00:20:04.382819 systemd[1]: Mounted sys-kernel-config.mount.
Sep 6 00:20:04.390591 systemd-journald[1410]: Time spent on flushing to /var/log/journal/ec29d0c6daf6de0b92bbfc1f16337cb9 is 36.571ms for 1243 entries.
Sep 6 00:20:04.390591 systemd-journald[1410]: System Journal (/var/log/journal/ec29d0c6daf6de0b92bbfc1f16337cb9) is 8.0M, max 195.6M, 187.6M free.
Sep 6 00:20:04.436484 systemd-journald[1410]: Received client request to flush runtime journal. Sep 6 00:20:04.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:04.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:04.396767 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:20:04.437651 udevadm[1448]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 00:20:04.397824 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:20:04.428977 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:20:04.437657 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:20:04.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:04.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:04.565886 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:20:05.084461 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:20:05.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:05.085000 audit: BPF prog-id=24 op=LOAD Sep 6 00:20:05.085000 audit: BPF prog-id=25 op=LOAD Sep 6 00:20:05.085000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:20:05.085000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:20:05.086945 systemd[1]: Starting systemd-udevd.service... Sep 6 00:20:05.106486 systemd-udevd[1451]: Using default interface naming scheme 'v252'. Sep 6 00:20:05.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:05.192739 systemd[1]: Started systemd-udevd.service. Sep 6 00:20:05.199000 audit: BPF prog-id=26 op=LOAD Sep 6 00:20:05.200846 systemd[1]: Starting systemd-networkd.service... Sep 6 00:20:05.230000 audit: BPF prog-id=27 op=LOAD Sep 6 00:20:05.231000 audit: BPF prog-id=28 op=LOAD Sep 6 00:20:05.231000 audit: BPF prog-id=29 op=LOAD Sep 6 00:20:05.232361 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:20:05.244075 (udev-worker)[1454]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:20:05.253394 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 6 00:20:05.269099 systemd[1]: Started systemd-userdbd.service. Sep 6 00:20:05.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:05.345568 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 6 00:20:05.345000 audit[1453]: AVC avc: denied { confidentiality } for pid=1453 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:20:05.345000 audit[1453]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56143c3436c0 a1=338ec a2=7fb0609a3bc5 a3=5 items=110 ppid=1451 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:05.345000 audit: CWD cwd="/" Sep 6 00:20:05.345000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=1 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=2 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=3 name=(null) inode=14688 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=4 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=5 name=(null) inode=14689 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=6 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=7 name=(null) inode=14690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=8 name=(null) inode=14690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=9 name=(null) inode=14691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=10 name=(null) inode=14690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=11 name=(null) inode=14692 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=12 name=(null) inode=14690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=13 name=(null) inode=14693 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=14 name=(null) inode=14690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 
00:20:05.345000 audit: PATH item=15 name=(null) inode=14694 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=16 name=(null) inode=14690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=17 name=(null) inode=14695 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=18 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=19 name=(null) inode=14696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=20 name=(null) inode=14696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=21 name=(null) inode=14697 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=22 name=(null) inode=14696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=23 name=(null) inode=14698 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=24 name=(null) 
inode=14696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=25 name=(null) inode=14699 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=26 name=(null) inode=14696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=27 name=(null) inode=14700 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=28 name=(null) inode=14696 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=29 name=(null) inode=14701 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=30 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=31 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=32 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=33 name=(null) inode=14703 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=34 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=35 name=(null) inode=14704 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=36 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=37 name=(null) inode=14705 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=38 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=39 name=(null) inode=14706 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=40 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=41 name=(null) inode=14707 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=42 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=43 name=(null) inode=14708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=44 name=(null) inode=14708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=45 name=(null) inode=14709 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=46 name=(null) inode=14708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=47 name=(null) inode=14710 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=48 name=(null) inode=14708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=49 name=(null) inode=14711 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=50 name=(null) inode=14708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=51 name=(null) inode=14712 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=52 name=(null) inode=14708 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=53 name=(null) inode=14713 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=55 name=(null) inode=14714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=56 name=(null) inode=14714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=57 name=(null) inode=14715 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=58 name=(null) inode=14714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=59 name=(null) inode=14716 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=60 name=(null) inode=14714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: 
PATH item=61 name=(null) inode=14717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=62 name=(null) inode=14717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=63 name=(null) inode=14718 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=64 name=(null) inode=14717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=65 name=(null) inode=14719 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=66 name=(null) inode=14717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=67 name=(null) inode=14720 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=68 name=(null) inode=14717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=69 name=(null) inode=14721 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=70 name=(null) inode=14717 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=71 name=(null) inode=14722 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=72 name=(null) inode=14714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=73 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=74 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=75 name=(null) inode=14724 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=76 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=77 name=(null) inode=14725 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=78 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=79 name=(null) inode=14726 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=80 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=81 name=(null) inode=14727 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=82 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=83 name=(null) inode=14728 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=84 name=(null) inode=14714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=85 name=(null) inode=14729 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=86 name=(null) inode=14729 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=87 name=(null) inode=14730 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=88 name=(null) inode=14729 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=89 name=(null) inode=14731 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=90 name=(null) inode=14729 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=91 name=(null) inode=14732 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=92 name=(null) inode=14729 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=93 name=(null) inode=14733 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=94 name=(null) inode=14729 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=95 name=(null) inode=14734 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=96 name=(null) inode=14714 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=97 name=(null) inode=14735 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Sep 6 00:20:05.345000 audit: PATH item=98 name=(null) inode=14735 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=99 name=(null) inode=14736 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=100 name=(null) inode=14735 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=101 name=(null) inode=14737 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=102 name=(null) inode=14735 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=103 name=(null) inode=14738 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=104 name=(null) inode=14735 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=105 name=(null) inode=14739 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=106 name=(null) inode=14735 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=107 
name=(null) inode=14740 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PATH item=109 name=(null) inode=14741 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:05.345000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 00:20:05.387576 kernel: ACPI: button: Power Button [PWRF] Sep 6 00:20:05.397808 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 6 00:20:05.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:05.404640 systemd-networkd[1466]: lo: Link UP Sep 6 00:20:05.404646 systemd-networkd[1466]: lo: Gained carrier Sep 6 00:20:05.405257 systemd-networkd[1466]: Enumeration completed Sep 6 00:20:05.405375 systemd[1]: Started systemd-networkd.service. Sep 6 00:20:05.407806 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:20:05.408883 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 6 00:20:05.414277 systemd-networkd[1466]: eth0: Link UP Sep 6 00:20:05.414599 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:20:05.414941 systemd-networkd[1466]: eth0: Gained carrier Sep 6 00:20:05.424931 systemd-networkd[1466]: eth0: DHCPv4 address 172.31.31.235/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 6 00:20:05.431566 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Sep 6 00:20:05.442561 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 6 00:20:05.449568 kernel: ACPI: button: Sleep Button [SLPF] Sep 6 00:20:05.471593 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:20:05.566170 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:20:05.567187 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:20:05.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:05.569080 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:20:05.631069 lvm[1565]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:20:05.657079 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:20:05.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:05.657758 systemd[1]: Reached target cryptsetup.target. Sep 6 00:20:05.659968 systemd[1]: Starting lvm2-activation.service... Sep 6 00:20:05.666501 lvm[1566]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:20:05.695112 systemd[1]: Finished lvm2-activation.service. 
Sep 6 00:20:05.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:05.695763 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:20:05.696248 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:20:05.696291 systemd[1]: Reached target local-fs.target. Sep 6 00:20:05.696878 systemd[1]: Reached target machines.target. Sep 6 00:20:05.698668 systemd[1]: Starting ldconfig.service... Sep 6 00:20:05.700311 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:05.700398 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:05.701778 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:20:05.704120 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:20:05.706392 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:20:05.709383 systemd[1]: Starting systemd-sysext.service... Sep 6 00:20:05.731922 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1568 (bootctl) Sep 6 00:20:05.734274 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:20:05.737442 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:20:05.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:05.742079 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Sep 6 00:20:05.756925 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 6 00:20:05.757171 systemd[1]: Unmounted usr-share-oem.mount.
Sep 6 00:20:05.771576 kernel: loop0: detected capacity change from 0 to 221472
Sep 6 00:20:05.897118 systemd-fsck[1578]: fsck.fat 4.2 (2021-01-31)
Sep 6 00:20:05.897118 systemd-fsck[1578]: /dev/nvme0n1p1: 790 files, 120761/258078 clusters
Sep 6 00:20:05.899247 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 6 00:20:05.901773 systemd[1]: Mounting boot.mount...
Sep 6 00:20:05.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:05.926616 systemd[1]: Mounted boot.mount.
Sep 6 00:20:05.960094 systemd[1]: Finished systemd-boot-update.service.
Sep 6 00:20:05.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.018838 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 6 00:20:06.019433 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 6 00:20:06.032566 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 6 00:20:06.053635 kernel: loop1: detected capacity change from 0 to 221472
Sep 6 00:20:06.073592 (sd-sysext)[1593]: Using extensions 'kubernetes'.
Sep 6 00:20:06.075352 (sd-sysext)[1593]: Merged extensions into '/usr'.
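[Editor's note: the (sd-sysext) lines above show systemd-sysext overlaying a 'kubernetes' extension image onto /usr. As a rough sketch of how such an extension is laid out (hypothetical paths; the actual contents of this Flatcar extension are not shown in the log), an extension tree generally looks like:]

```
/var/lib/extensions/kubernetes/
└── usr/
    ├── bin/kubelet                                          (hypothetical payload)
    └── lib/extension-release.d/extension-release.kubernetes (required metadata)
```

[The extension-release file must carry an ID= field matching the host's /etc/os-release (or ID=_any) for systemd-sysext to accept the merge; without it the extension is rejected rather than merged into /usr.]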
Sep 6 00:20:06.094290 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:20:06.096056 systemd[1]: Mounting usr-share-oem.mount...
Sep 6 00:20:06.097588 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:20:06.099999 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:20:06.105003 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:20:06.107850 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:20:06.108914 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:20:06.109371 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:20:06.109626 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:20:06.113059 systemd[1]: Mounted usr-share-oem.mount.
Sep 6 00:20:06.113980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:20:06.114162 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:20:06.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.115229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:20:06.115396 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:20:06.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.116641 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:20:06.116807 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:20:06.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.119116 systemd[1]: Finished systemd-sysext.service.
Sep 6 00:20:06.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.122287 systemd[1]: Starting ensure-sysext.service...
Sep 6 00:20:06.123213 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:20:06.123306 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:20:06.124805 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 6 00:20:06.133235 systemd[1]: Reloading.
Sep 6 00:20:06.164310 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 6 00:20:06.199918 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 6 00:20:06.220295 /usr/lib/systemd/system-generators/torcx-generator[1620]: time="2025-09-06T00:20:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:20:06.221621 /usr/lib/systemd/system-generators/torcx-generator[1620]: time="2025-09-06T00:20:06Z" level=info msg="torcx already run"
Sep 6 00:20:06.221808 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 6 00:20:06.357424 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:20:06.357668 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:20:06.396687 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
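[Editor's note: the three unit-file warnings above point at cgroup-v1-era directives and a legacy socket path. Under the unified cgroup hierarchy they map to newer directives as sketched below. This is a hypothetical drop-in, not the actual Flatcar unit; the example values are illustrative, not taken from the log.]

```ini
# Hypothetical drop-in: /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf
[Service]
# CPUShares= (cgroup v1, 2..262144, default 1024) becomes
# CPUWeight= (cgroup v2, 1..10000, default 100).
CPUWeight=100
# MemoryLimit= becomes MemoryMax= (hard limit on the unit's memory use).
MemoryMax=512M

# Hypothetical drop-in: /etc/systemd/system/docker.socket.d/10-run-path.conf
[Socket]
# /var/run is a symlink to /run; systemd wants the canonical path.
ListenStream=/run/docker.sock
```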
Sep 6 00:20:06.458653 systemd-networkd[1466]: eth0: Gained IPv6LL
Sep 6 00:20:06.519000 audit: BPF prog-id=30 op=LOAD
Sep 6 00:20:06.519000 audit: BPF prog-id=27 op=UNLOAD
Sep 6 00:20:06.519000 audit: BPF prog-id=31 op=LOAD
Sep 6 00:20:06.519000 audit: BPF prog-id=32 op=LOAD
Sep 6 00:20:06.519000 audit: BPF prog-id=28 op=UNLOAD
Sep 6 00:20:06.519000 audit: BPF prog-id=29 op=UNLOAD
Sep 6 00:20:06.520000 audit: BPF prog-id=33 op=LOAD
Sep 6 00:20:06.520000 audit: BPF prog-id=34 op=LOAD
Sep 6 00:20:06.520000 audit: BPF prog-id=24 op=UNLOAD
Sep 6 00:20:06.520000 audit: BPF prog-id=25 op=UNLOAD
Sep 6 00:20:06.522000 audit: BPF prog-id=35 op=LOAD
Sep 6 00:20:06.522000 audit: BPF prog-id=26 op=UNLOAD
Sep 6 00:20:06.524000 audit: BPF prog-id=36 op=LOAD
Sep 6 00:20:06.524000 audit: BPF prog-id=21 op=UNLOAD
Sep 6 00:20:06.524000 audit: BPF prog-id=37 op=LOAD
Sep 6 00:20:06.524000 audit: BPF prog-id=38 op=LOAD
Sep 6 00:20:06.524000 audit: BPF prog-id=22 op=UNLOAD
Sep 6 00:20:06.524000 audit: BPF prog-id=23 op=UNLOAD
Sep 6 00:20:06.530358 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 6 00:20:06.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.531577 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 6 00:20:06.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.541127 systemd[1]: Starting audit-rules.service...
Sep 6 00:20:06.543345 systemd[1]: Starting clean-ca-certificates.service...
Sep 6 00:20:06.546796 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 6 00:20:06.550000 audit: BPF prog-id=39 op=LOAD
Sep 6 00:20:06.554000 audit: BPF prog-id=40 op=LOAD
Sep 6 00:20:06.552638 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:20:06.556633 systemd[1]: Starting systemd-timesyncd.service...
Sep 6 00:20:06.559372 systemd[1]: Starting systemd-update-utmp.service...
Sep 6 00:20:06.573449 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:20:06.575476 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:20:06.578986 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:20:06.582788 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:20:06.583567 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:20:06.583752 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:20:06.585074 systemd[1]: Finished clean-ca-certificates.service.
Sep 6 00:20:06.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.588298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:20:06.588485 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:20:06.589824 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:20:06.590007 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:20:06.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.593000 audit[1682]: SYSTEM_BOOT pid=1682 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.592768 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:20:06.592937 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:20:06.598027 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:20:06.598306 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:20:06.598505 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:20:06.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.603412 systemd[1]: Finished systemd-update-utmp.service.
Sep 6 00:20:06.607457 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:20:06.609569 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:20:06.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.612929 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:20:06.616430 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:20:06.617697 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:20:06.617905 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:20:06.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.618084 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:20:06.619314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:20:06.619505 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:20:06.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.623202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:20:06.623375 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:20:06.624524 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:20:06.632206 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:20:06.634278 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:20:06.638423 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:20:06.640957 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:20:06.641928 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:20:06.642146 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:20:06.642366 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:20:06.643644 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:20:06.643825 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:20:06.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.651945 systemd[1]: Finished ensure-sysext.service.
Sep 6 00:20:06.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.655655 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:20:06.655830 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:20:06.656620 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:20:06.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.663535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:20:06.663726 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:20:06.664454 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:20:06.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.665095 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:20:06.665279 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:20:06.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.690349 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 6 00:20:06.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:06.732000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 6 00:20:06.732000 audit[1705]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe4537b830 a2=420 a3=0 items=0 ppid=1676 pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:20:06.732000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 6 00:20:06.734135 systemd[1]: Finished audit-rules.service.
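[Editor's note: the PROCTITLE field in the audit record above is the command line of the process, hex-encoded with NUL bytes between arguments. A small sketch to decode it, using the value from the log:]

```python
# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hex_str: str) -> str:
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

# Value copied from the PROCTITLE record in the log above.
proctitle = ("2F7362696E2F617564697463746C002D52"
             "002F6574632F61756469742F61756469742E72756C6573")
print(decode_proctitle(proctitle))  # → /sbin/auditctl -R /etc/audit/audit.rules
```

[This confirms the CONFIG_CHANGE record: auditctl was invoked by audit-rules.service to load /etc/audit/audit.rules, which augenrules then reports as containing no rules.]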
Sep 6 00:20:06.735932 augenrules[1705]: No rules
Sep 6 00:20:06.750287 systemd-resolved[1680]: Positive Trust Anchors:
Sep 6 00:20:06.750772 systemd-resolved[1680]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:20:06.750905 systemd-resolved[1680]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:20:06.751714 systemd[1]: Started systemd-timesyncd.service.
Sep 6 00:20:06.752261 systemd[1]: Reached target time-set.target.
Sep 6 00:20:06.755892 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:20:06.755915 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:20:06.785414 systemd-resolved[1680]: Defaulting to hostname 'linux'.
Sep 6 00:20:06.787229 systemd[1]: Started systemd-resolved.service.
Sep 6 00:20:06.787715 systemd[1]: Reached target network.target.
Sep 6 00:20:06.788039 systemd[1]: Reached target network-online.target.
Sep 6 00:20:06.788345 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:20:06.838993 ldconfig[1567]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 6 00:20:06.850745 systemd[1]: Finished ldconfig.service.
Sep 6 00:20:06.853065 systemd[1]: Starting systemd-update-done.service...
Sep 6 00:20:06.863040 systemd[1]: Finished systemd-update-done.service.
Sep 6 00:20:06.863606 systemd[1]: Reached target sysinit.target.
Sep 6 00:20:06.864104 systemd[1]: Started motdgen.path.
Sep 6 00:20:06.864504 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 6 00:20:06.865080 systemd[1]: Started logrotate.timer.
Sep 6 00:20:06.865574 systemd[1]: Started mdadm.timer.
Sep 6 00:20:06.865934 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 6 00:20:06.866407 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 6 00:20:06.866454 systemd[1]: Reached target paths.target.
Sep 6 00:20:06.866847 systemd[1]: Reached target timers.target.
Sep 6 00:20:06.867532 systemd[1]: Listening on dbus.socket.
Sep 6 00:20:06.869294 systemd[1]: Starting docker.socket...
Sep 6 00:20:06.876302 systemd[1]: Listening on sshd.socket.
Sep 6 00:20:06.876918 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:20:06.877505 systemd[1]: Listening on docker.socket.
Sep 6 00:20:06.878027 systemd[1]: Reached target sockets.target.
Sep 6 00:20:06.878492 systemd[1]: Reached target basic.target.
Sep 6 00:20:06.879109 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:20:06.879149 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:20:06.880482 systemd[1]: Started amazon-ssm-agent.service.
Sep 6 00:20:06.882583 systemd[1]: Starting containerd.service...
Sep 6 00:20:06.886748 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Sep 6 00:20:06.888825 systemd[1]: Starting dbus.service...
Sep 6 00:20:06.891224 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 6 00:20:06.894130 systemd[1]: Starting extend-filesystems.service...
Sep 6 00:20:06.896210 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 6 00:20:06.898154 systemd[1]: Starting kubelet.service...
Sep 6 00:20:06.937648 jq[1718]: false
Sep 6 00:20:06.904448 systemd[1]: Starting motdgen.service...
Sep 6 00:20:06.908819 systemd[1]: Started nvidia.service.
Sep 6 00:20:06.911378 systemd[1]: Starting prepare-helm.service...
Sep 6 00:20:06.923439 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 6 00:20:06.926133 systemd[1]: Starting sshd-keygen.service...
Sep 6 00:20:06.936520 systemd[1]: Starting systemd-logind.service...
Sep 6 00:20:06.939214 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:20:06.939331 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 6 00:20:06.940734 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 6 00:20:06.943339 systemd[1]: Starting update-engine.service...
Sep 6 00:20:06.965045 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 6 00:20:08.140840 systemd-resolved[1680]: Clock change detected. Flushing caches.
Sep 6 00:20:08.141012 systemd-timesyncd[1681]: Contacted time server 44.190.5.123:123 (0.flatcar.pool.ntp.org).
Sep 6 00:20:08.141084 systemd-timesyncd[1681]: Initial clock synchronization to Sat 2025-09-06 00:20:08.140781 UTC.
Sep 6 00:20:08.142743 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 00:20:08.143022 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 6 00:20:08.177351 jq[1733]: true
Sep 6 00:20:08.203146 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 6 00:20:08.203382 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 6 00:20:08.260592 tar[1738]: linux-amd64/helm
Sep 6 00:20:08.262588 jq[1743]: true
Sep 6 00:20:08.286551 extend-filesystems[1719]: Found loop1
Sep 6 00:20:08.288684 extend-filesystems[1719]: Found nvme0n1
Sep 6 00:20:08.288684 extend-filesystems[1719]: Found nvme0n1p1
Sep 6 00:20:08.288684 extend-filesystems[1719]: Found nvme0n1p2
Sep 6 00:20:08.288684 extend-filesystems[1719]: Found nvme0n1p3
Sep 6 00:20:08.288684 extend-filesystems[1719]: Found usr
Sep 6 00:20:08.288684 extend-filesystems[1719]: Found nvme0n1p4
Sep 6 00:20:08.288684 extend-filesystems[1719]: Found nvme0n1p6
Sep 6 00:20:08.288684 extend-filesystems[1719]: Found nvme0n1p7
Sep 6 00:20:08.288684 extend-filesystems[1719]: Found nvme0n1p9
Sep 6 00:20:08.288684 extend-filesystems[1719]: Checking size of /dev/nvme0n1p9
Sep 6 00:20:08.406232 extend-filesystems[1719]: Resized partition /dev/nvme0n1p9
Sep 6 00:20:08.294919 dbus-daemon[1717]: [system] SELinux support is enabled
Sep 6 00:20:08.295146 systemd[1]: Started dbus.service.
Sep 6 00:20:08.344805 dbus-daemon[1717]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1466 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 6 00:20:08.299073 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 00:20:08.299109 systemd[1]: Reached target system-config.target.
Sep 6 00:20:08.299824 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 6 00:20:08.299845 systemd[1]: Reached target user-config.target.
Sep 6 00:20:08.360934 systemd[1]: Starting systemd-hostnamed.service...
Sep 6 00:20:08.367097 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 00:20:08.367291 systemd[1]: Finished motdgen.service.
Sep 6 00:20:08.425160 extend-filesystems[1771]: resize2fs 1.46.5 (30-Dec-2021)
Sep 6 00:20:08.430852 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 6 00:20:08.438422 amazon-ssm-agent[1714]: 2025/09/06 00:20:08 Failed to load instance info from vault. RegistrationKey does not exist.
Sep 6 00:20:08.445139 amazon-ssm-agent[1714]: Initializing new seelog logger
Sep 6 00:20:08.450572 amazon-ssm-agent[1714]: New Seelog Logger Creation Complete
Sep 6 00:20:08.450837 amazon-ssm-agent[1714]: 2025/09/06 00:20:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 6 00:20:08.450945 amazon-ssm-agent[1714]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 6 00:20:08.453749 update_engine[1730]: I0906 00:20:08.452195 1730 main.cc:92] Flatcar Update Engine starting
Sep 6 00:20:08.457977 systemd[1]: Started update-engine.service.
Sep 6 00:20:08.463756 update_engine[1730]: I0906 00:20:08.462198 1730 update_check_scheduler.cc:74] Next update check in 9m41s
Sep 6 00:20:08.461077 systemd[1]: Started locksmithd.service.
Sep 6 00:20:08.467442 amazon-ssm-agent[1714]: 2025/09/06 00:20:08 processing appconfig overrides
Sep 6 00:20:08.521378 env[1744]: time="2025-09-06T00:20:08.521290481Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 6 00:20:08.528612 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 6 00:20:08.547115 extend-filesystems[1771]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 6 00:20:08.547115 extend-filesystems[1771]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 6 00:20:08.547115 extend-filesystems[1771]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 6 00:20:08.546014 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 6 00:20:08.551126 extend-filesystems[1719]: Resized filesystem in /dev/nvme0n1p9
Sep 6 00:20:08.546255 systemd[1]: Finished extend-filesystems.service.
Sep 6 00:20:08.554651 bash[1792]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:20:08.553689 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 6 00:20:08.611389 env[1744]: time="2025-09-06T00:20:08.611330756Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 00:20:08.611738 env[1744]: time="2025-09-06T00:20:08.611711878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:20:08.613698 env[1744]: time="2025-09-06T00:20:08.613658775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:20:08.613820 env[1744]: time="2025-09-06T00:20:08.613800430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:20:08.614238 env[1744]: time="2025-09-06T00:20:08.614211462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:20:08.614328 env[1744]: time="2025-09-06T00:20:08.614312374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 00:20:08.614396 env[1744]: time="2025-09-06T00:20:08.614382613Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 6 00:20:08.614454 env[1744]: time="2025-09-06T00:20:08.614441924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 00:20:08.614627 env[1744]: time="2025-09-06T00:20:08.614608949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:20:08.616961 env[1744]: time="2025-09-06T00:20:08.616933701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:20:08.617296 env[1744]: time="2025-09-06T00:20:08.617254156Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:20:08.619274 env[1744]: time="2025-09-06T00:20:08.619245838Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 00:20:08.619458 env[1744]: time="2025-09-06T00:20:08.619438548Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 6 00:20:08.619553 env[1744]: time="2025-09-06T00:20:08.619538180Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.630806873Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.630864391Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.630892266Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.630936429Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.630958257Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.630978494Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.630996994Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.631017251Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.631036129Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.631057772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.631077227Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.631096218Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.631252227Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 6 00:20:08.631590 env[1744]: time="2025-09-06T00:20:08.631347840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.632473699Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.632531428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.632554347Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.632635960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.632660853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.632679966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.632697188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.632716001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.632734657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.632753016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..."
type=io.containerd.grpc.v1 Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.632770650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.632793570Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.632975327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.633000177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:20:08.633591 env[1744]: time="2025-09-06T00:20:08.633021476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:20:08.634264 env[1744]: time="2025-09-06T00:20:08.633039884Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:20:08.634264 env[1744]: time="2025-09-06T00:20:08.633065378Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:20:08.634264 env[1744]: time="2025-09-06T00:20:08.633083559Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:20:08.634264 env[1744]: time="2025-09-06T00:20:08.633111715Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:20:08.634264 env[1744]: time="2025-09-06T00:20:08.633156237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 00:20:08.634467 env[1744]: time="2025-09-06T00:20:08.633471701Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:20:08.638163 env[1744]: time="2025-09-06T00:20:08.633554492Z" level=info msg="Connect containerd service" Sep 6 00:20:08.638163 env[1744]: time="2025-09-06T00:20:08.634744782Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:20:08.638163 env[1744]: time="2025-09-06T00:20:08.635650634Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:20:08.638163 env[1744]: time="2025-09-06T00:20:08.635962359Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:20:08.638163 env[1744]: time="2025-09-06T00:20:08.636012830Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:20:08.638163 env[1744]: time="2025-09-06T00:20:08.637083415Z" level=info msg="containerd successfully booted in 0.126905s" Sep 6 00:20:08.636159 systemd[1]: Started containerd.service. 
Sep 6 00:20:08.638864 env[1744]: time="2025-09-06T00:20:08.638815845Z" level=info msg="Start subscribing containerd event" Sep 6 00:20:08.638953 env[1744]: time="2025-09-06T00:20:08.638900166Z" level=info msg="Start recovering state" Sep 6 00:20:08.639060 env[1744]: time="2025-09-06T00:20:08.639039858Z" level=info msg="Start event monitor" Sep 6 00:20:08.639108 env[1744]: time="2025-09-06T00:20:08.639073129Z" level=info msg="Start snapshots syncer" Sep 6 00:20:08.639213 env[1744]: time="2025-09-06T00:20:08.639089983Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:20:08.639255 env[1744]: time="2025-09-06T00:20:08.639222446Z" level=info msg="Start streaming server" Sep 6 00:20:08.650756 dbus-daemon[1717]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 6 00:20:08.650942 systemd[1]: Started systemd-hostnamed.service. Sep 6 00:20:08.652224 dbus-daemon[1717]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1766 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 6 00:20:08.655485 systemd[1]: Starting polkit.service... Sep 6 00:20:08.670506 systemd-logind[1726]: Watching system buttons on /dev/input/event1 (Power Button) Sep 6 00:20:08.675661 systemd-logind[1726]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 6 00:20:08.676606 systemd-logind[1726]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 00:20:08.679915 polkitd[1811]: Started polkitd version 121 Sep 6 00:20:08.680394 systemd-logind[1726]: New seat seat0. Sep 6 00:20:08.691819 systemd[1]: Started systemd-logind.service. 
Sep 6 00:20:08.702268 polkitd[1811]: Loading rules from directory /etc/polkit-1/rules.d Sep 6 00:20:08.702350 polkitd[1811]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 6 00:20:08.704638 polkitd[1811]: Finished loading, compiling and executing 2 rules Sep 6 00:20:08.705652 dbus-daemon[1717]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 6 00:20:08.706778 systemd[1]: Started polkit.service. Sep 6 00:20:08.706385 polkitd[1811]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 6 00:20:08.724787 systemd-hostnamed[1766]: Hostname set to (transient) Sep 6 00:20:08.724788 systemd-resolved[1680]: System hostname changed to 'ip-172-31-31-235'. Sep 6 00:20:08.735942 systemd[1]: nvidia.service: Deactivated successfully. Sep 6 00:20:09.049411 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO Create new startup processor Sep 6 00:20:09.050422 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [LongRunningPluginsManager] registered plugins: {} Sep 6 00:20:09.050530 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO Initializing bookkeeping folders Sep 6 00:20:09.050530 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO removing the completed state files Sep 6 00:20:09.050530 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO Initializing bookkeeping folders for long running plugins Sep 6 00:20:09.050530 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Sep 6 00:20:09.050530 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO Initializing healthcheck folders for long running plugins Sep 6 00:20:09.050530 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO Initializing locations for inventory plugin Sep 6 00:20:09.050530 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO Initializing default location for custom inventory Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO Initializing default location for file inventory Sep 6 
00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO Initializing default location for role inventory Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO Init the cloudwatchlogs publisher Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [instanceID=i-0501aab6aab877d96] Successfully loaded platform independent plugin aws:softwareInventory Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [instanceID=i-0501aab6aab877d96] Successfully loaded platform independent plugin aws:updateSsmAgent Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [instanceID=i-0501aab6aab877d96] Successfully loaded platform independent plugin aws:configureDocker Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [instanceID=i-0501aab6aab877d96] Successfully loaded platform independent plugin aws:runDockerAction Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [instanceID=i-0501aab6aab877d96] Successfully loaded platform independent plugin aws:configurePackage Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [instanceID=i-0501aab6aab877d96] Successfully loaded platform independent plugin aws:runDocument Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [instanceID=i-0501aab6aab877d96] Successfully loaded platform independent plugin aws:runPowerShellScript Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [instanceID=i-0501aab6aab877d96] Successfully loaded platform independent plugin aws:refreshAssociation Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [instanceID=i-0501aab6aab877d96] Successfully loaded platform independent plugin aws:downloadContent Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [instanceID=i-0501aab6aab877d96] Successfully loaded platform dependent plugin aws:runShellScript Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 
00:20:09 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Sep 6 00:20:09.050820 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO OS: linux, Arch: amd64 Sep 6 00:20:09.053254 amazon-ssm-agent[1714]: datastore file /var/lib/amazon/ssm/i-0501aab6aab877d96/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Sep 6 00:20:09.061883 coreos-metadata[1716]: Sep 06 00:20:09.061 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 6 00:20:09.071459 coreos-metadata[1716]: Sep 06 00:20:09.066 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Sep 6 00:20:09.076480 coreos-metadata[1716]: Sep 06 00:20:09.076 INFO Fetch successful Sep 6 00:20:09.076480 coreos-metadata[1716]: Sep 06 00:20:09.076 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 6 00:20:09.077156 coreos-metadata[1716]: Sep 06 00:20:09.077 INFO Fetch successful Sep 6 00:20:09.081726 unknown[1716]: wrote ssh authorized keys file for user: core Sep 6 00:20:09.122870 update-ssh-keys[1892]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:20:09.124157 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Sep 6 00:20:09.149341 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [OfflineService] Starting document processing engine... Sep 6 00:20:09.244232 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [OfflineService] [EngineProcessor] Starting Sep 6 00:20:09.338595 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [OfflineService] [EngineProcessor] Initial processing Sep 6 00:20:09.432992 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [OfflineService] Starting message polling Sep 6 00:20:09.510784 tar[1738]: linux-amd64/LICENSE Sep 6 00:20:09.511307 tar[1738]: linux-amd64/README.md Sep 6 00:20:09.518948 systemd[1]: Finished prepare-helm.service. 
Sep 6 00:20:09.527677 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [OfflineService] Starting send replies to MDS Sep 6 00:20:09.623501 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessagingDeliveryService] Starting document processing engine... Sep 6 00:20:09.686767 locksmithd[1791]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:20:09.717739 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessagingDeliveryService] [EngineProcessor] Starting Sep 6 00:20:09.813032 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Sep 6 00:20:09.908583 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessagingDeliveryService] Starting message polling Sep 6 00:20:10.004235 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessagingDeliveryService] Starting send replies to MDS Sep 6 00:20:10.100104 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [instanceID=i-0501aab6aab877d96] Starting association polling Sep 6 00:20:10.196352 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Sep 6 00:20:10.292614 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessagingDeliveryService] [Association] Launching response handler Sep 6 00:20:10.390173 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Sep 6 00:20:10.486917 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Sep 6 00:20:10.584748 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Sep 6 00:20:10.681806 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessageGatewayService] Starting session document processing engine... 
Sep 6 00:20:10.779294 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessageGatewayService] [EngineProcessor] Starting Sep 6 00:20:10.876691 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Sep 6 00:20:10.974368 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0501aab6aab877d96, requestId: b5ec1c64-0ad3-49a6-9be1-edf6bf7f3c2e Sep 6 00:20:11.072263 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [LongRunningPluginsManager] starting long running plugin manager Sep 6 00:20:11.170263 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Sep 6 00:20:11.268510 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [HealthCheck] HealthCheck reporting agent health. Sep 6 00:20:11.367029 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessageGatewayService] listening reply. 
Sep 6 00:20:11.465600 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Sep 6 00:20:11.564775 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [StartupProcessor] Executing startup processor tasks Sep 6 00:20:11.663874 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Sep 6 00:20:11.764130 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Sep 6 00:20:11.863586 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8 Sep 6 00:20:11.963255 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0501aab6aab877d96?role=subscribe&stream=input Sep 6 00:20:12.063025 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0501aab6aab877d96?role=subscribe&stream=input Sep 6 00:20:12.163035 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessageGatewayService] Starting receiving message from control channel Sep 6 00:20:12.264669 amazon-ssm-agent[1714]: 2025-09-06 00:20:09 INFO [MessageGatewayService] [EngineProcessor] Initial processing Sep 6 00:20:12.545669 systemd[1]: Started kubelet.service. 
Sep 6 00:20:14.610536 kubelet[1912]: E0906 00:20:14.610492 1912 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:20:14.614259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:20:14.614434 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:20:14.614740 systemd[1]: kubelet.service: Consumed 1.324s CPU time. Sep 6 00:20:15.002326 sshd_keygen[1751]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:20:15.027584 systemd[1]: Finished sshd-keygen.service. Sep 6 00:20:15.029624 systemd[1]: Starting issuegen.service... Sep 6 00:20:15.036163 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:20:15.036392 systemd[1]: Finished issuegen.service. Sep 6 00:20:15.038735 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:20:15.046662 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:20:15.049072 systemd[1]: Started getty@tty1.service. Sep 6 00:20:15.051380 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:20:15.052389 systemd[1]: Reached target getty.target. Sep 6 00:20:15.053130 systemd[1]: Reached target multi-user.target. Sep 6 00:20:15.055378 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:20:15.064900 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:20:15.065118 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:20:15.065847 systemd[1]: Startup finished in 590ms (kernel) + 8.511s (initrd) + 15.620s (userspace) = 24.722s. Sep 6 00:20:16.890722 systemd[1]: Created slice system-sshd.slice. Sep 6 00:20:16.892279 systemd[1]: Started sshd@0-172.31.31.235:22-139.178.68.195:37176.service. 
Sep 6 00:20:17.066165 sshd[1933]: Accepted publickey for core from 139.178.68.195 port 37176 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:20:17.069054 sshd[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:17.084881 systemd[1]: Created slice user-500.slice. Sep 6 00:20:17.086815 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:20:17.089793 systemd-logind[1726]: New session 1 of user core. Sep 6 00:20:17.100128 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:20:17.102658 systemd[1]: Starting user@500.service... Sep 6 00:20:17.107049 (systemd)[1936]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:17.209010 systemd[1936]: Queued start job for default target default.target. Sep 6 00:20:17.209659 systemd[1936]: Reached target paths.target. Sep 6 00:20:17.209692 systemd[1936]: Reached target sockets.target. Sep 6 00:20:17.209706 systemd[1936]: Reached target timers.target. Sep 6 00:20:17.209719 systemd[1936]: Reached target basic.target. Sep 6 00:20:17.209830 systemd[1]: Started user@500.service. Sep 6 00:20:17.210785 systemd[1]: Started session-1.scope. Sep 6 00:20:17.211462 systemd[1936]: Reached target default.target. Sep 6 00:20:17.211634 systemd[1936]: Startup finished in 97ms. Sep 6 00:20:17.363649 systemd[1]: Started sshd@1-172.31.31.235:22-139.178.68.195:37178.service. Sep 6 00:20:17.514290 sshd[1945]: Accepted publickey for core from 139.178.68.195 port 37178 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:20:17.515529 sshd[1945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:17.520685 systemd-logind[1726]: New session 2 of user core. Sep 6 00:20:17.521513 systemd[1]: Started session-2.scope. 
Sep 6 00:20:17.648701 sshd[1945]: pam_unix(sshd:session): session closed for user core Sep 6 00:20:17.651316 systemd[1]: sshd@1-172.31.31.235:22-139.178.68.195:37178.service: Deactivated successfully. Sep 6 00:20:17.652191 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:20:17.652984 systemd-logind[1726]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:20:17.654009 systemd-logind[1726]: Removed session 2. Sep 6 00:20:17.674773 systemd[1]: Started sshd@2-172.31.31.235:22-139.178.68.195:37188.service. Sep 6 00:20:17.834352 sshd[1951]: Accepted publickey for core from 139.178.68.195 port 37188 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:20:17.835360 sshd[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:17.839740 systemd-logind[1726]: New session 3 of user core. Sep 6 00:20:17.840200 systemd[1]: Started session-3.scope. Sep 6 00:20:17.965178 sshd[1951]: pam_unix(sshd:session): session closed for user core Sep 6 00:20:17.967994 systemd[1]: sshd@2-172.31.31.235:22-139.178.68.195:37188.service: Deactivated successfully. Sep 6 00:20:17.968674 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:20:17.969179 systemd-logind[1726]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:20:17.970296 systemd-logind[1726]: Removed session 3. Sep 6 00:20:17.990593 systemd[1]: Started sshd@3-172.31.31.235:22-139.178.68.195:37196.service. Sep 6 00:20:18.148762 sshd[1957]: Accepted publickey for core from 139.178.68.195 port 37196 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:20:18.149842 sshd[1957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:18.154683 systemd-logind[1726]: New session 4 of user core. Sep 6 00:20:18.155382 systemd[1]: Started session-4.scope. 
Sep 6 00:20:18.282076 sshd[1957]: pam_unix(sshd:session): session closed for user core Sep 6 00:20:18.284712 systemd[1]: sshd@3-172.31.31.235:22-139.178.68.195:37196.service: Deactivated successfully. Sep 6 00:20:18.285579 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:20:18.286219 systemd-logind[1726]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:20:18.287188 systemd-logind[1726]: Removed session 4. Sep 6 00:20:18.306600 systemd[1]: Started sshd@4-172.31.31.235:22-139.178.68.195:37200.service. Sep 6 00:20:18.462037 sshd[1963]: Accepted publickey for core from 139.178.68.195 port 37200 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:20:18.463064 sshd[1963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:18.467679 systemd-logind[1726]: New session 5 of user core. Sep 6 00:20:18.468551 systemd[1]: Started session-5.scope. Sep 6 00:20:18.592048 sudo[1966]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:20:18.592298 sudo[1966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:20:18.619506 systemd[1]: Starting docker.service... 
Sep 6 00:20:18.659716 env[1976]: time="2025-09-06T00:20:18.659655106Z" level=info msg="Starting up" Sep 6 00:20:18.661818 env[1976]: time="2025-09-06T00:20:18.661530545Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:20:18.661818 env[1976]: time="2025-09-06T00:20:18.661589185Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:20:18.661818 env[1976]: time="2025-09-06T00:20:18.661621373Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:20:18.661818 env[1976]: time="2025-09-06T00:20:18.661639661Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:20:18.664514 env[1976]: time="2025-09-06T00:20:18.664488040Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:20:18.664663 env[1976]: time="2025-09-06T00:20:18.664592698Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:20:18.664663 env[1976]: time="2025-09-06T00:20:18.664620508Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:20:18.664663 env[1976]: time="2025-09-06T00:20:18.664633992Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:20:18.671487 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1055574947-merged.mount: Deactivated successfully. Sep 6 00:20:18.700341 env[1976]: time="2025-09-06T00:20:18.700307559Z" level=info msg="Loading containers: start." Sep 6 00:20:18.842587 kernel: Initializing XFRM netlink socket Sep 6 00:20:18.882738 env[1976]: time="2025-09-06T00:20:18.882696915Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Sep 6 00:20:18.886308 (udev-worker)[1986]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:20:18.950780 systemd-networkd[1466]: docker0: Link UP Sep 6 00:20:18.966258 env[1976]: time="2025-09-06T00:20:18.966219367Z" level=info msg="Loading containers: done." Sep 6 00:20:18.976339 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck200258291-merged.mount: Deactivated successfully. Sep 6 00:20:18.983665 env[1976]: time="2025-09-06T00:20:18.983611715Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:20:18.983860 env[1976]: time="2025-09-06T00:20:18.983805875Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:20:18.983923 env[1976]: time="2025-09-06T00:20:18.983904403Z" level=info msg="Daemon has completed initialization" Sep 6 00:20:19.000059 systemd[1]: Started docker.service. Sep 6 00:20:19.005214 env[1976]: time="2025-09-06T00:20:19.004638634Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:20:20.668239 amazon-ssm-agent[1714]: 2025-09-06 00:20:20 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Sep 6 00:20:20.737503 env[1744]: time="2025-09-06T00:20:20.737447654Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 00:20:21.473097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531242868.mount: Deactivated successfully. 
Sep 6 00:20:23.578148 env[1744]: time="2025-09-06T00:20:23.578075638Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:23.582259 env[1744]: time="2025-09-06T00:20:23.582208505Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:23.585855 env[1744]: time="2025-09-06T00:20:23.585811023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:23.588759 env[1744]: time="2025-09-06T00:20:23.588718063Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:23.589581 env[1744]: time="2025-09-06T00:20:23.589527066Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 6 00:20:23.590463 env[1744]: time="2025-09-06T00:20:23.590436335Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 00:20:24.865646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:20:24.865924 systemd[1]: Stopped kubelet.service. Sep 6 00:20:24.865980 systemd[1]: kubelet.service: Consumed 1.324s CPU time. Sep 6 00:20:24.867875 systemd[1]: Starting kubelet.service... Sep 6 00:20:25.093597 systemd[1]: Started kubelet.service. 
Sep 6 00:20:25.164169 kubelet[2103]: E0906 00:20:25.164053 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:20:25.167836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:20:25.168011 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:20:26.020020 env[1744]: time="2025-09-06T00:20:26.019949786Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:26.024586 env[1744]: time="2025-09-06T00:20:26.024511042Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:26.028096 env[1744]: time="2025-09-06T00:20:26.028035198Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:26.031163 env[1744]: time="2025-09-06T00:20:26.031120425Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:26.031976 env[1744]: time="2025-09-06T00:20:26.031908383Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\""
Sep 6 00:20:26.032591 env[1744]: time="2025-09-06T00:20:26.032568415Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 6 00:20:27.783165 env[1744]: time="2025-09-06T00:20:27.783100123Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:27.785494 env[1744]: time="2025-09-06T00:20:27.785448941Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:27.787674 env[1744]: time="2025-09-06T00:20:27.787633842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:27.790305 env[1744]: time="2025-09-06T00:20:27.790254582Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:27.794077 env[1744]: time="2025-09-06T00:20:27.794020518Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 6 00:20:27.795280 env[1744]: time="2025-09-06T00:20:27.795239723Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 6 00:20:29.250191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617983820.mount: Deactivated successfully.
Sep 6 00:20:29.939110 env[1744]: time="2025-09-06T00:20:29.939055469Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:29.943667 env[1744]: time="2025-09-06T00:20:29.943627047Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:29.946692 env[1744]: time="2025-09-06T00:20:29.946633301Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:29.949267 env[1744]: time="2025-09-06T00:20:29.949147888Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:29.949825 env[1744]: time="2025-09-06T00:20:29.949785547Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\""
Sep 6 00:20:29.950492 env[1744]: time="2025-09-06T00:20:29.950459299Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 6 00:20:30.576255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4092246531.mount: Deactivated successfully.
Sep 6 00:20:31.894447 env[1744]: time="2025-09-06T00:20:31.894386571Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:31.901828 env[1744]: time="2025-09-06T00:20:31.901776099Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:31.906154 env[1744]: time="2025-09-06T00:20:31.906107630Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:31.910167 env[1744]: time="2025-09-06T00:20:31.910128746Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:31.910858 env[1744]: time="2025-09-06T00:20:31.910826024Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 6 00:20:31.911373 env[1744]: time="2025-09-06T00:20:31.911352513Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 6 00:20:32.355239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939325836.mount: Deactivated successfully.
Sep 6 00:20:32.361035 env[1744]: time="2025-09-06T00:20:32.360991829Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:32.363472 env[1744]: time="2025-09-06T00:20:32.363436242Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:32.364884 env[1744]: time="2025-09-06T00:20:32.364854407Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:32.366623 env[1744]: time="2025-09-06T00:20:32.366591631Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:32.367225 env[1744]: time="2025-09-06T00:20:32.367198829Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 6 00:20:32.367823 env[1744]: time="2025-09-06T00:20:32.367803580Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 6 00:20:32.851332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3497601979.mount: Deactivated successfully.
Sep 6 00:20:35.329298 env[1744]: time="2025-09-06T00:20:35.329130406Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:35.332708 env[1744]: time="2025-09-06T00:20:35.332658461Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:35.336627 env[1744]: time="2025-09-06T00:20:35.336584760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:35.339424 env[1744]: time="2025-09-06T00:20:35.339384304Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:35.340305 env[1744]: time="2025-09-06T00:20:35.340265359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 6 00:20:35.419118 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 6 00:20:35.419402 systemd[1]: Stopped kubelet.service.
Sep 6 00:20:35.423953 systemd[1]: Starting kubelet.service...
Sep 6 00:20:36.090041 systemd[1]: Started kubelet.service.
Sep 6 00:20:36.173587 kubelet[2131]: E0906 00:20:36.173526 2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:20:36.175997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:20:36.176174 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:20:37.976088 systemd[1]: Stopped kubelet.service.
Sep 6 00:20:37.979089 systemd[1]: Starting kubelet.service...
Sep 6 00:20:38.015012 systemd[1]: Reloading.
Sep 6 00:20:38.087786 /usr/lib/systemd/system-generators/torcx-generator[2164]: time="2025-09-06T00:20:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:20:38.088245 /usr/lib/systemd/system-generators/torcx-generator[2164]: time="2025-09-06T00:20:38Z" level=info msg="torcx already run"
Sep 6 00:20:38.229465 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:20:38.229755 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:20:38.255687 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:20:38.399491 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 6 00:20:38.399618 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 6 00:20:38.399972 systemd[1]: Stopped kubelet.service.
Sep 6 00:20:38.402048 systemd[1]: Starting kubelet.service...
Sep 6 00:20:38.683964 systemd[1]: Started kubelet.service.
Sep 6 00:20:38.749643 kubelet[2225]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:20:38.749643 kubelet[2225]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 6 00:20:38.749643 kubelet[2225]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:20:38.750145 kubelet[2225]: I0906 00:20:38.749736 2225 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 6 00:20:38.756969 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 6 00:20:39.105484 kubelet[2225]: I0906 00:20:39.105434 2225 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 6 00:20:39.105484 kubelet[2225]: I0906 00:20:39.105468 2225 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 6 00:20:39.105854 kubelet[2225]: I0906 00:20:39.105829 2225 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 6 00:20:39.152273 kubelet[2225]: E0906 00:20:39.152235 2225 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.235:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.235:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:20:39.156997 kubelet[2225]: I0906 00:20:39.156963 2225 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 00:20:39.165321 kubelet[2225]: E0906 00:20:39.165268 2225 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 6 00:20:39.165321 kubelet[2225]: I0906 00:20:39.165310 2225 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 6 00:20:39.170653 kubelet[2225]: I0906 00:20:39.170623 2225 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 6 00:20:39.170811 kubelet[2225]: I0906 00:20:39.170758 2225 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 6 00:20:39.170954 kubelet[2225]: I0906 00:20:39.170916 2225 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 6 00:20:39.171176 kubelet[2225]: I0906 00:20:39.170949 2225 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-235","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 6 00:20:39.171314 kubelet[2225]: I0906 00:20:39.171188 2225 topology_manager.go:138] "Creating topology manager with none policy"
Sep 6 00:20:39.171314 kubelet[2225]: I0906 00:20:39.171203 2225 container_manager_linux.go:300] "Creating device plugin manager"
Sep 6 00:20:39.171399 kubelet[2225]: I0906 00:20:39.171338 2225 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:20:39.180846 kubelet[2225]: W0906 00:20:39.180634 2225 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-235&limit=500&resourceVersion=0": dial tcp 172.31.31.235:6443: connect: connection refused
Sep 6 00:20:39.180846 kubelet[2225]: E0906 00:20:39.180722 2225 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-235&limit=500&resourceVersion=0\": dial tcp 172.31.31.235:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:20:39.181073 kubelet[2225]: I0906 00:20:39.181056 2225 kubelet.go:408] "Attempting to sync node with API server"
Sep 6 00:20:39.181319 kubelet[2225]: I0906 00:20:39.181281 2225 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 6 00:20:39.181319 kubelet[2225]: I0906 00:20:39.181326 2225 kubelet.go:314] "Adding apiserver pod source"
Sep 6 00:20:39.181446 kubelet[2225]: I0906 00:20:39.181347 2225 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 6 00:20:39.193734 kubelet[2225]: W0906 00:20:39.193664 2225 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.235:6443: connect: connection refused
Sep 6 00:20:39.193873 kubelet[2225]: E0906 00:20:39.193780 2225 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.235:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:20:39.193932 kubelet[2225]: I0906 00:20:39.193910 2225 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 6 00:20:39.194420 kubelet[2225]: I0906 00:20:39.194327 2225 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 6 00:20:39.198029 kubelet[2225]: W0906 00:20:39.197991 2225 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 6 00:20:39.206347 kubelet[2225]: I0906 00:20:39.206310 2225 server.go:1274] "Started kubelet"
Sep 6 00:20:39.210528 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 6 00:20:39.210666 kubelet[2225]: I0906 00:20:39.209058 2225 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 6 00:20:39.218036 kubelet[2225]: I0906 00:20:39.217983 2225 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 6 00:20:39.218966 kubelet[2225]: I0906 00:20:39.218934 2225 server.go:449] "Adding debug handlers to kubelet server"
Sep 6 00:20:39.222642 kubelet[2225]: I0906 00:20:39.222600 2225 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 6 00:20:39.222864 kubelet[2225]: I0906 00:20:39.222843 2225 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 6 00:20:39.223120 kubelet[2225]: I0906 00:20:39.223091 2225 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 6 00:20:39.224505 kubelet[2225]: E0906 00:20:39.220669 2225 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.235:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.235:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-235.1862898c0b51edca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-235,UID:ip-172-31-31-235,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-235,},FirstTimestamp:2025-09-06 00:20:39.20626017 +0000 UTC m=+0.517942721,LastTimestamp:2025-09-06 00:20:39.20626017 +0000 UTC m=+0.517942721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-235,}"
Sep 6 00:20:39.227473 kubelet[2225]: I0906 00:20:39.227442 2225 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 6 00:20:39.228281 kubelet[2225]: E0906 00:20:39.227771 2225 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-235\" not found"
Sep 6 00:20:39.229820 kubelet[2225]: E0906 00:20:39.229792 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-235?timeout=10s\": dial tcp 172.31.31.235:6443: connect: connection refused" interval="200ms"
Sep 6 00:20:39.230212 kubelet[2225]: I0906 00:20:39.230196 2225 factory.go:221] Registration of the systemd container factory successfully
Sep 6 00:20:39.230384 kubelet[2225]: I0906 00:20:39.230370 2225 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 6 00:20:39.232662 kubelet[2225]: I0906 00:20:39.232643 2225 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 6 00:20:39.232747 kubelet[2225]: I0906 00:20:39.232703 2225 reconciler.go:26] "Reconciler: start to sync state"
Sep 6 00:20:39.233705 kubelet[2225]: I0906 00:20:39.233689 2225 factory.go:221] Registration of the containerd container factory successfully
Sep 6 00:20:39.241268 kubelet[2225]: W0906 00:20:39.241086 2225 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.235:6443: connect: connection refused
Sep 6 00:20:39.241268 kubelet[2225]: E0906 00:20:39.241271 2225 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.235:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:20:39.244863 kubelet[2225]: E0906 00:20:39.244829 2225 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 6 00:20:39.248497 kubelet[2225]: I0906 00:20:39.248453 2225 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 6 00:20:39.249859 kubelet[2225]: I0906 00:20:39.249829 2225 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 6 00:20:39.249859 kubelet[2225]: I0906 00:20:39.249854 2225 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 6 00:20:39.249980 kubelet[2225]: I0906 00:20:39.249874 2225 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 6 00:20:39.249980 kubelet[2225]: E0906 00:20:39.249911 2225 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 6 00:20:39.258032 kubelet[2225]: W0906 00:20:39.257995 2225 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.235:6443: connect: connection refused
Sep 6 00:20:39.258231 kubelet[2225]: E0906 00:20:39.258207 2225 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.235:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:20:39.259494 kubelet[2225]: I0906 00:20:39.259477 2225 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 6 00:20:39.259632 kubelet[2225]: I0906 00:20:39.259619 2225 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 6 00:20:39.259737 kubelet[2225]: I0906 00:20:39.259727 2225 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:20:39.262184 kubelet[2225]: I0906 00:20:39.262166 2225 policy_none.go:49] "None policy: Start"
Sep 6 00:20:39.263020 kubelet[2225]: I0906 00:20:39.262998 2225 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 6 00:20:39.263115 kubelet[2225]: I0906 00:20:39.263036 2225 state_mem.go:35] "Initializing new in-memory state store"
Sep 6 00:20:39.268804 systemd[1]: Created slice kubepods.slice.
Sep 6 00:20:39.273100 systemd[1]: Created slice kubepods-burstable.slice.
Sep 6 00:20:39.277025 systemd[1]: Created slice kubepods-besteffort.slice.
Sep 6 00:20:39.285668 kubelet[2225]: I0906 00:20:39.285642 2225 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 6 00:20:39.285959 kubelet[2225]: I0906 00:20:39.285947 2225 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 6 00:20:39.286061 kubelet[2225]: I0906 00:20:39.286029 2225 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 6 00:20:39.289459 kubelet[2225]: I0906 00:20:39.289434 2225 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 6 00:20:39.291648 kubelet[2225]: E0906 00:20:39.291593 2225 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-235\" not found"
Sep 6 00:20:39.384424 systemd[1]: Created slice kubepods-burstable-podda68a11632ba2277e6a9e48e2c30abe0.slice.
Sep 6 00:20:39.392498 kubelet[2225]: I0906 00:20:39.392465 2225 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-235"
Sep 6 00:20:39.393433 kubelet[2225]: E0906 00:20:39.393403 2225 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.235:6443/api/v1/nodes\": dial tcp 172.31.31.235:6443: connect: connection refused" node="ip-172-31-31-235"
Sep 6 00:20:39.395094 systemd[1]: Created slice kubepods-burstable-podf46181577502b0406d9d962623912ffa.slice.
Sep 6 00:20:39.401705 systemd[1]: Created slice kubepods-burstable-pod3149dc2a06aebd93d06a3b0445f31c55.slice.
Sep 6 00:20:39.430579 kubelet[2225]: E0906 00:20:39.430513 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-235?timeout=10s\": dial tcp 172.31.31.235:6443: connect: connection refused" interval="400ms"
Sep 6 00:20:39.433755 kubelet[2225]: I0906 00:20:39.433710 2225 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da68a11632ba2277e6a9e48e2c30abe0-ca-certs\") pod \"kube-apiserver-ip-172-31-31-235\" (UID: \"da68a11632ba2277e6a9e48e2c30abe0\") " pod="kube-system/kube-apiserver-ip-172-31-31-235"
Sep 6 00:20:39.433755 kubelet[2225]: I0906 00:20:39.433751 2225 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da68a11632ba2277e6a9e48e2c30abe0-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-235\" (UID: \"da68a11632ba2277e6a9e48e2c30abe0\") " pod="kube-system/kube-apiserver-ip-172-31-31-235"
Sep 6 00:20:39.433755 kubelet[2225]: I0906 00:20:39.433781 2225 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f46181577502b0406d9d962623912ffa-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-235\" (UID: \"f46181577502b0406d9d962623912ffa\") " pod="kube-system/kube-controller-manager-ip-172-31-31-235"
Sep 6 00:20:39.434002 kubelet[2225]: I0906 00:20:39.433796 2225 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f46181577502b0406d9d962623912ffa-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-235\" (UID: \"f46181577502b0406d9d962623912ffa\") " pod="kube-system/kube-controller-manager-ip-172-31-31-235"
Sep 6 00:20:39.434002 kubelet[2225]: I0906 00:20:39.433812 2225 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f46181577502b0406d9d962623912ffa-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-235\" (UID: \"f46181577502b0406d9d962623912ffa\") " pod="kube-system/kube-controller-manager-ip-172-31-31-235"
Sep 6 00:20:39.434002 kubelet[2225]: I0906 00:20:39.433829 2225 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f46181577502b0406d9d962623912ffa-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-235\" (UID: \"f46181577502b0406d9d962623912ffa\") " pod="kube-system/kube-controller-manager-ip-172-31-31-235"
Sep 6 00:20:39.434002 kubelet[2225]: I0906 00:20:39.433844 2225 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3149dc2a06aebd93d06a3b0445f31c55-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-235\" (UID: \"3149dc2a06aebd93d06a3b0445f31c55\") " pod="kube-system/kube-scheduler-ip-172-31-31-235"
Sep 6 00:20:39.434002 kubelet[2225]: I0906 00:20:39.433858 2225 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f46181577502b0406d9d962623912ffa-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-235\" (UID: \"f46181577502b0406d9d962623912ffa\") " pod="kube-system/kube-controller-manager-ip-172-31-31-235"
Sep 6 00:20:39.434141 kubelet[2225]: I0906 00:20:39.433874 2225 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da68a11632ba2277e6a9e48e2c30abe0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-235\" (UID: \"da68a11632ba2277e6a9e48e2c30abe0\") " pod="kube-system/kube-apiserver-ip-172-31-31-235"
Sep 6 00:20:39.595520 kubelet[2225]: I0906 00:20:39.595489 2225 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-235"
Sep 6 00:20:39.595898 kubelet[2225]: E0906 00:20:39.595865 2225 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.235:6443/api/v1/nodes\": dial tcp 172.31.31.235:6443: connect: connection refused" node="ip-172-31-31-235"
Sep 6 00:20:39.694671 env[1744]: time="2025-09-06T00:20:39.694308537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-235,Uid:da68a11632ba2277e6a9e48e2c30abe0,Namespace:kube-system,Attempt:0,}"
Sep 6 00:20:39.700252 env[1744]: time="2025-09-06T00:20:39.700200673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-235,Uid:f46181577502b0406d9d962623912ffa,Namespace:kube-system,Attempt:0,}"
Sep 6 00:20:39.705885 env[1744]: time="2025-09-06T00:20:39.705830213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-235,Uid:3149dc2a06aebd93d06a3b0445f31c55,Namespace:kube-system,Attempt:0,}"
Sep 6 00:20:39.831193 kubelet[2225]: E0906 00:20:39.831150 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-235?timeout=10s\": dial tcp 172.31.31.235:6443: connect: connection refused" interval="800ms"
Sep 6 00:20:39.998468 kubelet[2225]: I0906 00:20:39.998375 2225 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-235"
Sep 6 00:20:39.998949 kubelet[2225]: E0906 00:20:39.998918 2225 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.235:6443/api/v1/nodes\": dial tcp 172.31.31.235:6443: connect: connection refused" node="ip-172-31-31-235"
Sep 6 00:20:40.125012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3464807699.mount: Deactivated successfully.
Sep 6 00:20:40.133311 env[1744]: time="2025-09-06T00:20:40.133206734Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:40.134522 env[1744]: time="2025-09-06T00:20:40.134483705Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:40.135521 env[1744]: time="2025-09-06T00:20:40.135476248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:40.140403 env[1744]: time="2025-09-06T00:20:40.140354911Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:40.141279 env[1744]: time="2025-09-06T00:20:40.141189853Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:40.141949 env[1744]: time="2025-09-06T00:20:40.141920515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:40.143377 env[1744]: time="2025-09-06T00:20:40.143343937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:40.144157 env[1744]: time="2025-09-06T00:20:40.144130847Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:40.145574 env[1744]: time="2025-09-06T00:20:40.145534450Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:40.146527 env[1744]: time="2025-09-06T00:20:40.146499267Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:40.148430 env[1744]: time="2025-09-06T00:20:40.148384716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:40.150240 env[1744]: time="2025-09-06T00:20:40.150204379Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:40.171453 kubelet[2225]: W0906 00:20:40.171396 2225 reflector.go:561]
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.235:6443: connect: connection refused Sep 6 00:20:40.171628 kubelet[2225]: E0906 00:20:40.171459 2225 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.235:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:40.189777 env[1744]: time="2025-09-06T00:20:40.189675911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:20:40.190060 env[1744]: time="2025-09-06T00:20:40.190011038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:20:40.190410 env[1744]: time="2025-09-06T00:20:40.190370510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:20:40.190830 env[1744]: time="2025-09-06T00:20:40.190736282Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa6786fa3600d8cc84bba51d2b13115e0504996327a8b9413a374cac7873c226 pid=2274 runtime=io.containerd.runc.v2 Sep 6 00:20:40.200411 env[1744]: time="2025-09-06T00:20:40.199839624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:20:40.200411 env[1744]: time="2025-09-06T00:20:40.199918071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:20:40.200411 env[1744]: time="2025-09-06T00:20:40.199935025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:20:40.200411 env[1744]: time="2025-09-06T00:20:40.200244456Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b2d0d19fde953c2d2dd3029616679cb3bec7d828addcff23d75b50cc5798113 pid=2272 runtime=io.containerd.runc.v2 Sep 6 00:20:40.214001 env[1744]: time="2025-09-06T00:20:40.213903987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:20:40.214161 env[1744]: time="2025-09-06T00:20:40.214015413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:20:40.214161 env[1744]: time="2025-09-06T00:20:40.214047025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:20:40.214321 env[1744]: time="2025-09-06T00:20:40.214278750Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca3161cfcf52569aad4fb122921d66a46ebad52277e226c163020ec440d3d8db pid=2301 runtime=io.containerd.runc.v2 Sep 6 00:20:40.225785 systemd[1]: Started cri-containerd-8b2d0d19fde953c2d2dd3029616679cb3bec7d828addcff23d75b50cc5798113.scope. 
Sep 6 00:20:40.241248 kubelet[2225]: W0906 00:20:40.238318 2225 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.235:6443: connect: connection refused Sep 6 00:20:40.241248 kubelet[2225]: E0906 00:20:40.238373 2225 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.235:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:40.242734 systemd[1]: Started cri-containerd-fa6786fa3600d8cc84bba51d2b13115e0504996327a8b9413a374cac7873c226.scope. Sep 6 00:20:40.264363 systemd[1]: Started cri-containerd-ca3161cfcf52569aad4fb122921d66a46ebad52277e226c163020ec440d3d8db.scope. Sep 6 00:20:40.354343 env[1744]: time="2025-09-06T00:20:40.354299889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-235,Uid:f46181577502b0406d9d962623912ffa,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa6786fa3600d8cc84bba51d2b13115e0504996327a8b9413a374cac7873c226\"" Sep 6 00:20:40.362634 env[1744]: time="2025-09-06T00:20:40.362551017Z" level=info msg="CreateContainer within sandbox \"fa6786fa3600d8cc84bba51d2b13115e0504996327a8b9413a374cac7873c226\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:20:40.366893 env[1744]: time="2025-09-06T00:20:40.366849260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-235,Uid:3149dc2a06aebd93d06a3b0445f31c55,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b2d0d19fde953c2d2dd3029616679cb3bec7d828addcff23d75b50cc5798113\"" Sep 6 00:20:40.370920 env[1744]: time="2025-09-06T00:20:40.370874184Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-235,Uid:da68a11632ba2277e6a9e48e2c30abe0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca3161cfcf52569aad4fb122921d66a46ebad52277e226c163020ec440d3d8db\"" Sep 6 00:20:40.371837 env[1744]: time="2025-09-06T00:20:40.371395327Z" level=info msg="CreateContainer within sandbox \"8b2d0d19fde953c2d2dd3029616679cb3bec7d828addcff23d75b50cc5798113\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:20:40.374062 env[1744]: time="2025-09-06T00:20:40.374027804Z" level=info msg="CreateContainer within sandbox \"ca3161cfcf52569aad4fb122921d66a46ebad52277e226c163020ec440d3d8db\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:20:40.420517 env[1744]: time="2025-09-06T00:20:40.420473415Z" level=info msg="CreateContainer within sandbox \"8b2d0d19fde953c2d2dd3029616679cb3bec7d828addcff23d75b50cc5798113\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"be234b5c23d1b9318e15f20da7feba471e8060e6b6a56cd15ee0bebe8dd74363\"" Sep 6 00:20:40.421366 env[1744]: time="2025-09-06T00:20:40.421304975Z" level=info msg="CreateContainer within sandbox \"ca3161cfcf52569aad4fb122921d66a46ebad52277e226c163020ec440d3d8db\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f709770b127d00ed77854d2d2a51ef0c1766a933cee7925f3a0a1defa2848c0e\"" Sep 6 00:20:40.421580 env[1744]: time="2025-09-06T00:20:40.421538283Z" level=info msg="StartContainer for \"be234b5c23d1b9318e15f20da7feba471e8060e6b6a56cd15ee0bebe8dd74363\"" Sep 6 00:20:40.423909 env[1744]: time="2025-09-06T00:20:40.423873148Z" level=info msg="CreateContainer within sandbox \"fa6786fa3600d8cc84bba51d2b13115e0504996327a8b9413a374cac7873c226\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b959c228f30e586eb5d27705e39fba4abd76fb6819e9b5fb2f60a1bb6cc31162\"" Sep 6 00:20:40.425896 env[1744]: time="2025-09-06T00:20:40.425857061Z" level=info 
msg="StartContainer for \"f709770b127d00ed77854d2d2a51ef0c1766a933cee7925f3a0a1defa2848c0e\"" Sep 6 00:20:40.431403 kubelet[2225]: E0906 00:20:40.431275 2225 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.235:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.235:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-235.1862898c0b51edca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-235,UID:ip-172-31-31-235,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-235,},FirstTimestamp:2025-09-06 00:20:39.20626017 +0000 UTC m=+0.517942721,LastTimestamp:2025-09-06 00:20:39.20626017 +0000 UTC m=+0.517942721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-235,}" Sep 6 00:20:40.431891 env[1744]: time="2025-09-06T00:20:40.431855552Z" level=info msg="StartContainer for \"b959c228f30e586eb5d27705e39fba4abd76fb6819e9b5fb2f60a1bb6cc31162\"" Sep 6 00:20:40.453348 systemd[1]: Started cri-containerd-f709770b127d00ed77854d2d2a51ef0c1766a933cee7925f3a0a1defa2848c0e.scope. Sep 6 00:20:40.465451 systemd[1]: Started cri-containerd-be234b5c23d1b9318e15f20da7feba471e8060e6b6a56cd15ee0bebe8dd74363.scope. Sep 6 00:20:40.486913 systemd[1]: Started cri-containerd-b959c228f30e586eb5d27705e39fba4abd76fb6819e9b5fb2f60a1bb6cc31162.scope. 
Sep 6 00:20:40.550615 kubelet[2225]: W0906 00:20:40.550466 2225 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.235:6443: connect: connection refused Sep 6 00:20:40.550615 kubelet[2225]: E0906 00:20:40.550540 2225 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.235:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:40.557042 env[1744]: time="2025-09-06T00:20:40.556981051Z" level=info msg="StartContainer for \"f709770b127d00ed77854d2d2a51ef0c1766a933cee7925f3a0a1defa2848c0e\" returns successfully" Sep 6 00:20:40.571164 env[1744]: time="2025-09-06T00:20:40.571118722Z" level=info msg="StartContainer for \"b959c228f30e586eb5d27705e39fba4abd76fb6819e9b5fb2f60a1bb6cc31162\" returns successfully" Sep 6 00:20:40.599247 env[1744]: time="2025-09-06T00:20:40.599196208Z" level=info msg="StartContainer for \"be234b5c23d1b9318e15f20da7feba471e8060e6b6a56cd15ee0bebe8dd74363\" returns successfully" Sep 6 00:20:40.632180 kubelet[2225]: E0906 00:20:40.632105 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-235?timeout=10s\": dial tcp 172.31.31.235:6443: connect: connection refused" interval="1.6s" Sep 6 00:20:40.670328 kubelet[2225]: W0906 00:20:40.670241 2225 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-235&limit=500&resourceVersion=0": dial tcp 172.31.31.235:6443: connect: connection refused Sep 6 00:20:40.670509 kubelet[2225]: E0906 
00:20:40.670339 2225 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-235&limit=500&resourceVersion=0\": dial tcp 172.31.31.235:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:40.800813 kubelet[2225]: I0906 00:20:40.800714 2225 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-235" Sep 6 00:20:40.801273 kubelet[2225]: E0906 00:20:40.801062 2225 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.235:6443/api/v1/nodes\": dial tcp 172.31.31.235:6443: connect: connection refused" node="ip-172-31-31-235" Sep 6 00:20:41.248412 kubelet[2225]: E0906 00:20:41.248363 2225 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.235:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.235:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:42.233945 kubelet[2225]: E0906 00:20:42.233899 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-235?timeout=10s\": dial tcp 172.31.31.235:6443: connect: connection refused" interval="3.2s" Sep 6 00:20:42.402991 kubelet[2225]: I0906 00:20:42.402947 2225 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-235" Sep 6 00:20:42.403344 kubelet[2225]: E0906 00:20:42.403237 2225 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.235:6443/api/v1/nodes\": dial tcp 172.31.31.235:6443: connect: connection refused" node="ip-172-31-31-235" Sep 6 00:20:42.544849 
kubelet[2225]: W0906 00:20:42.544697 2225 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.235:6443: connect: connection refused Sep 6 00:20:42.544849 kubelet[2225]: E0906 00:20:42.544780 2225 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.235:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:42.729175 kubelet[2225]: W0906 00:20:42.729028 2225 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.235:6443: connect: connection refused Sep 6 00:20:42.729334 kubelet[2225]: E0906 00:20:42.729239 2225 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.235:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:42.788484 kubelet[2225]: W0906 00:20:42.788420 2225 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.235:6443: connect: connection refused Sep 6 00:20:42.788678 kubelet[2225]: E0906 00:20:42.788492 2225 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://172.31.31.235:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.235:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:44.838030 kubelet[2225]: E0906 00:20:44.837978 2225 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-31-235" not found Sep 6 00:20:45.184892 kubelet[2225]: I0906 00:20:45.184850 2225 apiserver.go:52] "Watching apiserver" Sep 6 00:20:45.207347 kubelet[2225]: E0906 00:20:45.207311 2225 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-31-235" not found Sep 6 00:20:45.233590 kubelet[2225]: I0906 00:20:45.233527 2225 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:20:45.437431 kubelet[2225]: E0906 00:20:45.437306 2225 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-235\" not found" node="ip-172-31-31-235" Sep 6 00:20:45.605634 kubelet[2225]: I0906 00:20:45.605602 2225 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-235" Sep 6 00:20:45.616123 kubelet[2225]: I0906 00:20:45.616082 2225 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-235" Sep 6 00:20:46.669309 systemd[1]: Reloading. 
Sep 6 00:20:46.776275 /usr/lib/systemd/system-generators/torcx-generator[2522]: time="2025-09-06T00:20:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:20:46.776316 /usr/lib/systemd/system-generators/torcx-generator[2522]: time="2025-09-06T00:20:46Z" level=info msg="torcx already run" Sep 6 00:20:46.863616 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:20:46.863641 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:20:46.884249 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:20:47.007998 kubelet[2225]: I0906 00:20:47.007892 2225 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:20:47.009377 systemd[1]: Stopping kubelet.service... Sep 6 00:20:47.031191 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:20:47.031425 systemd[1]: Stopped kubelet.service. Sep 6 00:20:47.033396 systemd[1]: Starting kubelet.service... Sep 6 00:20:48.559496 systemd[1]: Started kubelet.service. Sep 6 00:20:48.635337 kubelet[2579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:20:48.635886 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:20:48.636005 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:20:48.637232 kubelet[2579]: I0906 00:20:48.637172 2579 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:20:48.651123 kubelet[2579]: I0906 00:20:48.651082 2579 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:20:48.651604 kubelet[2579]: I0906 00:20:48.651296 2579 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:20:48.652608 kubelet[2579]: I0906 00:20:48.652530 2579 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:20:48.656712 kubelet[2579]: I0906 00:20:48.656681 2579 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 6 00:20:48.663746 kubelet[2579]: I0906 00:20:48.663711 2579 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:20:48.665992 sudo[2593]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:20:48.667245 sudo[2593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:20:48.667381 kubelet[2579]: E0906 00:20:48.667259 2579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:20:48.667381 kubelet[2579]: I0906 00:20:48.667283 2579 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:20:48.670166 kubelet[2579]: I0906 00:20:48.669753 2579 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:20:48.670166 kubelet[2579]: I0906 00:20:48.669863 2579 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:20:48.670529 kubelet[2579]: I0906 00:20:48.670466 2579 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:20:48.670705 kubelet[2579]: I0906 00:20:48.670508 2579 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-235","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPol
icyOptions":null,"CgroupVersion":2} Sep 6 00:20:48.670803 kubelet[2579]: I0906 00:20:48.670708 2579 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:20:48.670803 kubelet[2579]: I0906 00:20:48.670719 2579 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:20:48.670803 kubelet[2579]: I0906 00:20:48.670749 2579 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:20:48.670893 kubelet[2579]: I0906 00:20:48.670853 2579 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:20:48.670893 kubelet[2579]: I0906 00:20:48.670865 2579 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:20:48.671648 kubelet[2579]: I0906 00:20:48.671631 2579 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:20:48.671761 kubelet[2579]: I0906 00:20:48.671751 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:20:48.673769 kubelet[2579]: I0906 00:20:48.673750 2579 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:20:48.674393 kubelet[2579]: I0906 00:20:48.674378 2579 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:20:48.674910 kubelet[2579]: I0906 00:20:48.674898 2579 server.go:1274] "Started kubelet" Sep 6 00:20:48.703513 kubelet[2579]: I0906 00:20:48.703490 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:20:48.704234 kubelet[2579]: I0906 00:20:48.704188 2579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:20:48.704612 kubelet[2579]: I0906 00:20:48.704599 2579 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:20:48.718115 kubelet[2579]: I0906 00:20:48.702592 2579 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:20:48.718115 
kubelet[2579]: I0906 00:20:48.703893 2579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 6 00:20:48.718583 kubelet[2579]: I0906 00:20:48.718536 2579 server.go:449] "Adding debug handlers to kubelet server"
Sep 6 00:20:48.721049 kubelet[2579]: I0906 00:20:48.721020 2579 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 6 00:20:48.721353 kubelet[2579]: E0906 00:20:48.721324 2579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-235\" not found"
Sep 6 00:20:48.722588 kubelet[2579]: I0906 00:20:48.722007 2579 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 6 00:20:48.722588 kubelet[2579]: I0906 00:20:48.722142 2579 reconciler.go:26] "Reconciler: start to sync state"
Sep 6 00:20:48.749811 kubelet[2579]: I0906 00:20:48.747743 2579 factory.go:221] Registration of the containerd container factory successfully
Sep 6 00:20:48.749811 kubelet[2579]: I0906 00:20:48.747770 2579 factory.go:221] Registration of the systemd container factory successfully
Sep 6 00:20:48.749811 kubelet[2579]: I0906 00:20:48.747878 2579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 6 00:20:48.755851 kubelet[2579]: I0906 00:20:48.755809 2579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 6 00:20:48.757293 kubelet[2579]: I0906 00:20:48.757265 2579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 6 00:20:48.757451 kubelet[2579]: I0906 00:20:48.757439 2579 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 6 00:20:48.757534 kubelet[2579]: I0906 00:20:48.757526 2579 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 6 00:20:48.757711 kubelet[2579]: E0906 00:20:48.757692 2579 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 6 00:20:48.847762 kubelet[2579]: I0906 00:20:48.847737 2579 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 6 00:20:48.847963 kubelet[2579]: I0906 00:20:48.847948 2579 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 6 00:20:48.848084 kubelet[2579]: I0906 00:20:48.848072 2579 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:20:48.848355 kubelet[2579]: I0906 00:20:48.848343 2579 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 6 00:20:48.848473 kubelet[2579]: I0906 00:20:48.848451 2579 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 6 00:20:48.848543 kubelet[2579]: I0906 00:20:48.848537 2579 policy_none.go:49] "None policy: Start"
Sep 6 00:20:48.849398 kubelet[2579]: I0906 00:20:48.849385 2579 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 6 00:20:48.849534 kubelet[2579]: I0906 00:20:48.849524 2579 state_mem.go:35] "Initializing new in-memory state store"
Sep 6 00:20:48.849825 kubelet[2579]: I0906 00:20:48.849813 2579 state_mem.go:75] "Updated machine memory state"
Sep 6 00:20:48.860084 kubelet[2579]: E0906 00:20:48.860059 2579 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 6 00:20:48.860279 kubelet[2579]: I0906 00:20:48.860264 2579 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 6 00:20:48.860581 kubelet[2579]: I0906 00:20:48.860542 2579 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 6 00:20:48.860770 kubelet[2579]: I0906 00:20:48.860733 2579 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 6 00:20:48.865542 kubelet[2579]: I0906 00:20:48.865519 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 6 00:20:48.987132 kubelet[2579]: I0906 00:20:48.986879 2579 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-235"
Sep 6 00:20:48.996695 kubelet[2579]: I0906 00:20:48.996667 2579 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-31-235"
Sep 6 00:20:48.996933 kubelet[2579]: I0906 00:20:48.996921 2579 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-235"
Sep 6 00:20:49.069548 kubelet[2579]: E0906 00:20:49.069505 2579 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-31-235\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-235"
Sep 6 00:20:49.128339 kubelet[2579]: I0906 00:20:49.128243 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f46181577502b0406d9d962623912ffa-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-235\" (UID: \"f46181577502b0406d9d962623912ffa\") " pod="kube-system/kube-controller-manager-ip-172-31-31-235"
Sep 6 00:20:49.128585 kubelet[2579]: I0906 00:20:49.128536 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f46181577502b0406d9d962623912ffa-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-235\" (UID: \"f46181577502b0406d9d962623912ffa\") " pod="kube-system/kube-controller-manager-ip-172-31-31-235"
Sep 6 00:20:49.128688 kubelet[2579]: I0906 00:20:49.128588 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f46181577502b0406d9d962623912ffa-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-235\" (UID: \"f46181577502b0406d9d962623912ffa\") " pod="kube-system/kube-controller-manager-ip-172-31-31-235"
Sep 6 00:20:49.128688 kubelet[2579]: I0906 00:20:49.128617 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da68a11632ba2277e6a9e48e2c30abe0-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-235\" (UID: \"da68a11632ba2277e6a9e48e2c30abe0\") " pod="kube-system/kube-apiserver-ip-172-31-31-235"
Sep 6 00:20:49.128688 kubelet[2579]: I0906 00:20:49.128643 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f46181577502b0406d9d962623912ffa-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-235\" (UID: \"f46181577502b0406d9d962623912ffa\") " pod="kube-system/kube-controller-manager-ip-172-31-31-235"
Sep 6 00:20:49.128688 kubelet[2579]: I0906 00:20:49.128667 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3149dc2a06aebd93d06a3b0445f31c55-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-235\" (UID: \"3149dc2a06aebd93d06a3b0445f31c55\") " pod="kube-system/kube-scheduler-ip-172-31-31-235"
Sep 6 00:20:49.128855 kubelet[2579]: I0906 00:20:49.128690 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da68a11632ba2277e6a9e48e2c30abe0-ca-certs\") pod \"kube-apiserver-ip-172-31-31-235\" (UID: \"da68a11632ba2277e6a9e48e2c30abe0\") " pod="kube-system/kube-apiserver-ip-172-31-31-235"
Sep 6 00:20:49.128855 kubelet[2579]: I0906 00:20:49.128714 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da68a11632ba2277e6a9e48e2c30abe0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-235\" (UID: \"da68a11632ba2277e6a9e48e2c30abe0\") " pod="kube-system/kube-apiserver-ip-172-31-31-235"
Sep 6 00:20:49.128855 kubelet[2579]: I0906 00:20:49.128743 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f46181577502b0406d9d962623912ffa-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-235\" (UID: \"f46181577502b0406d9d962623912ffa\") " pod="kube-system/kube-controller-manager-ip-172-31-31-235"
Sep 6 00:20:49.510499 sudo[2593]: pam_unix(sudo:session): session closed for user root
Sep 6 00:20:49.672892 kubelet[2579]: I0906 00:20:49.672855 2579 apiserver.go:52] "Watching apiserver"
Sep 6 00:20:49.723115 kubelet[2579]: I0906 00:20:49.723079 2579 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 6 00:20:49.828255 kubelet[2579]: E0906 00:20:49.828140 2579 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-31-235\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-235"
Sep 6 00:20:49.864708 kubelet[2579]: I0906 00:20:49.864630 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-235" podStartSLOduration=0.864610025 podStartE2EDuration="864.610025ms" podCreationTimestamp="2025-09-06 00:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:20:49.854266948 +0000 UTC m=+1.280391677" watchObservedRunningTime="2025-09-06 00:20:49.864610025 +0000 UTC m=+1.290734753"
Sep 6 00:20:49.877750 kubelet[2579]: I0906 00:20:49.877664 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-235" podStartSLOduration=0.877623149 podStartE2EDuration="877.623149ms" podCreationTimestamp="2025-09-06 00:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:20:49.865618146 +0000 UTC m=+1.291742873" watchObservedRunningTime="2025-09-06 00:20:49.877623149 +0000 UTC m=+1.303747872"
Sep 6 00:20:49.878146 kubelet[2579]: I0906 00:20:49.878099 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-235" podStartSLOduration=3.878084731 podStartE2EDuration="3.878084731s" podCreationTimestamp="2025-09-06 00:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:20:49.875776784 +0000 UTC m=+1.301901511" watchObservedRunningTime="2025-09-06 00:20:49.878084731 +0000 UTC m=+1.304209459"
Sep 6 00:20:50.698205 amazon-ssm-agent[1714]: 2025-09-06 00:20:50 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Sep 6 00:20:52.213329 sudo[1966]: pam_unix(sudo:session): session closed for user root
Sep 6 00:20:52.236227 sshd[1963]: pam_unix(sshd:session): session closed for user core
Sep 6 00:20:52.239441 systemd[1]: sshd@4-172.31.31.235:22-139.178.68.195:37200.service: Deactivated successfully.
Sep 6 00:20:52.240180 systemd[1]: session-5.scope: Deactivated successfully.
Sep 6 00:20:52.240315 systemd[1]: session-5.scope: Consumed 5.294s CPU time.
Sep 6 00:20:52.241336 systemd-logind[1726]: Session 5 logged out. Waiting for processes to exit.
Sep 6 00:20:52.242253 systemd-logind[1726]: Removed session 5.
Sep 6 00:20:53.332878 kubelet[2579]: I0906 00:20:53.332840 2579 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 6 00:20:53.333422 env[1744]: time="2025-09-06T00:20:53.333278310Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 6 00:20:53.333646 kubelet[2579]: I0906 00:20:53.333439 2579 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 6 00:20:53.345310 update_engine[1730]: I0906 00:20:53.344622 1730 update_attempter.cc:509] Updating boot flags...
Sep 6 00:20:54.127596 systemd[1]: Created slice kubepods-besteffort-pod50681e4d_7d90_4078_a2ff_4ded5bf1f14c.slice.
Sep 6 00:20:54.149150 systemd[1]: Created slice kubepods-burstable-pode2ed4e30_d62f_4ef9_bfb1_73d588563199.slice.
Sep 6 00:20:54.163722 kubelet[2579]: I0906 00:20:54.163680 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-lib-modules\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.163899 kubelet[2579]: I0906 00:20:54.163729 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-host-proc-sys-kernel\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.163899 kubelet[2579]: I0906 00:20:54.163753 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50681e4d-7d90-4078-a2ff-4ded5bf1f14c-xtables-lock\") pod \"kube-proxy-t9x79\" (UID: \"50681e4d-7d90-4078-a2ff-4ded5bf1f14c\") " pod="kube-system/kube-proxy-t9x79"
Sep 6 00:20:54.163899 kubelet[2579]: I0906 00:20:54.163774 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cni-path\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.163899 kubelet[2579]: I0906 00:20:54.163796 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-etc-cni-netd\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.163899 kubelet[2579]: I0906 00:20:54.163819 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50681e4d-7d90-4078-a2ff-4ded5bf1f14c-lib-modules\") pod \"kube-proxy-t9x79\" (UID: \"50681e4d-7d90-4078-a2ff-4ded5bf1f14c\") " pod="kube-system/kube-proxy-t9x79"
Sep 6 00:20:54.164191 kubelet[2579]: I0906 00:20:54.163844 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2clb7\" (UniqueName: \"kubernetes.io/projected/50681e4d-7d90-4078-a2ff-4ded5bf1f14c-kube-api-access-2clb7\") pod \"kube-proxy-t9x79\" (UID: \"50681e4d-7d90-4078-a2ff-4ded5bf1f14c\") " pod="kube-system/kube-proxy-t9x79"
Sep 6 00:20:54.164191 kubelet[2579]: I0906 00:20:54.163888 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-hostproc\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.164191 kubelet[2579]: I0906 00:20:54.163911 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cilium-cgroup\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.164191 kubelet[2579]: I0906 00:20:54.163933 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-host-proc-sys-net\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.164191 kubelet[2579]: I0906 00:20:54.163958 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g44bk\" (UniqueName: \"kubernetes.io/projected/e2ed4e30-d62f-4ef9-bfb1-73d588563199-kube-api-access-g44bk\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.164403 kubelet[2579]: I0906 00:20:54.163981 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50681e4d-7d90-4078-a2ff-4ded5bf1f14c-kube-proxy\") pod \"kube-proxy-t9x79\" (UID: \"50681e4d-7d90-4078-a2ff-4ded5bf1f14c\") " pod="kube-system/kube-proxy-t9x79"
Sep 6 00:20:54.164403 kubelet[2579]: I0906 00:20:54.164006 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cilium-config-path\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.164403 kubelet[2579]: I0906 00:20:54.164033 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2ed4e30-d62f-4ef9-bfb1-73d588563199-hubble-tls\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.164403 kubelet[2579]: I0906 00:20:54.164058 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cilium-run\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.164403 kubelet[2579]: I0906 00:20:54.164082 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2ed4e30-d62f-4ef9-bfb1-73d588563199-clustermesh-secrets\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.164403 kubelet[2579]: I0906 00:20:54.164107 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-bpf-maps\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.164690 kubelet[2579]: I0906 00:20:54.164131 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-xtables-lock\") pod \"cilium-66srn\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") " pod="kube-system/cilium-66srn"
Sep 6 00:20:54.265305 kubelet[2579]: I0906 00:20:54.265263 2579 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 6 00:20:54.278183 kubelet[2579]: E0906 00:20:54.278158 2579 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 6 00:20:54.280911 kubelet[2579]: E0906 00:20:54.280892 2579 projected.go:194] Error preparing data for projected volume kube-api-access-g44bk for pod kube-system/cilium-66srn: configmap "kube-root-ca.crt" not found
Sep 6 00:20:54.281418 kubelet[2579]: E0906 00:20:54.281397 2579 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 6 00:20:54.281646 kubelet[2579]: E0906 00:20:54.281634 2579 projected.go:194] Error preparing data for projected volume kube-api-access-2clb7 for pod kube-system/kube-proxy-t9x79: configmap "kube-root-ca.crt" not found
Sep 6 00:20:54.281824 kubelet[2579]: E0906 00:20:54.281796 2579 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2ed4e30-d62f-4ef9-bfb1-73d588563199-kube-api-access-g44bk podName:e2ed4e30-d62f-4ef9-bfb1-73d588563199 nodeName:}" failed. No retries permitted until 2025-09-06 00:20:54.781057331 +0000 UTC m=+6.207182056 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g44bk" (UniqueName: "kubernetes.io/projected/e2ed4e30-d62f-4ef9-bfb1-73d588563199-kube-api-access-g44bk") pod "cilium-66srn" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199") : configmap "kube-root-ca.crt" not found
Sep 6 00:20:54.281980 kubelet[2579]: E0906 00:20:54.281971 2579 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50681e4d-7d90-4078-a2ff-4ded5bf1f14c-kube-api-access-2clb7 podName:50681e4d-7d90-4078-a2ff-4ded5bf1f14c nodeName:}" failed. No retries permitted until 2025-09-06 00:20:54.781955786 +0000 UTC m=+6.208080491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2clb7" (UniqueName: "kubernetes.io/projected/50681e4d-7d90-4078-a2ff-4ded5bf1f14c-kube-api-access-2clb7") pod "kube-proxy-t9x79" (UID: "50681e4d-7d90-4078-a2ff-4ded5bf1f14c") : configmap "kube-root-ca.crt" not found
Sep 6 00:20:54.446226 systemd[1]: Created slice kubepods-besteffort-podce359e0c_f76e_426c_9891_786986f206a3.slice.
Sep 6 00:20:54.465900 kubelet[2579]: I0906 00:20:54.465867 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khh7j\" (UniqueName: \"kubernetes.io/projected/ce359e0c-f76e-426c-9891-786986f206a3-kube-api-access-khh7j\") pod \"cilium-operator-5d85765b45-c8kb5\" (UID: \"ce359e0c-f76e-426c-9891-786986f206a3\") " pod="kube-system/cilium-operator-5d85765b45-c8kb5"
Sep 6 00:20:54.466430 kubelet[2579]: I0906 00:20:54.466394 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce359e0c-f76e-426c-9891-786986f206a3-cilium-config-path\") pod \"cilium-operator-5d85765b45-c8kb5\" (UID: \"ce359e0c-f76e-426c-9891-786986f206a3\") " pod="kube-system/cilium-operator-5d85765b45-c8kb5"
Sep 6 00:20:54.754030 env[1744]: time="2025-09-06T00:20:54.753537936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-c8kb5,Uid:ce359e0c-f76e-426c-9891-786986f206a3,Namespace:kube-system,Attempt:0,}"
Sep 6 00:20:54.781341 env[1744]: time="2025-09-06T00:20:54.781076998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:20:54.781341 env[1744]: time="2025-09-06T00:20:54.781164244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:20:54.781341 env[1744]: time="2025-09-06T00:20:54.781175413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:20:54.781592 env[1744]: time="2025-09-06T00:20:54.781404314Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0f5bb644149611f9fc4130f6b2131c4ab87b026ef6857250d8b310fb2859c08 pid=2758 runtime=io.containerd.runc.v2
Sep 6 00:20:54.799086 systemd[1]: Started cri-containerd-c0f5bb644149611f9fc4130f6b2131c4ab87b026ef6857250d8b310fb2859c08.scope.
Sep 6 00:20:54.851070 env[1744]: time="2025-09-06T00:20:54.850531953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-c8kb5,Uid:ce359e0c-f76e-426c-9891-786986f206a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0f5bb644149611f9fc4130f6b2131c4ab87b026ef6857250d8b310fb2859c08\""
Sep 6 00:20:54.854392 env[1744]: time="2025-09-06T00:20:54.852888134Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 6 00:20:55.050935 env[1744]: time="2025-09-06T00:20:55.050810229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t9x79,Uid:50681e4d-7d90-4078-a2ff-4ded5bf1f14c,Namespace:kube-system,Attempt:0,}"
Sep 6 00:20:55.058742 env[1744]: time="2025-09-06T00:20:55.058693909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-66srn,Uid:e2ed4e30-d62f-4ef9-bfb1-73d588563199,Namespace:kube-system,Attempt:0,}"
Sep 6 00:20:55.091126 env[1744]: time="2025-09-06T00:20:55.091057255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:20:55.091388 env[1744]: time="2025-09-06T00:20:55.091354473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:20:55.091512 env[1744]: time="2025-09-06T00:20:55.091484166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:20:55.100003 env[1744]: time="2025-09-06T00:20:55.094347695Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7992c0a1b3cafbfd8f7cee72c7a12909e6a0c9d6f8f1e1fb448cee9d87f3dab6 pid=2802 runtime=io.containerd.runc.v2
Sep 6 00:20:55.102399 env[1744]: time="2025-09-06T00:20:55.094644398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:20:55.102399 env[1744]: time="2025-09-06T00:20:55.094687969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:20:55.102399 env[1744]: time="2025-09-06T00:20:55.094703658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:20:55.102399 env[1744]: time="2025-09-06T00:20:55.094931713Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6 pid=2818 runtime=io.containerd.runc.v2
Sep 6 00:20:55.123161 systemd[1]: Started cri-containerd-7992c0a1b3cafbfd8f7cee72c7a12909e6a0c9d6f8f1e1fb448cee9d87f3dab6.scope.
Sep 6 00:20:55.140911 systemd[1]: Started cri-containerd-8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6.scope.
Sep 6 00:20:55.190590 env[1744]: time="2025-09-06T00:20:55.190510075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t9x79,Uid:50681e4d-7d90-4078-a2ff-4ded5bf1f14c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7992c0a1b3cafbfd8f7cee72c7a12909e6a0c9d6f8f1e1fb448cee9d87f3dab6\""
Sep 6 00:20:55.197932 env[1744]: time="2025-09-06T00:20:55.197885830Z" level=info msg="CreateContainer within sandbox \"7992c0a1b3cafbfd8f7cee72c7a12909e6a0c9d6f8f1e1fb448cee9d87f3dab6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 6 00:20:55.207721 env[1744]: time="2025-09-06T00:20:55.207633914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-66srn,Uid:e2ed4e30-d62f-4ef9-bfb1-73d588563199,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\""
Sep 6 00:20:55.262961 env[1744]: time="2025-09-06T00:20:55.262898406Z" level=info msg="CreateContainer within sandbox \"7992c0a1b3cafbfd8f7cee72c7a12909e6a0c9d6f8f1e1fb448cee9d87f3dab6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6ea7251987df6a8bec207ac7cf60f253505a95a17ad3830fc63dcb927d6e9082\""
Sep 6 00:20:55.265296 env[1744]: time="2025-09-06T00:20:55.265206573Z" level=info msg="StartContainer for \"6ea7251987df6a8bec207ac7cf60f253505a95a17ad3830fc63dcb927d6e9082\""
Sep 6 00:20:55.306956 systemd[1]: run-containerd-runc-k8s.io-6ea7251987df6a8bec207ac7cf60f253505a95a17ad3830fc63dcb927d6e9082-runc.K6o0kE.mount: Deactivated successfully.
Sep 6 00:20:55.314998 systemd[1]: Started cri-containerd-6ea7251987df6a8bec207ac7cf60f253505a95a17ad3830fc63dcb927d6e9082.scope.
Sep 6 00:20:55.358181 env[1744]: time="2025-09-06T00:20:55.358144236Z" level=info msg="StartContainer for \"6ea7251987df6a8bec207ac7cf60f253505a95a17ad3830fc63dcb927d6e9082\" returns successfully"
Sep 6 00:20:56.278497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206744261.mount: Deactivated successfully.
Sep 6 00:20:58.878993 env[1744]: time="2025-09-06T00:20:58.878924192Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:58.883265 env[1744]: time="2025-09-06T00:20:58.883213684Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:58.886445 env[1744]: time="2025-09-06T00:20:58.886400747Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:20:58.886944 env[1744]: time="2025-09-06T00:20:58.886908410Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 6 00:20:58.890836 env[1744]: time="2025-09-06T00:20:58.890385394Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 6 00:20:58.891303 env[1744]: time="2025-09-06T00:20:58.891276662Z" level=info msg="CreateContainer within sandbox \"c0f5bb644149611f9fc4130f6b2131c4ab87b026ef6857250d8b310fb2859c08\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 6 00:20:58.910828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1797054843.mount: Deactivated successfully.
Sep 6 00:20:58.917021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276933727.mount: Deactivated successfully.
Sep 6 00:20:58.931847 env[1744]: time="2025-09-06T00:20:58.931784403Z" level=info msg="CreateContainer within sandbox \"c0f5bb644149611f9fc4130f6b2131c4ab87b026ef6857250d8b310fb2859c08\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\""
Sep 6 00:20:58.934382 env[1744]: time="2025-09-06T00:20:58.934235505Z" level=info msg="StartContainer for \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\""
Sep 6 00:20:58.959139 systemd[1]: Started cri-containerd-6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d.scope.
Sep 6 00:20:59.002320 env[1744]: time="2025-09-06T00:20:59.002260134Z" level=info msg="StartContainer for \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\" returns successfully"
Sep 6 00:20:59.586638 kubelet[2579]: I0906 00:20:59.586588 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t9x79" podStartSLOduration=5.5865683619999995 podStartE2EDuration="5.586568362s" podCreationTimestamp="2025-09-06 00:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:20:55.839622713 +0000 UTC m=+7.265747441" watchObservedRunningTime="2025-09-06 00:20:59.586568362 +0000 UTC m=+11.012693081"
Sep 6 00:20:59.855519 kubelet[2579]: I0906 00:20:59.855431 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-c8kb5" podStartSLOduration=1.8190384929999999 podStartE2EDuration="5.855409562s" podCreationTimestamp="2025-09-06 00:20:54 +0000 UTC" firstStartedPulling="2025-09-06 00:20:54.851988766 +0000 UTC m=+6.278113485" lastFinishedPulling="2025-09-06 00:20:58.888359836 +0000 UTC m=+10.314484554" observedRunningTime="2025-09-06 00:20:59.853368819 +0000 UTC m=+11.279493546" watchObservedRunningTime="2025-09-06 00:20:59.855409562 +0000 UTC m=+11.281534292"
Sep 6 00:21:05.706910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1861356399.mount: Deactivated successfully.
Sep 6 00:21:08.734986 env[1744]: time="2025-09-06T00:21:08.734924078Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:08.741906 env[1744]: time="2025-09-06T00:21:08.741810361Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:08.745475 env[1744]: time="2025-09-06T00:21:08.745422455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:08.746082 env[1744]: time="2025-09-06T00:21:08.746048116Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 6 00:21:08.750533 env[1744]: time="2025-09-06T00:21:08.750490795Z" level=info msg="CreateContainer within sandbox \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:21:08.771084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3802075236.mount: Deactivated successfully.
Sep 6 00:21:08.781623 env[1744]: time="2025-09-06T00:21:08.781547185Z" level=info msg="CreateContainer within sandbox \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd\""
Sep 6 00:21:08.782672 env[1744]: time="2025-09-06T00:21:08.782401383Z" level=info msg="StartContainer for \"6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd\""
Sep 6 00:21:08.820894 systemd[1]: Started cri-containerd-6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd.scope.
Sep 6 00:21:08.864589 env[1744]: time="2025-09-06T00:21:08.861465174Z" level=info msg="StartContainer for \"6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd\" returns successfully"
Sep 6 00:21:08.872634 systemd[1]: cri-containerd-6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd.scope: Deactivated successfully.
Sep 6 00:21:08.968481 env[1744]: time="2025-09-06T00:21:08.968404470Z" level=info msg="shim disconnected" id=6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd
Sep 6 00:21:08.969150 env[1744]: time="2025-09-06T00:21:08.969121330Z" level=warning msg="cleaning up after shim disconnected" id=6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd namespace=k8s.io
Sep 6 00:21:08.969307 env[1744]: time="2025-09-06T00:21:08.969278895Z" level=info msg="cleaning up dead shim"
Sep 6 00:21:08.978861 env[1744]: time="2025-09-06T00:21:08.978812222Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:21:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3127 runtime=io.containerd.runc.v2\n"
Sep 6 00:21:09.768819 systemd[1]: run-containerd-runc-k8s.io-6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd-runc.E2Hhxc.mount: Deactivated successfully.
Sep 6 00:21:09.768910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd-rootfs.mount: Deactivated successfully.
Sep 6 00:21:09.893397 env[1744]: time="2025-09-06T00:21:09.893354404Z" level=info msg="CreateContainer within sandbox \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:21:09.911359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2371579810.mount: Deactivated successfully.
Sep 6 00:21:09.921766 env[1744]: time="2025-09-06T00:21:09.921459173Z" level=info msg="CreateContainer within sandbox \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b\""
Sep 6 00:21:09.923113 env[1744]: time="2025-09-06T00:21:09.923071715Z" level=info msg="StartContainer for \"64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b\""
Sep 6 00:21:09.951871 systemd[1]: Started cri-containerd-64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b.scope.
Sep 6 00:21:09.983148 env[1744]: time="2025-09-06T00:21:09.983092628Z" level=info msg="StartContainer for \"64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b\" returns successfully"
Sep 6 00:21:09.998436 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:21:09.998883 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 00:21:09.999234 systemd[1]: Stopping systemd-sysctl.service...
Sep 6 00:21:10.001459 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:21:10.004545 systemd[1]: cri-containerd-64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b.scope: Deactivated successfully.
Sep 6 00:21:10.018877 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:21:10.045640 env[1744]: time="2025-09-06T00:21:10.045597373Z" level=info msg="shim disconnected" id=64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b Sep 6 00:21:10.045872 env[1744]: time="2025-09-06T00:21:10.045661970Z" level=warning msg="cleaning up after shim disconnected" id=64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b namespace=k8s.io Sep 6 00:21:10.045872 env[1744]: time="2025-09-06T00:21:10.045683976Z" level=info msg="cleaning up dead shim" Sep 6 00:21:10.058087 env[1744]: time="2025-09-06T00:21:10.058037045Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:21:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3190 runtime=io.containerd.runc.v2\n" Sep 6 00:21:10.769622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b-rootfs.mount: Deactivated successfully. Sep 6 00:21:10.894114 env[1744]: time="2025-09-06T00:21:10.894064590Z" level=info msg="CreateContainer within sandbox \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:21:10.932766 env[1744]: time="2025-09-06T00:21:10.932696682Z" level=info msg="CreateContainer within sandbox \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa\"" Sep 6 00:21:10.933728 env[1744]: time="2025-09-06T00:21:10.933672713Z" level=info msg="StartContainer for \"d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa\"" Sep 6 00:21:10.966062 systemd[1]: Started cri-containerd-d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa.scope. 
Sep 6 00:21:11.002756 env[1744]: time="2025-09-06T00:21:11.002701960Z" level=info msg="StartContainer for \"d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa\" returns successfully" Sep 6 00:21:11.015220 systemd[1]: cri-containerd-d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa.scope: Deactivated successfully. Sep 6 00:21:11.054715 env[1744]: time="2025-09-06T00:21:11.054225020Z" level=info msg="shim disconnected" id=d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa Sep 6 00:21:11.054715 env[1744]: time="2025-09-06T00:21:11.054271340Z" level=warning msg="cleaning up after shim disconnected" id=d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa namespace=k8s.io Sep 6 00:21:11.054715 env[1744]: time="2025-09-06T00:21:11.054281136Z" level=info msg="cleaning up dead shim" Sep 6 00:21:11.063265 env[1744]: time="2025-09-06T00:21:11.063194335Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:21:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3249 runtime=io.containerd.runc.v2\n" Sep 6 00:21:11.769207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa-rootfs.mount: Deactivated successfully. Sep 6 00:21:11.897593 env[1744]: time="2025-09-06T00:21:11.897335057Z" level=info msg="CreateContainer within sandbox \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:21:11.931855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1239795293.mount: Deactivated successfully. 
Sep 6 00:21:11.944697 env[1744]: time="2025-09-06T00:21:11.944645682Z" level=info msg="CreateContainer within sandbox \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf\"" Sep 6 00:21:11.946580 env[1744]: time="2025-09-06T00:21:11.946520291Z" level=info msg="StartContainer for \"498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf\"" Sep 6 00:21:11.972194 systemd[1]: Started cri-containerd-498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf.scope. Sep 6 00:21:12.005498 systemd[1]: cri-containerd-498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf.scope: Deactivated successfully. Sep 6 00:21:12.009759 env[1744]: time="2025-09-06T00:21:12.009664514Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2ed4e30_d62f_4ef9_bfb1_73d588563199.slice/cri-containerd-498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf.scope/memory.events\": no such file or directory" Sep 6 00:21:12.011115 env[1744]: time="2025-09-06T00:21:12.011059143Z" level=info msg="StartContainer for \"498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf\" returns successfully" Sep 6 00:21:12.043671 env[1744]: time="2025-09-06T00:21:12.043158482Z" level=info msg="shim disconnected" id=498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf Sep 6 00:21:12.043671 env[1744]: time="2025-09-06T00:21:12.043204530Z" level=warning msg="cleaning up after shim disconnected" id=498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf namespace=k8s.io Sep 6 00:21:12.043671 env[1744]: time="2025-09-06T00:21:12.043215990Z" level=info msg="cleaning up dead shim" Sep 6 00:21:12.052632 env[1744]: time="2025-09-06T00:21:12.052585377Z" level=warning 
msg="cleanup warnings time=\"2025-09-06T00:21:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3307 runtime=io.containerd.runc.v2\n" Sep 6 00:21:12.769314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf-rootfs.mount: Deactivated successfully. Sep 6 00:21:12.901793 env[1744]: time="2025-09-06T00:21:12.901586828Z" level=info msg="CreateContainer within sandbox \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:21:12.933451 env[1744]: time="2025-09-06T00:21:12.933395995Z" level=info msg="CreateContainer within sandbox \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\"" Sep 6 00:21:12.934150 env[1744]: time="2025-09-06T00:21:12.934121689Z" level=info msg="StartContainer for \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\"" Sep 6 00:21:12.961716 systemd[1]: Started cri-containerd-df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59.scope. Sep 6 00:21:13.006455 env[1744]: time="2025-09-06T00:21:13.006347183Z" level=info msg="StartContainer for \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\" returns successfully" Sep 6 00:21:13.136202 kubelet[2579]: I0906 00:21:13.136169 2579 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 00:21:13.169248 systemd[1]: Created slice kubepods-burstable-pod8e47694c_d71e_45a8_9a63_8b79bca7a6c3.slice. Sep 6 00:21:13.228514 systemd[1]: Created slice kubepods-burstable-pod3b223213_e560_4a42_a568_3141137d58e0.slice. 
Sep 6 00:21:13.246480 kubelet[2579]: I0906 00:21:13.246411 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e47694c-d71e-45a8-9a63-8b79bca7a6c3-config-volume\") pod \"coredns-7c65d6cfc9-wnfw2\" (UID: \"8e47694c-d71e-45a8-9a63-8b79bca7a6c3\") " pod="kube-system/coredns-7c65d6cfc9-wnfw2" Sep 6 00:21:13.246480 kubelet[2579]: I0906 00:21:13.246465 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nsb9\" (UniqueName: \"kubernetes.io/projected/8e47694c-d71e-45a8-9a63-8b79bca7a6c3-kube-api-access-7nsb9\") pod \"coredns-7c65d6cfc9-wnfw2\" (UID: \"8e47694c-d71e-45a8-9a63-8b79bca7a6c3\") " pod="kube-system/coredns-7c65d6cfc9-wnfw2" Sep 6 00:21:13.346951 kubelet[2579]: I0906 00:21:13.346903 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt5vt\" (UniqueName: \"kubernetes.io/projected/3b223213-e560-4a42-a568-3141137d58e0-kube-api-access-zt5vt\") pod \"coredns-7c65d6cfc9-vswjf\" (UID: \"3b223213-e560-4a42-a568-3141137d58e0\") " pod="kube-system/coredns-7c65d6cfc9-vswjf" Sep 6 00:21:13.347139 kubelet[2579]: I0906 00:21:13.346986 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b223213-e560-4a42-a568-3141137d58e0-config-volume\") pod \"coredns-7c65d6cfc9-vswjf\" (UID: \"3b223213-e560-4a42-a568-3141137d58e0\") " pod="kube-system/coredns-7c65d6cfc9-vswjf" Sep 6 00:21:13.474314 env[1744]: time="2025-09-06T00:21:13.474193256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wnfw2,Uid:8e47694c-d71e-45a8-9a63-8b79bca7a6c3,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:13.534027 env[1744]: time="2025-09-06T00:21:13.533982951Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-vswjf,Uid:3b223213-e560-4a42-a568-3141137d58e0,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:31.985893 systemd[1]: Started sshd@5-172.31.31.235:22-139.178.68.195:40028.service. Sep 6 00:21:32.164303 sshd[3473]: Accepted publickey for core from 139.178.68.195 port 40028 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:21:32.166439 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:32.172113 systemd[1]: Started session-6.scope. Sep 6 00:21:32.173214 systemd-logind[1726]: New session 6 of user core. Sep 6 00:21:32.484298 sshd[3473]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:32.487146 systemd[1]: sshd@5-172.31.31.235:22-139.178.68.195:40028.service: Deactivated successfully. Sep 6 00:21:32.487908 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:21:32.488539 systemd-logind[1726]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:21:32.489583 systemd-logind[1726]: Removed session 6. Sep 6 00:21:37.309454 systemd-networkd[1466]: cilium_host: Link UP Sep 6 00:21:37.311218 systemd-networkd[1466]: cilium_net: Link UP Sep 6 00:21:37.312647 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:21:37.311226 systemd-networkd[1466]: cilium_net: Gained carrier Sep 6 00:21:37.311415 systemd-networkd[1466]: cilium_host: Gained carrier Sep 6 00:21:37.313175 systemd-networkd[1466]: cilium_host: Gained IPv6LL Sep 6 00:21:37.314277 (udev-worker)[3486]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:21:37.316863 (udev-worker)[3488]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:21:37.456386 systemd-networkd[1466]: cilium_vxlan: Link UP Sep 6 00:21:37.456396 systemd-networkd[1466]: cilium_vxlan: Gained carrier Sep 6 00:21:37.510146 systemd[1]: Started sshd@6-172.31.31.235:22-139.178.68.195:40040.service. 
Sep 6 00:21:37.653782 systemd-networkd[1466]: cilium_net: Gained IPv6LL Sep 6 00:21:37.689761 sshd[3573]: Accepted publickey for core from 139.178.68.195 port 40040 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:21:37.692820 sshd[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:37.700759 systemd[1]: Started session-7.scope. Sep 6 00:21:37.701939 systemd-logind[1726]: New session 7 of user core. Sep 6 00:21:38.027847 sshd[3573]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:38.032083 systemd-logind[1726]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:21:38.032325 systemd[1]: sshd@6-172.31.31.235:22-139.178.68.195:40040.service: Deactivated successfully. Sep 6 00:21:38.033431 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:21:38.034907 systemd-logind[1726]: Removed session 7. Sep 6 00:21:38.102584 kernel: NET: Registered PF_ALG protocol family Sep 6 00:21:38.766791 systemd-networkd[1466]: cilium_vxlan: Gained IPv6LL Sep 6 00:21:38.839270 (udev-worker)[3497]: Network interface NamePolicy= disabled on kernel command line. 
Sep 6 00:21:38.856015 systemd-networkd[1466]: lxc_health: Link UP Sep 6 00:21:38.865015 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:21:38.864683 systemd-networkd[1466]: lxc_health: Gained carrier Sep 6 00:21:39.083523 systemd-networkd[1466]: lxcda8fc6619838: Link UP Sep 6 00:21:39.103664 kernel: eth0: renamed from tmp2be18 Sep 6 00:21:39.114882 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcda8fc6619838: link becomes ready Sep 6 00:21:39.115213 systemd-networkd[1466]: lxcda8fc6619838: Gained carrier Sep 6 00:21:39.142328 kubelet[2579]: I0906 00:21:39.141856 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-66srn" podStartSLOduration=31.603827117 podStartE2EDuration="45.14182665s" podCreationTimestamp="2025-09-06 00:20:54 +0000 UTC" firstStartedPulling="2025-09-06 00:20:55.209311166 +0000 UTC m=+6.635435875" lastFinishedPulling="2025-09-06 00:21:08.747310689 +0000 UTC m=+20.173435408" observedRunningTime="2025-09-06 00:21:13.920473105 +0000 UTC m=+25.346597831" watchObservedRunningTime="2025-09-06 00:21:39.14182665 +0000 UTC m=+50.567951384" Sep 6 00:21:39.143383 systemd-networkd[1466]: lxce35ddc158a1b: Link UP Sep 6 00:21:39.151587 kernel: eth0: renamed from tmp0080f Sep 6 00:21:39.169107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce35ddc158a1b: link becomes ready Sep 6 00:21:39.168862 systemd-networkd[1466]: lxce35ddc158a1b: Gained carrier Sep 6 00:21:40.494902 systemd-networkd[1466]: lxce35ddc158a1b: Gained IPv6LL Sep 6 00:21:40.621698 systemd-networkd[1466]: lxcda8fc6619838: Gained IPv6LL Sep 6 00:21:40.622057 systemd-networkd[1466]: lxc_health: Gained IPv6LL Sep 6 00:21:43.058450 systemd[1]: Started sshd@7-172.31.31.235:22-139.178.68.195:56282.service. 
Sep 6 00:21:43.243200 sshd[3867]: Accepted publickey for core from 139.178.68.195 port 56282 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:21:43.246094 sshd[3867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:43.253847 systemd[1]: Started session-8.scope. Sep 6 00:21:43.255630 systemd-logind[1726]: New session 8 of user core. Sep 6 00:21:43.627606 sshd[3867]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:43.630688 systemd[1]: sshd@7-172.31.31.235:22-139.178.68.195:56282.service: Deactivated successfully. Sep 6 00:21:43.631575 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:21:43.633686 systemd-logind[1726]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:21:43.634974 systemd-logind[1726]: Removed session 8. Sep 6 00:21:43.771228 env[1744]: time="2025-09-06T00:21:43.771106594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:43.771228 env[1744]: time="2025-09-06T00:21:43.771186106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:43.774250 env[1744]: time="2025-09-06T00:21:43.771201041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:43.774250 env[1744]: time="2025-09-06T00:21:43.772047452Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2be180cbfbb9f26923789c79dbbcdc5f206b57787fed6336876685abb22f60ce pid=3893 runtime=io.containerd.runc.v2 Sep 6 00:21:43.795582 env[1744]: time="2025-09-06T00:21:43.793746504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:43.795582 env[1744]: time="2025-09-06T00:21:43.793883764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:43.795582 env[1744]: time="2025-09-06T00:21:43.793917326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:43.795582 env[1744]: time="2025-09-06T00:21:43.794204339Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0080fceb9d7944c0a1b670aff82cff8c03f6f994a633d03b6b7373c1e117d1e5 pid=3903 runtime=io.containerd.runc.v2 Sep 6 00:21:43.828608 systemd[1]: Started cri-containerd-2be180cbfbb9f26923789c79dbbcdc5f206b57787fed6336876685abb22f60ce.scope. Sep 6 00:21:43.855736 systemd[1]: run-containerd-runc-k8s.io-0080fceb9d7944c0a1b670aff82cff8c03f6f994a633d03b6b7373c1e117d1e5-runc.Ljs2ZC.mount: Deactivated successfully. Sep 6 00:21:43.860686 systemd[1]: Started cri-containerd-0080fceb9d7944c0a1b670aff82cff8c03f6f994a633d03b6b7373c1e117d1e5.scope. 
Sep 6 00:21:43.962079 env[1744]: time="2025-09-06T00:21:43.962033350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wnfw2,Uid:8e47694c-d71e-45a8-9a63-8b79bca7a6c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2be180cbfbb9f26923789c79dbbcdc5f206b57787fed6336876685abb22f60ce\"" Sep 6 00:21:43.965751 env[1744]: time="2025-09-06T00:21:43.965709206Z" level=info msg="CreateContainer within sandbox \"2be180cbfbb9f26923789c79dbbcdc5f206b57787fed6336876685abb22f60ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:21:44.000863 env[1744]: time="2025-09-06T00:21:44.000800003Z" level=info msg="CreateContainer within sandbox \"2be180cbfbb9f26923789c79dbbcdc5f206b57787fed6336876685abb22f60ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cde4f95a797786474ad1759f083ad41022003459fde0fcb103d4782bff8267f\"" Sep 6 00:21:44.001818 env[1744]: time="2025-09-06T00:21:44.001777309Z" level=info msg="StartContainer for \"7cde4f95a797786474ad1759f083ad41022003459fde0fcb103d4782bff8267f\"" Sep 6 00:21:44.002649 env[1744]: time="2025-09-06T00:21:44.002604140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vswjf,Uid:3b223213-e560-4a42-a568-3141137d58e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0080fceb9d7944c0a1b670aff82cff8c03f6f994a633d03b6b7373c1e117d1e5\"" Sep 6 00:21:44.010398 env[1744]: time="2025-09-06T00:21:44.006286245Z" level=info msg="CreateContainer within sandbox \"0080fceb9d7944c0a1b670aff82cff8c03f6f994a633d03b6b7373c1e117d1e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:21:44.035803 env[1744]: time="2025-09-06T00:21:44.035702661Z" level=info msg="CreateContainer within sandbox \"0080fceb9d7944c0a1b670aff82cff8c03f6f994a633d03b6b7373c1e117d1e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68a3bbfa72c6d6ef89e4c3a74795a88d8418dbb7fa9cf901643eeff028562adf\"" Sep 6 00:21:44.040839 env[1744]: 
time="2025-09-06T00:21:44.040784020Z" level=info msg="StartContainer for \"68a3bbfa72c6d6ef89e4c3a74795a88d8418dbb7fa9cf901643eeff028562adf\"" Sep 6 00:21:44.044488 systemd[1]: Started cri-containerd-7cde4f95a797786474ad1759f083ad41022003459fde0fcb103d4782bff8267f.scope. Sep 6 00:21:44.088144 systemd[1]: Started cri-containerd-68a3bbfa72c6d6ef89e4c3a74795a88d8418dbb7fa9cf901643eeff028562adf.scope. Sep 6 00:21:44.118439 env[1744]: time="2025-09-06T00:21:44.118380164Z" level=info msg="StartContainer for \"7cde4f95a797786474ad1759f083ad41022003459fde0fcb103d4782bff8267f\" returns successfully" Sep 6 00:21:44.150825 env[1744]: time="2025-09-06T00:21:44.150774020Z" level=info msg="StartContainer for \"68a3bbfa72c6d6ef89e4c3a74795a88d8418dbb7fa9cf901643eeff028562adf\" returns successfully" Sep 6 00:21:44.985220 kubelet[2579]: I0906 00:21:44.985010 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wnfw2" podStartSLOduration=50.98499208 podStartE2EDuration="50.98499208s" podCreationTimestamp="2025-09-06 00:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:44.983136512 +0000 UTC m=+56.409261241" watchObservedRunningTime="2025-09-06 00:21:44.98499208 +0000 UTC m=+56.411116806" Sep 6 00:21:44.997729 kubelet[2579]: I0906 00:21:44.997668 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vswjf" podStartSLOduration=50.99764565 podStartE2EDuration="50.99764565s" podCreationTimestamp="2025-09-06 00:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:44.996675098 +0000 UTC m=+56.422799828" watchObservedRunningTime="2025-09-06 00:21:44.99764565 +0000 UTC m=+56.423770378" Sep 6 00:21:48.654923 systemd[1]: Started 
sshd@8-172.31.31.235:22-139.178.68.195:56286.service. Sep 6 00:21:48.834959 sshd[4052]: Accepted publickey for core from 139.178.68.195 port 56286 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:21:48.837216 sshd[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:48.842467 systemd[1]: Started session-9.scope. Sep 6 00:21:48.842852 systemd-logind[1726]: New session 9 of user core. Sep 6 00:21:49.144923 sshd[4052]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:49.147752 systemd[1]: sshd@8-172.31.31.235:22-139.178.68.195:56286.service: Deactivated successfully. Sep 6 00:21:49.148530 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:21:49.149446 systemd-logind[1726]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:21:49.150470 systemd-logind[1726]: Removed session 9. Sep 6 00:21:49.170451 systemd[1]: Started sshd@9-172.31.31.235:22-139.178.68.195:56288.service. Sep 6 00:21:49.326218 sshd[4066]: Accepted publickey for core from 139.178.68.195 port 56288 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:21:49.327723 sshd[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:49.333089 systemd[1]: Started session-10.scope. Sep 6 00:21:49.333435 systemd-logind[1726]: New session 10 of user core. Sep 6 00:21:49.613354 sshd[4066]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:49.617544 systemd-logind[1726]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:21:49.619588 systemd[1]: sshd@9-172.31.31.235:22-139.178.68.195:56288.service: Deactivated successfully. Sep 6 00:21:49.620593 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:21:49.622433 systemd-logind[1726]: Removed session 10. Sep 6 00:21:49.637983 systemd[1]: Started sshd@10-172.31.31.235:22-139.178.68.195:56294.service. 
Sep 6 00:21:49.806478 sshd[4076]: Accepted publickey for core from 139.178.68.195 port 56294 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:21:49.808035 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:49.812632 systemd-logind[1726]: New session 11 of user core. Sep 6 00:21:49.813807 systemd[1]: Started session-11.scope. Sep 6 00:21:50.025753 sshd[4076]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:50.029711 systemd-logind[1726]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:21:50.030115 systemd[1]: sshd@10-172.31.31.235:22-139.178.68.195:56294.service: Deactivated successfully. Sep 6 00:21:50.031140 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:21:50.032481 systemd-logind[1726]: Removed session 11. Sep 6 00:21:55.054051 systemd[1]: Started sshd@11-172.31.31.235:22-139.178.68.195:55062.service. Sep 6 00:21:55.216194 sshd[4087]: Accepted publickey for core from 139.178.68.195 port 55062 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:21:55.218068 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:55.223686 systemd-logind[1726]: New session 12 of user core. Sep 6 00:21:55.223976 systemd[1]: Started session-12.scope. Sep 6 00:21:55.423059 sshd[4087]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:55.426671 systemd[1]: sshd@11-172.31.31.235:22-139.178.68.195:55062.service: Deactivated successfully. Sep 6 00:21:55.427638 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:21:55.428472 systemd-logind[1726]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:21:55.429787 systemd-logind[1726]: Removed session 12. Sep 6 00:22:00.449522 systemd[1]: Started sshd@12-172.31.31.235:22-139.178.68.195:52576.service. 
Sep 6 00:22:00.610081 sshd[4102]: Accepted publickey for core from 139.178.68.195 port 52576 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:22:00.611459 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:00.618761 systemd[1]: Started session-13.scope. Sep 6 00:22:00.619602 systemd-logind[1726]: New session 13 of user core. Sep 6 00:22:00.822117 sshd[4102]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:00.826088 systemd-logind[1726]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:22:00.826219 systemd[1]: sshd@12-172.31.31.235:22-139.178.68.195:52576.service: Deactivated successfully. Sep 6 00:22:00.827212 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:22:00.828229 systemd-logind[1726]: Removed session 13. Sep 6 00:22:00.847202 systemd[1]: Started sshd@13-172.31.31.235:22-139.178.68.195:52590.service. Sep 6 00:22:00.999125 sshd[4113]: Accepted publickey for core from 139.178.68.195 port 52590 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:22:01.000542 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:01.006610 systemd[1]: Started session-14.scope. Sep 6 00:22:01.007107 systemd-logind[1726]: New session 14 of user core. Sep 6 00:22:02.397984 sshd[4113]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:02.434884 systemd[1]: sshd@13-172.31.31.235:22-139.178.68.195:52590.service: Deactivated successfully. Sep 6 00:22:02.437043 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:22:02.444124 systemd-logind[1726]: Session 14 logged out. Waiting for processes to exit. Sep 6 00:22:02.451495 systemd[1]: Started sshd@14-172.31.31.235:22-139.178.68.195:52594.service. Sep 6 00:22:02.454827 systemd-logind[1726]: Removed session 14. 
Sep 6 00:22:02.666200 sshd[4123]: Accepted publickey for core from 139.178.68.195 port 52594 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:22:02.670122 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:02.705153 systemd[1]: Started session-15.scope. Sep 6 00:22:02.705772 systemd-logind[1726]: New session 15 of user core. Sep 6 00:22:04.473680 sshd[4123]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:04.477361 systemd[1]: sshd@14-172.31.31.235:22-139.178.68.195:52594.service: Deactivated successfully. Sep 6 00:22:04.478769 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:22:04.480025 systemd-logind[1726]: Session 15 logged out. Waiting for processes to exit. Sep 6 00:22:04.481275 systemd-logind[1726]: Removed session 15. Sep 6 00:22:04.499254 systemd[1]: Started sshd@15-172.31.31.235:22-139.178.68.195:52610.service. Sep 6 00:22:04.660869 sshd[4140]: Accepted publickey for core from 139.178.68.195 port 52610 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:22:04.662508 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:04.669665 systemd-logind[1726]: New session 16 of user core. Sep 6 00:22:04.669722 systemd[1]: Started session-16.scope. Sep 6 00:22:05.165183 sshd[4140]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:05.168231 systemd[1]: sshd@15-172.31.31.235:22-139.178.68.195:52610.service: Deactivated successfully. Sep 6 00:22:05.169131 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:22:05.169721 systemd-logind[1726]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:22:05.170509 systemd-logind[1726]: Removed session 16. Sep 6 00:22:05.191862 systemd[1]: Started sshd@16-172.31.31.235:22-139.178.68.195:52614.service. 
Sep 6 00:22:05.349232 sshd[4150]: Accepted publickey for core from 139.178.68.195 port 52614 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:22:05.350952 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:05.357138 systemd[1]: Started session-17.scope. Sep 6 00:22:05.358108 systemd-logind[1726]: New session 17 of user core. Sep 6 00:22:05.561124 sshd[4150]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:05.564674 systemd[1]: sshd@16-172.31.31.235:22-139.178.68.195:52614.service: Deactivated successfully. Sep 6 00:22:05.565622 systemd-logind[1726]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:22:05.565667 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:22:05.567124 systemd-logind[1726]: Removed session 17. Sep 6 00:22:10.584921 systemd[1]: Started sshd@17-172.31.31.235:22-139.178.68.195:35744.service. Sep 6 00:22:10.746085 sshd[4166]: Accepted publickey for core from 139.178.68.195 port 35744 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:22:10.747810 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:10.753702 systemd[1]: Started session-18.scope. Sep 6 00:22:10.754367 systemd-logind[1726]: New session 18 of user core. Sep 6 00:22:10.953738 sshd[4166]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:10.956663 systemd-logind[1726]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:22:10.956845 systemd[1]: sshd@17-172.31.31.235:22-139.178.68.195:35744.service: Deactivated successfully. Sep 6 00:22:10.957767 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:22:10.958479 systemd-logind[1726]: Removed session 18. Sep 6 00:22:15.980253 systemd[1]: Started sshd@18-172.31.31.235:22-139.178.68.195:35760.service. 
Sep 6 00:22:16.140750 sshd[4178]: Accepted publickey for core from 139.178.68.195 port 35760 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc
Sep 6 00:22:16.142751 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:22:16.148267 systemd[1]: Started session-19.scope.
Sep 6 00:22:16.148763 systemd-logind[1726]: New session 19 of user core.
Sep 6 00:22:16.370410 sshd[4178]: pam_unix(sshd:session): session closed for user core
Sep 6 00:22:16.373542 systemd[1]: sshd@18-172.31.31.235:22-139.178.68.195:35760.service: Deactivated successfully.
Sep 6 00:22:16.374292 systemd[1]: session-19.scope: Deactivated successfully.
Sep 6 00:22:16.374716 systemd-logind[1726]: Session 19 logged out. Waiting for processes to exit.
Sep 6 00:22:16.375547 systemd-logind[1726]: Removed session 19.
Sep 6 00:22:21.407719 systemd[1]: Started sshd@19-172.31.31.235:22-139.178.68.195:39534.service.
Sep 6 00:22:21.589776 sshd[4190]: Accepted publickey for core from 139.178.68.195 port 39534 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc
Sep 6 00:22:21.591385 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:22:21.597143 systemd[1]: Started session-20.scope.
Sep 6 00:22:21.598152 systemd-logind[1726]: New session 20 of user core.
Sep 6 00:22:21.802174 sshd[4190]: pam_unix(sshd:session): session closed for user core
Sep 6 00:22:21.805380 systemd[1]: sshd@19-172.31.31.235:22-139.178.68.195:39534.service: Deactivated successfully.
Sep 6 00:22:21.806426 systemd[1]: session-20.scope: Deactivated successfully.
Sep 6 00:22:21.807296 systemd-logind[1726]: Session 20 logged out. Waiting for processes to exit.
Sep 6 00:22:21.808286 systemd-logind[1726]: Removed session 20.
Sep 6 00:22:21.828252 systemd[1]: Started sshd@20-172.31.31.235:22-139.178.68.195:39542.service.
Sep 6 00:22:21.988516 sshd[4202]: Accepted publickey for core from 139.178.68.195 port 39542 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc
Sep 6 00:22:21.990279 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:22:21.995645 systemd-logind[1726]: New session 21 of user core.
Sep 6 00:22:21.996294 systemd[1]: Started session-21.scope.
Sep 6 00:22:23.753210 systemd[1]: run-containerd-runc-k8s.io-df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59-runc.Iw80k2.mount: Deactivated successfully.
Sep 6 00:22:23.755321 env[1744]: time="2025-09-06T00:22:23.755268135Z" level=info msg="StopContainer for \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\" with timeout 30 (s)"
Sep 6 00:22:23.756761 env[1744]: time="2025-09-06T00:22:23.756717293Z" level=info msg="Stop container \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\" with signal terminated"
Sep 6 00:22:23.774242 systemd[1]: cri-containerd-6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d.scope: Deactivated successfully.
Sep 6 00:22:23.794141 env[1744]: time="2025-09-06T00:22:23.794070146Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:22:23.804243 env[1744]: time="2025-09-06T00:22:23.804124466Z" level=info msg="StopContainer for \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\" with timeout 2 (s)"
Sep 6 00:22:23.804860 env[1744]: time="2025-09-06T00:22:23.804829189Z" level=info msg="Stop container \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\" with signal terminated"
Sep 6 00:22:23.807643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d-rootfs.mount: Deactivated successfully.
Sep 6 00:22:23.816763 systemd-networkd[1466]: lxc_health: Link DOWN
Sep 6 00:22:23.816775 systemd-networkd[1466]: lxc_health: Lost carrier
Sep 6 00:22:23.833290 env[1744]: time="2025-09-06T00:22:23.833207608Z" level=info msg="shim disconnected" id=6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d
Sep 6 00:22:23.833290 env[1744]: time="2025-09-06T00:22:23.833265971Z" level=warning msg="cleaning up after shim disconnected" id=6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d namespace=k8s.io
Sep 6 00:22:23.833290 env[1744]: time="2025-09-06T00:22:23.833280888Z" level=info msg="cleaning up dead shim"
Sep 6 00:22:23.848064 env[1744]: time="2025-09-06T00:22:23.848014161Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4258 runtime=io.containerd.runc.v2\n"
Sep 6 00:22:23.851904 env[1744]: time="2025-09-06T00:22:23.851857194Z" level=info msg="StopContainer for \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\" returns successfully"
Sep 6 00:22:23.859497 env[1744]: time="2025-09-06T00:22:23.852908354Z" level=info msg="StopPodSandbox for \"c0f5bb644149611f9fc4130f6b2131c4ab87b026ef6857250d8b310fb2859c08\""
Sep 6 00:22:23.859497 env[1744]: time="2025-09-06T00:22:23.852980293Z" level=info msg="Container to stop \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:22:23.857309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0f5bb644149611f9fc4130f6b2131c4ab87b026ef6857250d8b310fb2859c08-shm.mount: Deactivated successfully.
Sep 6 00:22:23.858475 systemd[1]: cri-containerd-df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59.scope: Deactivated successfully.
Sep 6 00:22:23.858996 systemd[1]: cri-containerd-df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59.scope: Consumed 8.192s CPU time.
Sep 6 00:22:23.871782 systemd[1]: cri-containerd-c0f5bb644149611f9fc4130f6b2131c4ab87b026ef6857250d8b310fb2859c08.scope: Deactivated successfully.
Sep 6 00:22:23.894690 kubelet[2579]: E0906 00:22:23.894616 2579 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:22:23.910426 env[1744]: time="2025-09-06T00:22:23.910337622Z" level=info msg="shim disconnected" id=df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59
Sep 6 00:22:23.910426 env[1744]: time="2025-09-06T00:22:23.910402772Z" level=warning msg="cleaning up after shim disconnected" id=df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59 namespace=k8s.io
Sep 6 00:22:23.910426 env[1744]: time="2025-09-06T00:22:23.910415563Z" level=info msg="cleaning up dead shim"
Sep 6 00:22:23.911372 env[1744]: time="2025-09-06T00:22:23.911327333Z" level=info msg="shim disconnected" id=c0f5bb644149611f9fc4130f6b2131c4ab87b026ef6857250d8b310fb2859c08
Sep 6 00:22:23.911372 env[1744]: time="2025-09-06T00:22:23.911370990Z" level=warning msg="cleaning up after shim disconnected" id=c0f5bb644149611f9fc4130f6b2131c4ab87b026ef6857250d8b310fb2859c08 namespace=k8s.io
Sep 6 00:22:23.911645 env[1744]: time="2025-09-06T00:22:23.911382602Z" level=info msg="cleaning up dead shim"
Sep 6 00:22:23.926353 env[1744]: time="2025-09-06T00:22:23.925092961Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4304 runtime=io.containerd.runc.v2\n"
Sep 6 00:22:23.930604 env[1744]: time="2025-09-06T00:22:23.927543541Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4305 runtime=io.containerd.runc.v2\n"
Sep 6 00:22:23.930604 env[1744]: time="2025-09-06T00:22:23.928238695Z" level=info msg="TearDown network for sandbox \"c0f5bb644149611f9fc4130f6b2131c4ab87b026ef6857250d8b310fb2859c08\" successfully"
Sep 6 00:22:23.930604 env[1744]: time="2025-09-06T00:22:23.928291510Z" level=info msg="StopPodSandbox for \"c0f5bb644149611f9fc4130f6b2131c4ab87b026ef6857250d8b310fb2859c08\" returns successfully"
Sep 6 00:22:23.930604 env[1744]: time="2025-09-06T00:22:23.929202873Z" level=info msg="StopContainer for \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\" returns successfully"
Sep 6 00:22:23.933832 env[1744]: time="2025-09-06T00:22:23.933794278Z" level=info msg="StopPodSandbox for \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\""
Sep 6 00:22:23.934937 env[1744]: time="2025-09-06T00:22:23.934258383Z" level=info msg="Container to stop \"6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:22:23.935208 env[1744]: time="2025-09-06T00:22:23.935182686Z" level=info msg="Container to stop \"64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:22:23.935337 env[1744]: time="2025-09-06T00:22:23.935314388Z" level=info msg="Container to stop \"d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:22:23.935446 env[1744]: time="2025-09-06T00:22:23.935426034Z" level=info msg="Container to stop \"498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:22:23.935571 env[1744]: time="2025-09-06T00:22:23.935534633Z" level=info msg="Container to stop \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:22:23.951148 systemd[1]: cri-containerd-8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6.scope: Deactivated successfully.
Sep 6 00:22:23.976714 kubelet[2579]: I0906 00:22:23.976435 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce359e0c-f76e-426c-9891-786986f206a3-cilium-config-path\") pod \"ce359e0c-f76e-426c-9891-786986f206a3\" (UID: \"ce359e0c-f76e-426c-9891-786986f206a3\") "
Sep 6 00:22:23.976714 kubelet[2579]: I0906 00:22:23.976501 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khh7j\" (UniqueName: \"kubernetes.io/projected/ce359e0c-f76e-426c-9891-786986f206a3-kube-api-access-khh7j\") pod \"ce359e0c-f76e-426c-9891-786986f206a3\" (UID: \"ce359e0c-f76e-426c-9891-786986f206a3\") "
Sep 6 00:22:23.991744 env[1744]: time="2025-09-06T00:22:23.991684219Z" level=info msg="shim disconnected" id=8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6
Sep 6 00:22:23.991744 env[1744]: time="2025-09-06T00:22:23.991732211Z" level=warning msg="cleaning up after shim disconnected" id=8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6 namespace=k8s.io
Sep 6 00:22:23.991744 env[1744]: time="2025-09-06T00:22:23.991741225Z" level=info msg="cleaning up dead shim"
Sep 6 00:22:23.994633 kubelet[2579]: I0906 00:22:23.991280 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce359e0c-f76e-426c-9891-786986f206a3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ce359e0c-f76e-426c-9891-786986f206a3" (UID: "ce359e0c-f76e-426c-9891-786986f206a3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 6 00:22:24.004527 env[1744]: time="2025-09-06T00:22:24.003944461Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4349 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:22:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Sep 6 00:22:24.004527 env[1744]: time="2025-09-06T00:22:24.004345673Z" level=info msg="TearDown network for sandbox \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\" successfully"
Sep 6 00:22:24.004527 env[1744]: time="2025-09-06T00:22:24.004376214Z" level=info msg="StopPodSandbox for \"8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6\" returns successfully"
Sep 6 00:22:24.009466 kubelet[2579]: I0906 00:22:24.009392 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce359e0c-f76e-426c-9891-786986f206a3-kube-api-access-khh7j" (OuterVolumeSpecName: "kube-api-access-khh7j") pod "ce359e0c-f76e-426c-9891-786986f206a3" (UID: "ce359e0c-f76e-426c-9891-786986f206a3"). InnerVolumeSpecName "kube-api-access-khh7j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:22:24.077596 kubelet[2579]: I0906 00:22:24.077319 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-xtables-lock\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.077596 kubelet[2579]: I0906 00:22:24.077364 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-hostproc\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.077596 kubelet[2579]: I0906 00:22:24.077392 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2ed4e30-d62f-4ef9-bfb1-73d588563199-clustermesh-secrets\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.077596 kubelet[2579]: I0906 00:22:24.077408 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cni-path\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.077596 kubelet[2579]: I0906 00:22:24.077422 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-host-proc-sys-net\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.077596 kubelet[2579]: I0906 00:22:24.077437 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cilium-run\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.078014 kubelet[2579]: I0906 00:22:24.077456 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cilium-config-path\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.078014 kubelet[2579]: I0906 00:22:24.077473 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g44bk\" (UniqueName: \"kubernetes.io/projected/e2ed4e30-d62f-4ef9-bfb1-73d588563199-kube-api-access-g44bk\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.078014 kubelet[2579]: I0906 00:22:24.077489 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cilium-cgroup\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.078014 kubelet[2579]: I0906 00:22:24.077502 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-lib-modules\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.078014 kubelet[2579]: I0906 00:22:24.077518 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-host-proc-sys-kernel\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.078014 kubelet[2579]: I0906 00:22:24.077532 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-etc-cni-netd\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.078173 kubelet[2579]: I0906 00:22:24.077580 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2ed4e30-d62f-4ef9-bfb1-73d588563199-hubble-tls\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.078173 kubelet[2579]: I0906 00:22:24.077599 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-bpf-maps\") pod \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\" (UID: \"e2ed4e30-d62f-4ef9-bfb1-73d588563199\") "
Sep 6 00:22:24.078173 kubelet[2579]: I0906 00:22:24.077638 2579 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce359e0c-f76e-426c-9891-786986f206a3-cilium-config-path\") on node \"ip-172-31-31-235\" DevicePath \"\""
Sep 6 00:22:24.078173 kubelet[2579]: I0906 00:22:24.077648 2579 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khh7j\" (UniqueName: \"kubernetes.io/projected/ce359e0c-f76e-426c-9891-786986f206a3-kube-api-access-khh7j\") on node \"ip-172-31-31-235\" DevicePath \"\""
Sep 6 00:22:24.078173 kubelet[2579]: I0906 00:22:24.077789 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:22:24.078173 kubelet[2579]: I0906 00:22:24.077833 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:22:24.078327 kubelet[2579]: I0906 00:22:24.077848 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-hostproc" (OuterVolumeSpecName: "hostproc") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:22:24.078516 kubelet[2579]: I0906 00:22:24.078486 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:22:24.078589 kubelet[2579]: I0906 00:22:24.078529 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:22:24.078589 kubelet[2579]: I0906 00:22:24.078545 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:22:24.078589 kubelet[2579]: I0906 00:22:24.078574 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:22:24.078813 kubelet[2579]: I0906 00:22:24.078796 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:22:24.078875 kubelet[2579]: I0906 00:22:24.078821 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cni-path" (OuterVolumeSpecName: "cni-path") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:22:24.078875 kubelet[2579]: I0906 00:22:24.078836 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:22:24.081433 kubelet[2579]: I0906 00:22:24.081399 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 6 00:22:24.083900 kubelet[2579]: I0906 00:22:24.083873 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ed4e30-d62f-4ef9-bfb1-73d588563199-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 6 00:22:24.089478 kubelet[2579]: I0906 00:22:24.089435 2579 scope.go:117] "RemoveContainer" containerID="df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59"
Sep 6 00:22:24.090405 systemd[1]: Removed slice kubepods-besteffort-podce359e0c_f76e_426c_9891_786986f206a3.slice.
Sep 6 00:22:24.092425 env[1744]: time="2025-09-06T00:22:24.092091241Z" level=info msg="RemoveContainer for \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\""
Sep 6 00:22:24.100181 kubelet[2579]: I0906 00:22:24.100143 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ed4e30-d62f-4ef9-bfb1-73d588563199-kube-api-access-g44bk" (OuterVolumeSpecName: "kube-api-access-g44bk") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "kube-api-access-g44bk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:22:24.103020 env[1744]: time="2025-09-06T00:22:24.102872808Z" level=info msg="RemoveContainer for \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\" returns successfully"
Sep 6 00:22:24.103158 kubelet[2579]: I0906 00:22:24.103130 2579 scope.go:117] "RemoveContainer" containerID="498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf"
Sep 6 00:22:24.104762 kubelet[2579]: I0906 00:22:24.103754 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ed4e30-d62f-4ef9-bfb1-73d588563199-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e2ed4e30-d62f-4ef9-bfb1-73d588563199" (UID: "e2ed4e30-d62f-4ef9-bfb1-73d588563199"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:22:24.109393 env[1744]: time="2025-09-06T00:22:24.109063109Z" level=info msg="RemoveContainer for \"498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf\""
Sep 6 00:22:24.114401 env[1744]: time="2025-09-06T00:22:24.114348134Z" level=info msg="RemoveContainer for \"498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf\" returns successfully"
Sep 6 00:22:24.114791 kubelet[2579]: I0906 00:22:24.114766 2579 scope.go:117] "RemoveContainer" containerID="d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa"
Sep 6 00:22:24.115900 env[1744]: time="2025-09-06T00:22:24.115848193Z" level=info msg="RemoveContainer for \"d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa\""
Sep 6 00:22:24.121258 env[1744]: time="2025-09-06T00:22:24.121157425Z" level=info msg="RemoveContainer for \"d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa\" returns successfully"
Sep 6 00:22:24.121392 kubelet[2579]: I0906 00:22:24.121372 2579 scope.go:117] "RemoveContainer" containerID="64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b"
Sep 6 00:22:24.122966 env[1744]: time="2025-09-06T00:22:24.122917426Z" level=info msg="RemoveContainer for \"64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b\""
Sep 6 00:22:24.128384 env[1744]: time="2025-09-06T00:22:24.128311383Z" level=info msg="RemoveContainer for \"64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b\" returns successfully"
Sep 6 00:22:24.128668 kubelet[2579]: I0906 00:22:24.128631 2579 scope.go:117] "RemoveContainer" containerID="6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd"
Sep 6 00:22:24.130053 env[1744]: time="2025-09-06T00:22:24.129995310Z" level=info msg="RemoveContainer for \"6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd\""
Sep 6 00:22:24.135511 env[1744]: time="2025-09-06T00:22:24.135464891Z" level=info msg="RemoveContainer for \"6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd\" returns successfully"
Sep 6 00:22:24.135843 kubelet[2579]: I0906 00:22:24.135816 2579 scope.go:117] "RemoveContainer" containerID="df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59"
Sep 6 00:22:24.136186 env[1744]: time="2025-09-06T00:22:24.136113220Z" level=error msg="ContainerStatus for \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\": not found"
Sep 6 00:22:24.137764 kubelet[2579]: E0906 00:22:24.137718 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\": not found" containerID="df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59"
Sep 6 00:22:24.137893 kubelet[2579]: I0906 00:22:24.137770 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59"} err="failed to get container status \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\": rpc error: code = NotFound desc = an error occurred when try to find container \"df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59\": not found"
Sep 6 00:22:24.137893 kubelet[2579]: I0906 00:22:24.137885 2579 scope.go:117] "RemoveContainer" containerID="498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf"
Sep 6 00:22:24.138249 env[1744]: time="2025-09-06T00:22:24.138126655Z" level=error msg="ContainerStatus for \"498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf\": not found"
Sep 6 00:22:24.138338 kubelet[2579]: E0906 00:22:24.138293 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf\": not found" containerID="498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf"
Sep 6 00:22:24.138338 kubelet[2579]: I0906 00:22:24.138317 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf"} err="failed to get container status \"498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"498284c59c9fe9e414e4ba14eb160c7e599dc0475a073138cac7e4c918e429cf\": not found"
Sep 6 00:22:24.138338 kubelet[2579]: I0906 00:22:24.138334 2579 scope.go:117] "RemoveContainer" containerID="d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa"
Sep 6 00:22:24.138546 env[1744]: time="2025-09-06T00:22:24.138488161Z" level=error msg="ContainerStatus for \"d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa\": not found"
Sep 6 00:22:24.138659 kubelet[2579]: E0906 00:22:24.138638 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa\": not found" containerID="d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa"
Sep 6 00:22:24.138699 kubelet[2579]: I0906 00:22:24.138662 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa"} err="failed to get container status \"d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5e13f404f1fcafa5167b0cc44b7d6d15050dbc35c853f0a17f889ed06294eaa\": not found"
Sep 6 00:22:24.138699 kubelet[2579]: I0906 00:22:24.138675 2579 scope.go:117] "RemoveContainer" containerID="64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b"
Sep 6 00:22:24.138964 env[1744]: time="2025-09-06T00:22:24.138797701Z" level=error msg="ContainerStatus for \"64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b\": not found"
Sep 6 00:22:24.139152 kubelet[2579]: E0906 00:22:24.139119 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b\": not found" containerID="64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b"
Sep 6 00:22:24.139255 kubelet[2579]: I0906 00:22:24.139154 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b"} err="failed to get container status \"64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b\": rpc error: code = NotFound desc = an error occurred when try to find container \"64a85d38f0cefbf378d8b25ad7a12f6f59d37a296c538cb1aecf862cb2bc579b\": not found"
Sep 6 00:22:24.139255 kubelet[2579]: I0906 00:22:24.139186 2579 scope.go:117] "RemoveContainer" containerID="6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd"
Sep 6 00:22:24.139437 env[1744]: time="2025-09-06T00:22:24.139381803Z" level=error msg="ContainerStatus for \"6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd\": not found"
Sep 6 00:22:24.139585 kubelet[2579]: E0906 00:22:24.139548 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd\": not found" containerID="6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd"
Sep 6 00:22:24.139676 kubelet[2579]: I0906 00:22:24.139592 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd"} err="failed to get container status \"6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c3d50c0316a9b1b10c3313bbba8a40fb37787b19d942db53ec1218dd35b51dd\": not found"
Sep 6 00:22:24.139676 kubelet[2579]: I0906 00:22:24.139615 2579 scope.go:117] "RemoveContainer" containerID="6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d"
Sep 6 00:22:24.140852 env[1744]: time="2025-09-06T00:22:24.140824811Z" level=info msg="RemoveContainer for \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\""
Sep 6 00:22:24.146691 env[1744]: time="2025-09-06T00:22:24.146641039Z" level=info msg="RemoveContainer for \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\" returns successfully"
Sep 6 00:22:24.146972 kubelet[2579]: I0906 00:22:24.146944 2579 scope.go:117] "RemoveContainer" containerID="6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d"
Sep 6 00:22:24.147385 env[1744]: time="2025-09-06T00:22:24.147269209Z" level=error msg="ContainerStatus for \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\": not found"
Sep 6 00:22:24.147543 kubelet[2579]: E0906 00:22:24.147510 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\": not found" containerID="6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d"
Sep 6 00:22:24.147683 kubelet[2579]: I0906 00:22:24.147552 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d"} err="failed to get container status \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a126dd4a7db50d7f5fde7af8eb5dac9c2ebe06d1ca81dab736d0a52708d079d\": not found"
Sep 6 00:22:24.177814 kubelet[2579]: I0906 00:22:24.177776 2579 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2ed4e30-d62f-4ef9-bfb1-73d588563199-clustermesh-secrets\") on node \"ip-172-31-31-235\" DevicePath \"\""
Sep 6 00:22:24.177814 kubelet[2579]: I0906 00:22:24.177810 2579 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cni-path\") on node \"ip-172-31-31-235\" DevicePath \"\""
Sep 6 00:22:24.177814 kubelet[2579]: I0906 00:22:24.177824 2579 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-host-proc-sys-net\") on node \"ip-172-31-31-235\" DevicePath \"\""
Sep 6 00:22:24.178051 kubelet[2579]: I0906 00:22:24.177835
2579 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cilium-run\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:24.178051 kubelet[2579]: I0906 00:22:24.177847 2579 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cilium-config-path\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:24.178051 kubelet[2579]: I0906 00:22:24.177854 2579 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-cilium-cgroup\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:24.178051 kubelet[2579]: I0906 00:22:24.177862 2579 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g44bk\" (UniqueName: \"kubernetes.io/projected/e2ed4e30-d62f-4ef9-bfb1-73d588563199-kube-api-access-g44bk\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:24.178051 kubelet[2579]: I0906 00:22:24.177871 2579 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-lib-modules\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:24.178051 kubelet[2579]: I0906 00:22:24.177880 2579 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-host-proc-sys-kernel\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:24.178051 kubelet[2579]: I0906 00:22:24.177891 2579 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-etc-cni-netd\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:24.178051 kubelet[2579]: I0906 00:22:24.177898 2579 reconciler_common.go:293] "Volume 
detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2ed4e30-d62f-4ef9-bfb1-73d588563199-hubble-tls\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:24.178266 kubelet[2579]: I0906 00:22:24.177906 2579 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-bpf-maps\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:24.178266 kubelet[2579]: I0906 00:22:24.177913 2579 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-xtables-lock\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:24.178266 kubelet[2579]: I0906 00:22:24.177932 2579 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2ed4e30-d62f-4ef9-bfb1-73d588563199-hostproc\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:24.390243 systemd[1]: Removed slice kubepods-burstable-pode2ed4e30_d62f_4ef9_bfb1_73d588563199.slice. Sep 6 00:22:24.390332 systemd[1]: kubepods-burstable-pode2ed4e30_d62f_4ef9_bfb1_73d588563199.slice: Consumed 8.312s CPU time. Sep 6 00:22:24.743263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df9baf9d2c3acb9ef1376adc998786c1cb5665dc247d757182362c52ffde3d59-rootfs.mount: Deactivated successfully. Sep 6 00:22:24.743906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6-rootfs.mount: Deactivated successfully. Sep 6 00:22:24.744170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8bee1c05dc2b2020775a841f6b8c41b06773a16cb8bd3a51887782957a9520f6-shm.mount: Deactivated successfully. Sep 6 00:22:24.744270 systemd[1]: var-lib-kubelet-pods-e2ed4e30\x2dd62f\x2d4ef9\x2dbfb1\x2d73d588563199-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg44bk.mount: Deactivated successfully. 
Sep 6 00:22:24.744361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0f5bb644149611f9fc4130f6b2131c4ab87b026ef6857250d8b310fb2859c08-rootfs.mount: Deactivated successfully. Sep 6 00:22:24.744466 systemd[1]: var-lib-kubelet-pods-ce359e0c\x2df76e\x2d426c\x2d9891\x2d786986f206a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkhh7j.mount: Deactivated successfully. Sep 6 00:22:24.744575 systemd[1]: var-lib-kubelet-pods-e2ed4e30\x2dd62f\x2d4ef9\x2dbfb1\x2d73d588563199-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:22:24.744685 systemd[1]: var-lib-kubelet-pods-e2ed4e30\x2dd62f\x2d4ef9\x2dbfb1\x2d73d588563199-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:22:24.760361 kubelet[2579]: I0906 00:22:24.760322 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce359e0c-f76e-426c-9891-786986f206a3" path="/var/lib/kubelet/pods/ce359e0c-f76e-426c-9891-786986f206a3/volumes" Sep 6 00:22:24.760800 kubelet[2579]: I0906 00:22:24.760763 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2ed4e30-d62f-4ef9-bfb1-73d588563199" path="/var/lib/kubelet/pods/e2ed4e30-d62f-4ef9-bfb1-73d588563199/volumes" Sep 6 00:22:25.691935 sshd[4202]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:25.694902 systemd[1]: sshd@20-172.31.31.235:22-139.178.68.195:39542.service: Deactivated successfully. Sep 6 00:22:25.695621 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:22:25.696197 systemd-logind[1726]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:22:25.697285 systemd-logind[1726]: Removed session 21. Sep 6 00:22:25.717988 systemd[1]: Started sshd@21-172.31.31.235:22-139.178.68.195:39546.service. 
Sep 6 00:22:25.891545 sshd[4369]: Accepted publickey for core from 139.178.68.195 port 39546 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:22:25.893272 sshd[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:25.899345 systemd[1]: Started session-22.scope. Sep 6 00:22:25.900002 systemd-logind[1726]: New session 22 of user core. Sep 6 00:22:26.883233 sshd[4369]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:26.887028 systemd[1]: sshd@21-172.31.31.235:22-139.178.68.195:39546.service: Deactivated successfully. Sep 6 00:22:26.888090 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:22:26.889458 systemd-logind[1726]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:22:26.890868 systemd-logind[1726]: Removed session 22. Sep 6 00:22:26.910636 systemd[1]: Started sshd@22-172.31.31.235:22-139.178.68.195:39556.service. Sep 6 00:22:26.924553 kubelet[2579]: E0906 00:22:26.924514 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2ed4e30-d62f-4ef9-bfb1-73d588563199" containerName="apply-sysctl-overwrites" Sep 6 00:22:26.925108 kubelet[2579]: E0906 00:22:26.925085 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2ed4e30-d62f-4ef9-bfb1-73d588563199" containerName="cilium-agent" Sep 6 00:22:26.925435 kubelet[2579]: E0906 00:22:26.925405 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce359e0c-f76e-426c-9891-786986f206a3" containerName="cilium-operator" Sep 6 00:22:26.925570 kubelet[2579]: E0906 00:22:26.925544 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2ed4e30-d62f-4ef9-bfb1-73d588563199" containerName="mount-cgroup" Sep 6 00:22:26.925718 kubelet[2579]: E0906 00:22:26.925705 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2ed4e30-d62f-4ef9-bfb1-73d588563199" containerName="mount-bpf-fs" Sep 6 00:22:26.925816 kubelet[2579]: E0906 00:22:26.925805 
2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2ed4e30-d62f-4ef9-bfb1-73d588563199" containerName="clean-cilium-state" Sep 6 00:22:26.925979 kubelet[2579]: I0906 00:22:26.925966 2579 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce359e0c-f76e-426c-9891-786986f206a3" containerName="cilium-operator" Sep 6 00:22:26.926067 kubelet[2579]: I0906 00:22:26.926056 2579 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ed4e30-d62f-4ef9-bfb1-73d588563199" containerName="cilium-agent" Sep 6 00:22:26.952121 systemd[1]: Created slice kubepods-burstable-pod42283bf1_a286_417a_9e3e_d9146ff319e5.slice. Sep 6 00:22:26.998152 kubelet[2579]: I0906 00:22:26.998082 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-etc-cni-netd\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998152 kubelet[2579]: I0906 00:22:26.998148 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-config-path\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998334 kubelet[2579]: I0906 00:22:26.998168 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42283bf1-a286-417a-9e3e-d9146ff319e5-hubble-tls\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998334 kubelet[2579]: I0906 00:22:26.998187 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-cni-path\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998334 kubelet[2579]: I0906 00:22:26.998214 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-cgroup\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998334 kubelet[2579]: I0906 00:22:26.998234 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-lib-modules\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998334 kubelet[2579]: I0906 00:22:26.998249 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42283bf1-a286-417a-9e3e-d9146ff319e5-clustermesh-secrets\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998334 kubelet[2579]: I0906 00:22:26.998265 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-hostproc\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998546 kubelet[2579]: I0906 00:22:26.998292 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-host-proc-sys-net\") pod \"cilium-cpqzh\" (UID: 
\"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998546 kubelet[2579]: I0906 00:22:26.998310 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjvmd\" (UniqueName: \"kubernetes.io/projected/42283bf1-a286-417a-9e3e-d9146ff319e5-kube-api-access-gjvmd\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998546 kubelet[2579]: I0906 00:22:26.998328 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-ipsec-secrets\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998546 kubelet[2579]: I0906 00:22:26.998342 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-host-proc-sys-kernel\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998546 kubelet[2579]: I0906 00:22:26.998372 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-bpf-maps\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998703 kubelet[2579]: I0906 00:22:26.998385 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-xtables-lock\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:26.998703 
kubelet[2579]: I0906 00:22:26.998402 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-run\") pod \"cilium-cpqzh\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " pod="kube-system/cilium-cpqzh" Sep 6 00:22:27.087946 sshd[4380]: Accepted publickey for core from 139.178.68.195 port 39556 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:22:27.089522 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:27.096181 systemd[1]: Started session-23.scope. Sep 6 00:22:27.096697 systemd-logind[1726]: New session 23 of user core. Sep 6 00:22:27.256311 env[1744]: time="2025-09-06T00:22:27.256181407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cpqzh,Uid:42283bf1-a286-417a-9e3e-d9146ff319e5,Namespace:kube-system,Attempt:0,}" Sep 6 00:22:27.284688 env[1744]: time="2025-09-06T00:22:27.283658369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:27.284688 env[1744]: time="2025-09-06T00:22:27.283717618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:27.284688 env[1744]: time="2025-09-06T00:22:27.283734826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:27.284688 env[1744]: time="2025-09-06T00:22:27.283987179Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a1e2d200bb198149e061f7d61c441e505db7bcdc6c3599fe9c49b51f73f6ec9 pid=4402 runtime=io.containerd.runc.v2 Sep 6 00:22:27.308384 systemd[1]: Started cri-containerd-8a1e2d200bb198149e061f7d61c441e505db7bcdc6c3599fe9c49b51f73f6ec9.scope. 
Sep 6 00:22:27.352482 env[1744]: time="2025-09-06T00:22:27.352431963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cpqzh,Uid:42283bf1-a286-417a-9e3e-d9146ff319e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a1e2d200bb198149e061f7d61c441e505db7bcdc6c3599fe9c49b51f73f6ec9\"" Sep 6 00:22:27.355881 env[1744]: time="2025-09-06T00:22:27.355830574Z" level=info msg="CreateContainer within sandbox \"8a1e2d200bb198149e061f7d61c441e505db7bcdc6c3599fe9c49b51f73f6ec9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:22:27.383397 env[1744]: time="2025-09-06T00:22:27.383344464Z" level=info msg="CreateContainer within sandbox \"8a1e2d200bb198149e061f7d61c441e505db7bcdc6c3599fe9c49b51f73f6ec9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9\"" Sep 6 00:22:27.384258 env[1744]: time="2025-09-06T00:22:27.384229764Z" level=info msg="StartContainer for \"d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9\"" Sep 6 00:22:27.401199 systemd[1]: Started cri-containerd-d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9.scope. Sep 6 00:22:27.412043 sshd[4380]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:27.415462 systemd[1]: sshd@22-172.31.31.235:22-139.178.68.195:39556.service: Deactivated successfully. Sep 6 00:22:27.416229 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:22:27.417509 systemd-logind[1726]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:22:27.418212 systemd-logind[1726]: Removed session 23. Sep 6 00:22:27.424086 systemd[1]: cri-containerd-d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9.scope: Deactivated successfully. Sep 6 00:22:27.440196 systemd[1]: Started sshd@23-172.31.31.235:22-139.178.68.195:39564.service. 
Sep 6 00:22:27.456845 env[1744]: time="2025-09-06T00:22:27.456393116Z" level=info msg="shim disconnected" id=d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9 Sep 6 00:22:27.456845 env[1744]: time="2025-09-06T00:22:27.456439110Z" level=warning msg="cleaning up after shim disconnected" id=d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9 namespace=k8s.io Sep 6 00:22:27.456845 env[1744]: time="2025-09-06T00:22:27.456448084Z" level=info msg="cleaning up dead shim" Sep 6 00:22:27.469510 env[1744]: time="2025-09-06T00:22:27.469465992Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4468 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:22:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:22:27.469984 env[1744]: time="2025-09-06T00:22:27.469893947Z" level=error msg="copy shim log" error="read /proc/self/fd/33: file already closed" Sep 6 00:22:27.470346 env[1744]: time="2025-09-06T00:22:27.470292924Z" level=error msg="Failed to pipe stderr of container \"d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9\"" error="reading from a closed fifo" Sep 6 00:22:27.470432 env[1744]: time="2025-09-06T00:22:27.470372022Z" level=error msg="Failed to pipe stdout of container \"d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9\"" error="reading from a closed fifo" Sep 6 00:22:27.473864 env[1744]: time="2025-09-06T00:22:27.473805353Z" level=error msg="StartContainer for \"d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:22:27.474376 kubelet[2579]: E0906 00:22:27.474224 2579 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9" Sep 6 00:22:27.486608 kubelet[2579]: E0906 00:22:27.486534 2579 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 6 00:22:27.486608 kubelet[2579]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:22:27.486608 kubelet[2579]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:22:27.486608 kubelet[2579]: rm /hostbin/cilium-mount Sep 6 00:22:27.486846 kubelet[2579]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjvmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-cpqzh_kube-system(42283bf1-a286-417a-9e3e-d9146ff319e5): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:22:27.486846 kubelet[2579]: > logger="UnhandledError" Sep 6 00:22:27.488757 kubelet[2579]: E0906 00:22:27.488710 2579 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cpqzh" podUID="42283bf1-a286-417a-9e3e-d9146ff319e5" Sep 6 00:22:27.593197 sshd[4466]: Accepted publickey for core from 139.178.68.195 port 39564 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:22:27.594788 sshd[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:27.600335 systemd-logind[1726]: New session 24 of user core. Sep 6 00:22:27.600965 systemd[1]: Started session-24.scope. Sep 6 00:22:28.100245 env[1744]: time="2025-09-06T00:22:28.100203700Z" level=info msg="StopPodSandbox for \"8a1e2d200bb198149e061f7d61c441e505db7bcdc6c3599fe9c49b51f73f6ec9\"" Sep 6 00:22:28.100430 env[1744]: time="2025-09-06T00:22:28.100267023Z" level=info msg="Container to stop \"d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:22:28.111115 systemd[1]: cri-containerd-8a1e2d200bb198149e061f7d61c441e505db7bcdc6c3599fe9c49b51f73f6ec9.scope: Deactivated successfully. Sep 6 00:22:28.120916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a1e2d200bb198149e061f7d61c441e505db7bcdc6c3599fe9c49b51f73f6ec9-shm.mount: Deactivated successfully. Sep 6 00:22:28.156103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a1e2d200bb198149e061f7d61c441e505db7bcdc6c3599fe9c49b51f73f6ec9-rootfs.mount: Deactivated successfully. 
Sep 6 00:22:28.182137 env[1744]: time="2025-09-06T00:22:28.182088955Z" level=info msg="shim disconnected" id=8a1e2d200bb198149e061f7d61c441e505db7bcdc6c3599fe9c49b51f73f6ec9 Sep 6 00:22:28.182137 env[1744]: time="2025-09-06T00:22:28.182136439Z" level=warning msg="cleaning up after shim disconnected" id=8a1e2d200bb198149e061f7d61c441e505db7bcdc6c3599fe9c49b51f73f6ec9 namespace=k8s.io Sep 6 00:22:28.182137 env[1744]: time="2025-09-06T00:22:28.182146107Z" level=info msg="cleaning up dead shim" Sep 6 00:22:28.191306 env[1744]: time="2025-09-06T00:22:28.191245743Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4508 runtime=io.containerd.runc.v2\n" Sep 6 00:22:28.191634 env[1744]: time="2025-09-06T00:22:28.191549467Z" level=info msg="TearDown network for sandbox \"8a1e2d200bb198149e061f7d61c441e505db7bcdc6c3599fe9c49b51f73f6ec9\" successfully" Sep 6 00:22:28.191634 env[1744]: time="2025-09-06T00:22:28.191608545Z" level=info msg="StopPodSandbox for \"8a1e2d200bb198149e061f7d61c441e505db7bcdc6c3599fe9c49b51f73f6ec9\" returns successfully" Sep 6 00:22:28.314506 kubelet[2579]: I0906 00:22:28.314472 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-cni-path\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.314506 kubelet[2579]: I0906 00:22:28.314508 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-hostproc\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314525 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-bpf-maps\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314539 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-xtables-lock\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314579 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-run\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314593 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-lib-modules\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314616 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjvmd\" (UniqueName: \"kubernetes.io/projected/42283bf1-a286-417a-9e3e-d9146ff319e5-kube-api-access-gjvmd\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314634 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-host-proc-sys-net\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314650 2579 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-etc-cni-netd\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314667 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42283bf1-a286-417a-9e3e-d9146ff319e5-hubble-tls\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314680 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-cgroup\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314698 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42283bf1-a286-417a-9e3e-d9146ff319e5-clustermesh-secrets\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314713 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-ipsec-secrets\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314731 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-config-path\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" 
(UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314754 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-host-proc-sys-kernel\") pod \"42283bf1-a286-417a-9e3e-d9146ff319e5\" (UID: \"42283bf1-a286-417a-9e3e-d9146ff319e5\") " Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314816 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:28.315033 kubelet[2579]: I0906 00:22:28.314842 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-cni-path" (OuterVolumeSpecName: "cni-path") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:28.315497 kubelet[2579]: I0906 00:22:28.314856 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-hostproc" (OuterVolumeSpecName: "hostproc") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:28.315497 kubelet[2579]: I0906 00:22:28.314868 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:28.315497 kubelet[2579]: I0906 00:22:28.314880 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:28.315497 kubelet[2579]: I0906 00:22:28.314893 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:28.315497 kubelet[2579]: I0906 00:22:28.314912 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:28.316207 kubelet[2579]: I0906 00:22:28.315762 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:28.316207 kubelet[2579]: I0906 00:22:28.316083 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:28.316207 kubelet[2579]: I0906 00:22:28.316109 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:22:28.319297 systemd[1]: var-lib-kubelet-pods-42283bf1\x2da286\x2d417a\x2d9e3e\x2dd9146ff319e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgjvmd.mount: Deactivated successfully. Sep 6 00:22:28.321313 kubelet[2579]: I0906 00:22:28.321277 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42283bf1-a286-417a-9e3e-d9146ff319e5-kube-api-access-gjvmd" (OuterVolumeSpecName: "kube-api-access-gjvmd") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "kube-api-access-gjvmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:22:28.323671 systemd[1]: var-lib-kubelet-pods-42283bf1\x2da286\x2d417a\x2d9e3e\x2dd9146ff319e5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:22:28.324909 kubelet[2579]: I0906 00:22:28.324882 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42283bf1-a286-417a-9e3e-d9146ff319e5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:22:28.326165 kubelet[2579]: I0906 00:22:28.326130 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:22:28.328930 systemd[1]: var-lib-kubelet-pods-42283bf1\x2da286\x2d417a\x2d9e3e\x2dd9146ff319e5-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 00:22:28.329518 kubelet[2579]: I0906 00:22:28.329491 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:22:28.330991 kubelet[2579]: I0906 00:22:28.330968 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42283bf1-a286-417a-9e3e-d9146ff319e5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "42283bf1-a286-417a-9e3e-d9146ff319e5" (UID: "42283bf1-a286-417a-9e3e-d9146ff319e5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:22:28.415678 kubelet[2579]: I0906 00:22:28.415485 2579 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-host-proc-sys-net\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.415678 kubelet[2579]: I0906 00:22:28.415520 2579 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-etc-cni-netd\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.415678 kubelet[2579]: I0906 00:22:28.415530 2579 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-cgroup\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.415678 kubelet[2579]: I0906 00:22:28.415538 2579 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42283bf1-a286-417a-9e3e-d9146ff319e5-clustermesh-secrets\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.415678 kubelet[2579]: I0906 00:22:28.415547 2579 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-ipsec-secrets\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.415678 kubelet[2579]: I0906 00:22:28.415585 2579 reconciler_common.go:293] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42283bf1-a286-417a-9e3e-d9146ff319e5-hubble-tls\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.415678 kubelet[2579]: I0906 00:22:28.415596 2579 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-config-path\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.415678 kubelet[2579]: I0906 00:22:28.415604 2579 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-host-proc-sys-kernel\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.415678 kubelet[2579]: I0906 00:22:28.415613 2579 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-cni-path\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.415678 kubelet[2579]: I0906 00:22:28.415621 2579 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-hostproc\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.415678 kubelet[2579]: I0906 00:22:28.415629 2579 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-bpf-maps\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.415678 kubelet[2579]: I0906 00:22:28.415636 2579 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-cilium-run\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.417322 kubelet[2579]: I0906 00:22:28.415645 2579 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-xtables-lock\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.417439 kubelet[2579]: I0906 00:22:28.417339 2579 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42283bf1-a286-417a-9e3e-d9146ff319e5-lib-modules\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.417439 kubelet[2579]: I0906 00:22:28.417351 2579 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjvmd\" (UniqueName: \"kubernetes.io/projected/42283bf1-a286-417a-9e3e-d9146ff319e5-kube-api-access-gjvmd\") on node \"ip-172-31-31-235\" DevicePath \"\"" Sep 6 00:22:28.764301 systemd[1]: Removed slice kubepods-burstable-pod42283bf1_a286_417a_9e3e_d9146ff319e5.slice. Sep 6 00:22:28.895167 kubelet[2579]: E0906 00:22:28.895120 2579 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:22:29.103097 kubelet[2579]: I0906 00:22:29.103046 2579 scope.go:117] "RemoveContainer" containerID="d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9" Sep 6 00:22:29.106577 env[1744]: time="2025-09-06T00:22:29.106526332Z" level=info msg="RemoveContainer for \"d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9\"" Sep 6 00:22:29.111999 env[1744]: time="2025-09-06T00:22:29.111950024Z" level=info msg="RemoveContainer for \"d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9\" returns successfully" Sep 6 00:22:29.114019 systemd[1]: var-lib-kubelet-pods-42283bf1\x2da286\x2d417a\x2d9e3e\x2dd9146ff319e5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 6 00:22:29.149412 kubelet[2579]: E0906 00:22:29.149360 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42283bf1-a286-417a-9e3e-d9146ff319e5" containerName="mount-cgroup" Sep 6 00:22:29.149412 kubelet[2579]: I0906 00:22:29.149420 2579 memory_manager.go:354] "RemoveStaleState removing state" podUID="42283bf1-a286-417a-9e3e-d9146ff319e5" containerName="mount-cgroup" Sep 6 00:22:29.156548 systemd[1]: Created slice kubepods-burstable-poda1d98529_a5cb_4bed_a1e0_0b75b87fffa5.slice. Sep 6 00:22:29.222924 kubelet[2579]: I0906 00:22:29.222889 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-hubble-tls\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.223132 kubelet[2579]: I0906 00:22:29.223105 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwht5\" (UniqueName: \"kubernetes.io/projected/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-kube-api-access-jwht5\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.223328 kubelet[2579]: I0906 00:22:29.223136 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-lib-modules\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.223328 kubelet[2579]: I0906 00:22:29.223152 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-xtables-lock\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" 
Sep 6 00:22:29.223328 kubelet[2579]: I0906 00:22:29.223167 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-host-proc-sys-kernel\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.223328 kubelet[2579]: I0906 00:22:29.223183 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-clustermesh-secrets\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.223328 kubelet[2579]: I0906 00:22:29.223197 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-hostproc\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.223328 kubelet[2579]: I0906 00:22:29.223214 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-cilium-run\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.223328 kubelet[2579]: I0906 00:22:29.223228 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-cilium-config-path\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.223328 kubelet[2579]: I0906 00:22:29.223250 2579 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-host-proc-sys-net\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.223328 kubelet[2579]: I0906 00:22:29.223287 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-bpf-maps\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.223328 kubelet[2579]: I0906 00:22:29.223305 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-cilium-cgroup\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.223328 kubelet[2579]: I0906 00:22:29.223323 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-cni-path\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.223671 kubelet[2579]: I0906 00:22:29.223339 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-etc-cni-netd\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.223671 kubelet[2579]: I0906 00:22:29.223353 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/a1d98529-a5cb-4bed-a1e0-0b75b87fffa5-cilium-ipsec-secrets\") pod \"cilium-lgznm\" (UID: \"a1d98529-a5cb-4bed-a1e0-0b75b87fffa5\") " pod="kube-system/cilium-lgznm" Sep 6 00:22:29.465489 env[1744]: time="2025-09-06T00:22:29.464728603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lgznm,Uid:a1d98529-a5cb-4bed-a1e0-0b75b87fffa5,Namespace:kube-system,Attempt:0,}" Sep 6 00:22:29.489478 env[1744]: time="2025-09-06T00:22:29.489378437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:29.489478 env[1744]: time="2025-09-06T00:22:29.489434242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:29.489792 env[1744]: time="2025-09-06T00:22:29.489451662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:29.489792 env[1744]: time="2025-09-06T00:22:29.489731582Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2175478980b7988b82d352b1cc1afcdbb978b3daafa60350d21785d243ac0fd9 pid=4541 runtime=io.containerd.runc.v2 Sep 6 00:22:29.507300 systemd[1]: Started cri-containerd-2175478980b7988b82d352b1cc1afcdbb978b3daafa60350d21785d243ac0fd9.scope. 
Sep 6 00:22:29.537336 env[1744]: time="2025-09-06T00:22:29.537288384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lgznm,Uid:a1d98529-a5cb-4bed-a1e0-0b75b87fffa5,Namespace:kube-system,Attempt:0,} returns sandbox id \"2175478980b7988b82d352b1cc1afcdbb978b3daafa60350d21785d243ac0fd9\"" Sep 6 00:22:29.546324 env[1744]: time="2025-09-06T00:22:29.545800910Z" level=info msg="CreateContainer within sandbox \"2175478980b7988b82d352b1cc1afcdbb978b3daafa60350d21785d243ac0fd9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:22:29.567918 env[1744]: time="2025-09-06T00:22:29.567854679Z" level=info msg="CreateContainer within sandbox \"2175478980b7988b82d352b1cc1afcdbb978b3daafa60350d21785d243ac0fd9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"519b8b3e2875b88e06bff2f7608a1aa502be1f2932859ada0147281abafe8562\"" Sep 6 00:22:29.573728 env[1744]: time="2025-09-06T00:22:29.568483032Z" level=info msg="StartContainer for \"519b8b3e2875b88e06bff2f7608a1aa502be1f2932859ada0147281abafe8562\"" Sep 6 00:22:29.590048 systemd[1]: Started cri-containerd-519b8b3e2875b88e06bff2f7608a1aa502be1f2932859ada0147281abafe8562.scope. Sep 6 00:22:29.679207 env[1744]: time="2025-09-06T00:22:29.679143085Z" level=info msg="StartContainer for \"519b8b3e2875b88e06bff2f7608a1aa502be1f2932859ada0147281abafe8562\" returns successfully" Sep 6 00:22:29.714169 systemd[1]: cri-containerd-519b8b3e2875b88e06bff2f7608a1aa502be1f2932859ada0147281abafe8562.scope: Deactivated successfully. 
Sep 6 00:22:29.762076 env[1744]: time="2025-09-06T00:22:29.761948343Z" level=info msg="shim disconnected" id=519b8b3e2875b88e06bff2f7608a1aa502be1f2932859ada0147281abafe8562 Sep 6 00:22:29.762076 env[1744]: time="2025-09-06T00:22:29.761995275Z" level=warning msg="cleaning up after shim disconnected" id=519b8b3e2875b88e06bff2f7608a1aa502be1f2932859ada0147281abafe8562 namespace=k8s.io Sep 6 00:22:29.762076 env[1744]: time="2025-09-06T00:22:29.762005090Z" level=info msg="cleaning up dead shim" Sep 6 00:22:29.770923 env[1744]: time="2025-09-06T00:22:29.770876163Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4628 runtime=io.containerd.runc.v2\n" Sep 6 00:22:30.110418 env[1744]: time="2025-09-06T00:22:30.110376566Z" level=info msg="CreateContainer within sandbox \"2175478980b7988b82d352b1cc1afcdbb978b3daafa60350d21785d243ac0fd9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:22:30.144218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851922689.mount: Deactivated successfully. Sep 6 00:22:30.155808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1006442846.mount: Deactivated successfully. Sep 6 00:22:30.165517 env[1744]: time="2025-09-06T00:22:30.165455684Z" level=info msg="CreateContainer within sandbox \"2175478980b7988b82d352b1cc1afcdbb978b3daafa60350d21785d243ac0fd9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3b69634da6801dbb11ff1d28271a4afe96c1cb3ba190144f3e2a89a8d239b6fa\"" Sep 6 00:22:30.168084 env[1744]: time="2025-09-06T00:22:30.166410062Z" level=info msg="StartContainer for \"3b69634da6801dbb11ff1d28271a4afe96c1cb3ba190144f3e2a89a8d239b6fa\"" Sep 6 00:22:30.189640 systemd[1]: Started cri-containerd-3b69634da6801dbb11ff1d28271a4afe96c1cb3ba190144f3e2a89a8d239b6fa.scope. 
Sep 6 00:22:30.225830 env[1744]: time="2025-09-06T00:22:30.225773790Z" level=info msg="StartContainer for \"3b69634da6801dbb11ff1d28271a4afe96c1cb3ba190144f3e2a89a8d239b6fa\" returns successfully" Sep 6 00:22:30.238080 systemd[1]: cri-containerd-3b69634da6801dbb11ff1d28271a4afe96c1cb3ba190144f3e2a89a8d239b6fa.scope: Deactivated successfully. Sep 6 00:22:30.274364 env[1744]: time="2025-09-06T00:22:30.274313690Z" level=info msg="shim disconnected" id=3b69634da6801dbb11ff1d28271a4afe96c1cb3ba190144f3e2a89a8d239b6fa Sep 6 00:22:30.274732 env[1744]: time="2025-09-06T00:22:30.274710118Z" level=warning msg="cleaning up after shim disconnected" id=3b69634da6801dbb11ff1d28271a4afe96c1cb3ba190144f3e2a89a8d239b6fa namespace=k8s.io Sep 6 00:22:30.274867 env[1744]: time="2025-09-06T00:22:30.274808376Z" level=info msg="cleaning up dead shim" Sep 6 00:22:30.283238 env[1744]: time="2025-09-06T00:22:30.283191091Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4693 runtime=io.containerd.runc.v2\n" Sep 6 00:22:30.574470 kubelet[2579]: W0906 00:22:30.574319 2579 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42283bf1_a286_417a_9e3e_d9146ff319e5.slice/cri-containerd-d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9.scope WatchSource:0}: container "d016285d80403f21bb42e1c9c6078fb1b197c6e833b59f078903c55f2e3ebff9" in namespace "k8s.io": not found Sep 6 00:22:30.733235 kubelet[2579]: I0906 00:22:30.733187 2579 setters.go:600] "Node became not ready" node="ip-172-31-31-235" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:22:30Z","lastTransitionTime":"2025-09-06T00:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:22:30.760079 
kubelet[2579]: I0906 00:22:30.759984 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42283bf1-a286-417a-9e3e-d9146ff319e5" path="/var/lib/kubelet/pods/42283bf1-a286-417a-9e3e-d9146ff319e5/volumes" Sep 6 00:22:31.118412 env[1744]: time="2025-09-06T00:22:31.118378165Z" level=info msg="CreateContainer within sandbox \"2175478980b7988b82d352b1cc1afcdbb978b3daafa60350d21785d243ac0fd9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:22:31.140839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3293691227.mount: Deactivated successfully. Sep 6 00:22:31.157322 env[1744]: time="2025-09-06T00:22:31.157267246Z" level=info msg="CreateContainer within sandbox \"2175478980b7988b82d352b1cc1afcdbb978b3daafa60350d21785d243ac0fd9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e96f162c7dafe79a1e87e4354b419d3a93829a9b03f3aae1f55a124975c540a8\"" Sep 6 00:22:31.159229 env[1744]: time="2025-09-06T00:22:31.157982541Z" level=info msg="StartContainer for \"e96f162c7dafe79a1e87e4354b419d3a93829a9b03f3aae1f55a124975c540a8\"" Sep 6 00:22:31.182231 systemd[1]: Started cri-containerd-e96f162c7dafe79a1e87e4354b419d3a93829a9b03f3aae1f55a124975c540a8.scope. Sep 6 00:22:31.218753 env[1744]: time="2025-09-06T00:22:31.218695489Z" level=info msg="StartContainer for \"e96f162c7dafe79a1e87e4354b419d3a93829a9b03f3aae1f55a124975c540a8\" returns successfully" Sep 6 00:22:31.225155 systemd[1]: cri-containerd-e96f162c7dafe79a1e87e4354b419d3a93829a9b03f3aae1f55a124975c540a8.scope: Deactivated successfully. 
Sep 6 00:22:31.263003 env[1744]: time="2025-09-06T00:22:31.262949332Z" level=info msg="shim disconnected" id=e96f162c7dafe79a1e87e4354b419d3a93829a9b03f3aae1f55a124975c540a8
Sep 6 00:22:31.263003 env[1744]: time="2025-09-06T00:22:31.262998270Z" level=warning msg="cleaning up after shim disconnected" id=e96f162c7dafe79a1e87e4354b419d3a93829a9b03f3aae1f55a124975c540a8 namespace=k8s.io
Sep 6 00:22:31.263003 env[1744]: time="2025-09-06T00:22:31.263008115Z" level=info msg="cleaning up dead shim"
Sep 6 00:22:31.273328 env[1744]: time="2025-09-06T00:22:31.273258251Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4750 runtime=io.containerd.runc.v2\n"
Sep 6 00:22:32.114423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e96f162c7dafe79a1e87e4354b419d3a93829a9b03f3aae1f55a124975c540a8-rootfs.mount: Deactivated successfully.
Sep 6 00:22:32.122540 env[1744]: time="2025-09-06T00:22:32.122489317Z" level=info msg="CreateContainer within sandbox \"2175478980b7988b82d352b1cc1afcdbb978b3daafa60350d21785d243ac0fd9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:22:32.159208 env[1744]: time="2025-09-06T00:22:32.159160198Z" level=info msg="CreateContainer within sandbox \"2175478980b7988b82d352b1cc1afcdbb978b3daafa60350d21785d243ac0fd9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0e81397c33b618a793e28d8e8e91f6c10075520a618025fbffdf6da58e1d4cfb\""
Sep 6 00:22:32.160186 env[1744]: time="2025-09-06T00:22:32.160155279Z" level=info msg="StartContainer for \"0e81397c33b618a793e28d8e8e91f6c10075520a618025fbffdf6da58e1d4cfb\""
Sep 6 00:22:32.186441 systemd[1]: Started cri-containerd-0e81397c33b618a793e28d8e8e91f6c10075520a618025fbffdf6da58e1d4cfb.scope.
Sep 6 00:22:32.215588 systemd[1]: cri-containerd-0e81397c33b618a793e28d8e8e91f6c10075520a618025fbffdf6da58e1d4cfb.scope: Deactivated successfully.
Sep 6 00:22:32.219154 env[1744]: time="2025-09-06T00:22:32.219106923Z" level=info msg="StartContainer for \"0e81397c33b618a793e28d8e8e91f6c10075520a618025fbffdf6da58e1d4cfb\" returns successfully"
Sep 6 00:22:32.249355 env[1744]: time="2025-09-06T00:22:32.249264763Z" level=info msg="shim disconnected" id=0e81397c33b618a793e28d8e8e91f6c10075520a618025fbffdf6da58e1d4cfb
Sep 6 00:22:32.249355 env[1744]: time="2025-09-06T00:22:32.249313857Z" level=warning msg="cleaning up after shim disconnected" id=0e81397c33b618a793e28d8e8e91f6c10075520a618025fbffdf6da58e1d4cfb namespace=k8s.io
Sep 6 00:22:32.249355 env[1744]: time="2025-09-06T00:22:32.249322952Z" level=info msg="cleaning up dead shim"
Sep 6 00:22:32.257874 env[1744]: time="2025-09-06T00:22:32.257825606Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4804 runtime=io.containerd.runc.v2\n"
Sep 6 00:22:33.114474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e81397c33b618a793e28d8e8e91f6c10075520a618025fbffdf6da58e1d4cfb-rootfs.mount: Deactivated successfully.
Sep 6 00:22:33.135989 env[1744]: time="2025-09-06T00:22:33.135944134Z" level=info msg="CreateContainer within sandbox \"2175478980b7988b82d352b1cc1afcdbb978b3daafa60350d21785d243ac0fd9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:22:33.169489 env[1744]: time="2025-09-06T00:22:33.169359699Z" level=info msg="CreateContainer within sandbox \"2175478980b7988b82d352b1cc1afcdbb978b3daafa60350d21785d243ac0fd9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1278a14d92a359adb4ec34ed330914f8a07d0a2e7b5e2f26cd77197490030c3f\""
Sep 6 00:22:33.171151 env[1744]: time="2025-09-06T00:22:33.170128996Z" level=info msg="StartContainer for \"1278a14d92a359adb4ec34ed330914f8a07d0a2e7b5e2f26cd77197490030c3f\""
Sep 6 00:22:33.194992 systemd[1]: Started cri-containerd-1278a14d92a359adb4ec34ed330914f8a07d0a2e7b5e2f26cd77197490030c3f.scope.
Sep 6 00:22:33.238093 env[1744]: time="2025-09-06T00:22:33.238049396Z" level=info msg="StartContainer for \"1278a14d92a359adb4ec34ed330914f8a07d0a2e7b5e2f26cd77197490030c3f\" returns successfully"
Sep 6 00:22:33.687355 kubelet[2579]: W0906 00:22:33.687301 2579 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1d98529_a5cb_4bed_a1e0_0b75b87fffa5.slice/cri-containerd-519b8b3e2875b88e06bff2f7608a1aa502be1f2932859ada0147281abafe8562.scope WatchSource:0}: task 519b8b3e2875b88e06bff2f7608a1aa502be1f2932859ada0147281abafe8562 not found: not found
Sep 6 00:22:33.984588 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 6 00:22:34.164098 kubelet[2579]: I0906 00:22:34.164018 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lgznm" podStartSLOduration=5.163998631 podStartE2EDuration="5.163998631s" podCreationTimestamp="2025-09-06 00:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:22:34.163736531 +0000 UTC m=+105.589861257" watchObservedRunningTime="2025-09-06 00:22:34.163998631 +0000 UTC m=+105.590123351"
Sep 6 00:22:36.794853 kubelet[2579]: W0906 00:22:36.794256 2579 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1d98529_a5cb_4bed_a1e0_0b75b87fffa5.slice/cri-containerd-3b69634da6801dbb11ff1d28271a4afe96c1cb3ba190144f3e2a89a8d239b6fa.scope WatchSource:0}: task 3b69634da6801dbb11ff1d28271a4afe96c1cb3ba190144f3e2a89a8d239b6fa not found: not found
Sep 6 00:22:36.969711 (udev-worker)[4899]: Network interface NamePolicy= disabled on kernel command line.
Sep 6 00:22:36.970594 (udev-worker)[5376]: Network interface NamePolicy= disabled on kernel command line.
Sep 6 00:22:36.972172 systemd-networkd[1466]: lxc_health: Link UP
Sep 6 00:22:36.994482 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:22:36.991778 systemd-networkd[1466]: lxc_health: Gained carrier
Sep 6 00:22:38.231451 systemd-networkd[1466]: lxc_health: Gained IPv6LL
Sep 6 00:22:38.244291 systemd[1]: run-containerd-runc-k8s.io-1278a14d92a359adb4ec34ed330914f8a07d0a2e7b5e2f26cd77197490030c3f-runc.bHri1t.mount: Deactivated successfully.
Sep 6 00:22:39.904093 kubelet[2579]: W0906 00:22:39.904046 2579 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1d98529_a5cb_4bed_a1e0_0b75b87fffa5.slice/cri-containerd-e96f162c7dafe79a1e87e4354b419d3a93829a9b03f3aae1f55a124975c540a8.scope WatchSource:0}: task e96f162c7dafe79a1e87e4354b419d3a93829a9b03f3aae1f55a124975c540a8 not found: not found
Sep 6 00:22:43.016683 kubelet[2579]: W0906 00:22:43.016525 2579 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1d98529_a5cb_4bed_a1e0_0b75b87fffa5.slice/cri-containerd-0e81397c33b618a793e28d8e8e91f6c10075520a618025fbffdf6da58e1d4cfb.scope WatchSource:0}: task 0e81397c33b618a793e28d8e8e91f6c10075520a618025fbffdf6da58e1d4cfb not found: not found
Sep 6 00:22:44.937020 sshd[4466]: pam_unix(sshd:session): session closed for user core
Sep 6 00:22:44.940066 systemd[1]: sshd@23-172.31.31.235:22-139.178.68.195:39564.service: Deactivated successfully.
Sep 6 00:22:44.940843 systemd[1]: session-24.scope: Deactivated successfully.
Sep 6 00:22:44.941540 systemd-logind[1726]: Session 24 logged out. Waiting for processes to exit.
Sep 6 00:22:44.942619 systemd-logind[1726]: Removed session 24.