Sep 13 00:49:20.007859 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025 Sep 13 00:49:20.007903 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:49:20.007924 kernel: BIOS-provided physical RAM map: Sep 13 00:49:20.007936 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 13 00:49:20.007947 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Sep 13 00:49:20.007958 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 13 00:49:20.007972 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 13 00:49:20.007985 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 13 00:49:20.007999 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 13 00:49:20.008011 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 13 00:49:20.008022 kernel: NX (Execute Disable) protection: active Sep 13 00:49:20.008034 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable Sep 13 00:49:20.008047 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable Sep 13 00:49:20.008059 kernel: extended physical RAM map: Sep 13 00:49:20.008076 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 13 00:49:20.008089 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable Sep 13 00:49:20.008101 kernel: reserve setup_data: [mem 0x0000000076813018-0x000000007681be57] usable Sep 13 00:49:20.008114 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable Sep 13 00:49:20.008127 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 13 00:49:20.008140 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 13 00:49:20.008154 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 13 00:49:20.008166 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 13 00:49:20.008179 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 13 00:49:20.008192 kernel: efi: EFI v2.70 by EDK II Sep 13 00:49:20.008207 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77004a98 Sep 13 00:49:20.008219 kernel: SMBIOS 2.7 present. 
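
The BIOS-e820 lines above are the firmware's map of which physical memory ranges are usable, reserved, or ACPI data/NVS. As an illustration of that format only (the regex and the helper name usable_bytes are ours, not anything the boot itself uses), a small Python sketch that parses such lines and totals the ranges marked usable:

import re

# Each entry in the map above has the form:
#   BIOS-e820: [mem 0x<start>-0x<end>] <type>
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_lines):
    """Sum the sizes of all ranges the firmware marked 'usable'."""
    total = 0
    for line in log_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1  # ranges are inclusive
    return total

print(usable_bytes([
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable",
]) // 1024, "KiB usable")
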
Sep 13 00:49:20.008232 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Sep 13 00:49:20.008245 kernel: Hypervisor detected: KVM Sep 13 00:49:20.008257 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 13 00:49:20.008270 kernel: kvm-clock: cpu 0, msr 5119f001, primary cpu clock Sep 13 00:49:20.008282 kernel: kvm-clock: using sched offset of 4108376880 cycles Sep 13 00:49:20.008296 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 13 00:49:20.008309 kernel: tsc: Detected 2500.004 MHz processor Sep 13 00:49:20.008322 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 13 00:49:20.008336 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 13 00:49:20.008352 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Sep 13 00:49:20.008365 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 13 00:49:20.008378 kernel: Using GB pages for direct mapping Sep 13 00:49:20.008391 kernel: Secure boot disabled Sep 13 00:49:20.008404 kernel: ACPI: Early table checksum verification disabled Sep 13 00:49:20.008423 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Sep 13 00:49:20.008438 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Sep 13 00:49:20.008454 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 13 00:49:20.008469 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Sep 13 00:49:20.008483 kernel: ACPI: FACS 0x00000000789D0000 000040 Sep 13 00:49:20.008497 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Sep 13 00:49:20.008512 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 13 00:49:20.008526 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 13 00:49:20.008540 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Sep 13 00:49:20.008557 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Sep 13 00:49:20.008571 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 13 00:49:20.008585 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 13 00:49:20.008600 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Sep 13 00:49:20.008614 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Sep 13 00:49:20.008628 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Sep 13 00:49:20.008643 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Sep 13 00:49:20.008657 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Sep 13 00:49:20.008670 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Sep 13 00:49:20.008684 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Sep 13 00:49:20.008695 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Sep 13 00:49:20.008707 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Sep 13 00:49:20.008720 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Sep 13 00:49:20.008732 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Sep 13 00:49:20.008744 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Sep 13 
00:49:20.008769 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 13 00:49:20.008782 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 13 00:49:20.008795 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Sep 13 00:49:20.008810 kernel: NUMA: Initialized distance table, cnt=1 Sep 13 00:49:20.008823 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] Sep 13 00:49:20.008835 kernel: Zone ranges: Sep 13 00:49:20.008848 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:49:20.008860 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Sep 13 00:49:20.008873 kernel: Normal empty Sep 13 00:49:20.008886 kernel: Movable zone start for each node Sep 13 00:49:20.008898 kernel: Early memory node ranges Sep 13 00:49:20.008910 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 13 00:49:20.008925 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Sep 13 00:49:20.008938 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Sep 13 00:49:20.008951 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Sep 13 00:49:20.008963 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:49:20.008975 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 13 00:49:20.008987 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 13 00:49:20.009000 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Sep 13 00:49:20.009012 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 13 00:49:20.009025 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 13 00:49:20.009041 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Sep 13 00:49:20.009054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 13 00:49:20.009066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 00:49:20.009078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 13 00:49:20.009090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 13 00:49:20.009103 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:49:20.009115 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 13 00:49:20.009127 kernel: TSC deadline timer available Sep 13 00:49:20.009140 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 13 00:49:20.009154 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Sep 13 00:49:20.009167 kernel: Booting paravirtualized kernel on KVM Sep 13 00:49:20.009179 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:49:20.009192 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Sep 13 00:49:20.009205 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Sep 13 00:49:20.009218 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Sep 13 00:49:20.009230 kernel: pcpu-alloc: [0] 0 1 Sep 13 00:49:20.009241 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0 Sep 13 00:49:20.009252 kernel: kvm-guest: PV spinlocks enabled Sep 13 00:49:20.009265 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 13 00:49:20.009277 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 Sep 13 00:49:20.009287 kernel: Policy zone: DMA32 Sep 13 00:49:20.009300 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:49:20.009313 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:49:20.009326 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:49:20.009338 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 13 00:49:20.009350 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:49:20.009367 kernel: Memory: 1876640K/2037804K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 160904K reserved, 0K cma-reserved) Sep 13 00:49:20.009380 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 13 00:49:20.009392 kernel: Kernel/User page tables isolation: enabled Sep 13 00:49:20.009404 kernel: ftrace: allocating 34614 entries in 136 pages Sep 13 00:49:20.009416 kernel: ftrace: allocated 136 pages with 2 groups Sep 13 00:49:20.009429 kernel: rcu: Hierarchical RCU implementation. Sep 13 00:49:20.009444 kernel: rcu: RCU event tracing is enabled. Sep 13 00:49:20.009469 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 13 00:49:20.009482 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:49:20.009495 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:49:20.009508 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 13 00:49:20.009521 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 13 00:49:20.009538 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 13 00:49:20.009550 kernel: random: crng init done Sep 13 00:49:20.009564 kernel: Console: colour dummy device 80x25 Sep 13 00:49:20.009576 kernel: printk: console [tty0] enabled Sep 13 00:49:20.009589 kernel: printk: console [ttyS0] enabled Sep 13 00:49:20.009602 kernel: ACPI: Core revision 20210730 Sep 13 00:49:20.009615 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Sep 13 00:49:20.009631 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:49:20.009645 kernel: x2apic enabled Sep 13 00:49:20.009658 kernel: Switched APIC routing to physical x2apic. Sep 13 00:49:20.009671 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Sep 13 00:49:20.009685 kernel: Calibrating delay loop (skipped) preset value.. 
5000.00 BogoMIPS (lpj=2500004) Sep 13 00:49:20.009698 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 13 00:49:20.009711 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 13 00:49:20.009726 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:49:20.009739 kernel: Spectre V2 : Mitigation: Retpolines Sep 13 00:49:20.009752 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 13 00:49:20.009774 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Sep 13 00:49:20.009788 kernel: RETBleed: Vulnerable Sep 13 00:49:20.009801 kernel: Speculative Store Bypass: Vulnerable Sep 13 00:49:20.009814 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Sep 13 00:49:20.009827 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 13 00:49:20.009839 kernel: GDS: Unknown: Dependent on hypervisor status Sep 13 00:49:20.009852 kernel: active return thunk: its_return_thunk Sep 13 00:49:20.009864 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 13 00:49:20.009880 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:49:20.009894 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:49:20.009907 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:49:20.009920 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 13 00:49:20.009933 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 13 00:49:20.009946 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 13 00:49:20.009959 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 13 00:49:20.009973 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 13 00:49:20.009986 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Sep 13 00:49:20.009998 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:49:20.010011 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 13 00:49:20.010027 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 13 00:49:20.010040 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Sep 13 00:49:20.010054 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Sep 13 00:49:20.010066 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Sep 13 00:49:20.010079 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Sep 13 00:49:20.010092 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Sep 13 00:49:20.010105 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:49:20.010118 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:49:20.010131 kernel: LSM: Security Framework initializing Sep 13 00:49:20.010143 kernel: SELinux: Initializing. Sep 13 00:49:20.010157 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 13 00:49:20.010171 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 13 00:49:20.010185 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 13 00:49:20.010198 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 13 00:49:20.010211 kernel: signal: max sigframe size: 3632 Sep 13 00:49:20.010224 kernel: rcu: Hierarchical SRCU implementation. 
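
The Spectre, RETBleed, MDS and MMIO Stale Data lines above are the kernel's boot-time report of CPU vulnerability mitigations on this instance type. A minimal sketch, assuming the standard /sys/devices/system/cpu/vulnerabilities sysfs directory (the function name is ours), that reads the same status from a running system:

from pathlib import Path

def cpu_vulnerabilities(root="/sys/devices/system/cpu/vulnerabilities"):
    """Return the runtime view of the mitigations the kernel printed above."""
    return {p.name: p.read_text().strip() for p in sorted(Path(root).iterdir())}

for name, status in cpu_vulnerabilities().items():
    print(f"{name}: {status}")
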
Sep 13 00:49:20.010237 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 13 00:49:20.010250 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:49:20.010263 kernel: x86: Booting SMP configuration: Sep 13 00:49:20.010276 kernel: .... node #0, CPUs: #1 Sep 13 00:49:20.010289 kernel: kvm-clock: cpu 1, msr 5119f041, secondary cpu clock Sep 13 00:49:20.010305 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0 Sep 13 00:49:20.010318 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 13 00:49:20.010332 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 13 00:49:20.010345 kernel: smp: Brought up 1 node, 2 CPUs Sep 13 00:49:20.010359 kernel: smpboot: Max logical packages: 1 Sep 13 00:49:20.010372 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS) Sep 13 00:49:20.010385 kernel: devtmpfs: initialized Sep 13 00:49:20.010398 kernel: x86/mm: Memory block size: 128MB Sep 13 00:49:20.010413 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Sep 13 00:49:20.010427 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:49:20.010441 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 13 00:49:20.010453 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:49:20.010465 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:49:20.010476 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:49:20.010488 kernel: audit: type=2000 audit(1757724560.576:1): state=initialized audit_enabled=0 res=1 Sep 13 00:49:20.010501 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:49:20.010514 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:49:20.010532 kernel: cpuidle: using governor menu Sep 13 00:49:20.010545 kernel: ACPI: bus type PCI registered Sep 13 00:49:20.010558 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:49:20.010570 kernel: dca service started, version 1.12.1 Sep 13 00:49:20.010583 kernel: PCI: Using configuration type 1 for base access Sep 13 00:49:20.010597 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 13 00:49:20.010610 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:49:20.010622 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:49:20.010635 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:49:20.010653 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:49:20.010668 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:49:20.010682 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 13 00:49:20.010695 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 13 00:49:20.010707 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 13 00:49:20.010721 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 13 00:49:20.010735 kernel: ACPI: Interpreter enabled Sep 13 00:49:20.010749 kernel: ACPI: PM: (supports S0 S5) Sep 13 00:49:20.010788 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:49:20.010804 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:49:20.010816 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 13 00:49:20.010828 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:49:20.015955 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:49:20.016116 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Sep 13 00:49:20.016137 kernel: acpiphp: Slot [3] registered Sep 13 00:49:20.016153 kernel: acpiphp: Slot [4] registered Sep 13 00:49:20.016169 kernel: acpiphp: Slot [5] registered Sep 13 00:49:20.016192 kernel: acpiphp: Slot [6] registered Sep 13 00:49:20.016207 kernel: acpiphp: Slot [7] registered Sep 13 00:49:20.016223 kernel: acpiphp: Slot [8] registered Sep 13 00:49:20.016237 kernel: acpiphp: Slot [9] registered Sep 13 00:49:20.016252 kernel: acpiphp: Slot [10] registered Sep 13 00:49:20.016268 kernel: acpiphp: Slot [11] registered Sep 13 00:49:20.016281 kernel: acpiphp: Slot [12] registered Sep 13 00:49:20.016296 kernel: acpiphp: Slot [13] registered Sep 13 00:49:20.016311 kernel: acpiphp: Slot [14] registered Sep 13 00:49:20.016329 kernel: acpiphp: Slot [15] registered Sep 13 00:49:20.016344 kernel: acpiphp: Slot [16] registered Sep 13 00:49:20.016359 kernel: acpiphp: Slot [17] registered Sep 13 00:49:20.016374 kernel: acpiphp: Slot [18] registered Sep 13 00:49:20.016388 kernel: acpiphp: Slot [19] registered Sep 13 00:49:20.016403 kernel: acpiphp: Slot [20] registered Sep 13 00:49:20.016418 kernel: acpiphp: Slot [21] registered Sep 13 00:49:20.016433 kernel: acpiphp: Slot [22] registered Sep 13 00:49:20.016448 kernel: acpiphp: Slot [23] registered Sep 13 00:49:20.016463 kernel: acpiphp: Slot [24] registered Sep 13 00:49:20.016480 kernel: acpiphp: Slot [25] registered Sep 13 00:49:20.016496 kernel: acpiphp: Slot [26] registered Sep 13 00:49:20.016510 kernel: acpiphp: Slot [27] registered Sep 13 00:49:20.016526 kernel: acpiphp: Slot [28] registered Sep 13 00:49:20.016540 kernel: acpiphp: Slot [29] registered Sep 13 00:49:20.016555 kernel: acpiphp: Slot [30] registered Sep 13 00:49:20.016570 kernel: acpiphp: Slot [31] registered Sep 13 00:49:20.016585 kernel: PCI host bridge to bus 0000:00 Sep 13 00:49:20.016717 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 00:49:20.016861 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 13 00:49:20.016980 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 00:49:20.017095 kernel: 
pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 13 00:49:20.017211 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Sep 13 00:49:20.017325 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:49:20.017473 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 13 00:49:20.017617 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 13 00:49:20.017756 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Sep 13 00:49:20.017923 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 13 00:49:20.018054 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Sep 13 00:49:20.018183 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Sep 13 00:49:20.018308 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Sep 13 00:49:20.018431 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Sep 13 00:49:20.018554 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Sep 13 00:49:20.018673 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Sep 13 00:49:20.018823 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Sep 13 00:49:20.018955 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Sep 13 00:49:20.019073 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 13 00:49:20.019192 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Sep 13 00:49:20.019332 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 00:49:20.019472 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 13 00:49:20.020982 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Sep 13 00:49:20.021157 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 13 00:49:20.021299 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Sep 13 00:49:20.021319 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 13 00:49:20.021335 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 13 00:49:20.021350 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 00:49:20.021373 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 13 00:49:20.021388 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 13 00:49:20.021403 kernel: iommu: Default domain type: Translated Sep 13 00:49:20.021418 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:49:20.021557 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Sep 13 00:49:20.021692 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:49:20.021844 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Sep 13 00:49:20.021863 kernel: vgaarb: loaded Sep 13 00:49:20.021883 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 00:49:20.021898 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 00:49:20.021914 kernel: PTP clock support registered Sep 13 00:49:20.021929 kernel: Registered efivars operations Sep 13 00:49:20.021944 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:49:20.021960 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:49:20.021975 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff] Sep 13 00:49:20.021990 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Sep 13 00:49:20.022005 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Sep 13 00:49:20.022023 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 13 00:49:20.022038 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Sep 13 00:49:20.022053 kernel: clocksource: Switched to clocksource kvm-clock Sep 13 00:49:20.022068 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:49:20.022084 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:49:20.022099 kernel: pnp: PnP ACPI init Sep 13 00:49:20.022114 kernel: pnp: PnP ACPI: found 5 devices Sep 13 00:49:20.022129 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:49:20.022145 kernel: NET: Registered PF_INET protocol family Sep 13 00:49:20.022162 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:49:20.022178 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 13 00:49:20.022193 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:49:20.022208 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 00:49:20.022223 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 13 00:49:20.022238 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 13 00:49:20.022254 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 00:49:20.022269 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 00:49:20.022284 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:49:20.022301 kernel: NET: Registered PF_XDP protocol family Sep 13 00:49:20.022447 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:49:20.022572 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:49:20.022690 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:49:20.022823 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 13 00:49:20.022947 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Sep 13 00:49:20.023082 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 13 00:49:20.023212 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Sep 13 00:49:20.023234 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:49:20.023248 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 13 00:49:20.023262 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Sep 13 00:49:20.023276 kernel: clocksource: Switched to clocksource tsc Sep 13 00:49:20.023289 kernel: Initialise system trusted keyrings Sep 13 00:49:20.023303 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 13 00:49:20.023316 kernel: Key type asymmetric registered Sep 13 00:49:20.023328 kernel: Asymmetric key parser 'x509' registered Sep 13 00:49:20.023344 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 249) Sep 13 00:49:20.023357 kernel: io scheduler mq-deadline registered Sep 13 00:49:20.023370 kernel: io scheduler kyber registered Sep 13 00:49:20.023384 kernel: io scheduler bfq registered Sep 13 00:49:20.023397 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:49:20.023410 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:49:20.023424 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:49:20.023437 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 13 00:49:20.023450 kernel: i8042: Warning: Keylock active Sep 13 00:49:20.023466 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 00:49:20.023479 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 00:49:20.023612 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 13 00:49:20.023733 kernel: rtc_cmos 00:00: registered as rtc0 Sep 13 00:49:20.023864 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:49:19 UTC (1757724559) Sep 13 00:49:20.023983 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 13 00:49:20.023999 kernel: intel_pstate: CPU model not supported Sep 13 00:49:20.024013 kernel: efifb: probing for efifb Sep 13 00:49:20.024029 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Sep 13 00:49:20.024042 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Sep 13 00:49:20.024055 kernel: efifb: scrolling: redraw Sep 13 00:49:20.024068 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 13 00:49:20.024081 kernel: Console: switching to colour frame buffer device 100x37 Sep 13 00:49:20.024095 kernel: fb0: EFI VGA frame buffer device Sep 13 00:49:20.024129 kernel: pstore: Registered efi as persistent store backend Sep 13 00:49:20.024146 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:49:20.024160 kernel: Segment Routing with IPv6 Sep 13 00:49:20.024176 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:49:20.024190 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:49:20.024204 kernel: Key type dns_resolver registered Sep 13 00:49:20.024218 kernel: IPI shorthand broadcast: enabled Sep 13 00:49:20.024233 kernel: sched_clock: Marking stable (354693227, 135461741)->(575124094, -84969126) Sep 13 00:49:20.024247 kernel: registered taskstats version 1 Sep 13 00:49:20.024260 kernel: Loading compiled-in X.509 certificates Sep 13 00:49:20.024274 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37' Sep 13 00:49:20.024287 kernel: Key type .fscrypt registered Sep 13 00:49:20.024303 kernel: Key type fscrypt-provisioning registered Sep 13 00:49:20.024318 kernel: pstore: Using crash dump compression: deflate Sep 13 00:49:20.024331 kernel: ima: No TPM chip found, activating TPM-bypass! 
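
The kernel registers three multiqueue I/O schedulers above (mq-deadline, kyber, bfq). A small sketch, assuming the usual /sys/block/<dev>/queue/scheduler sysfs format in which the active elevator is shown in brackets (the helper name is ours), to report which scheduler each block device ended up with:

from pathlib import Path

def active_io_schedulers():
    """Map each block device to its currently selected I/O scheduler."""
    result = {}
    for sched in Path("/sys/block").glob("*/queue/scheduler"):
        names = sched.read_text().split()
        # The active scheduler is bracketed, e.g. "[mq-deadline] kyber bfq none".
        active = next((n.strip("[]") for n in names if n.startswith("[")), "none")
        result[sched.parent.parent.name] = active
    return result

print(active_io_schedulers())
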
Sep 13 00:49:20.024345 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:49:20.024359 kernel: ima: No architecture policies found Sep 13 00:49:20.024372 kernel: clk: Disabling unused clocks Sep 13 00:49:20.024386 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 13 00:49:20.024400 kernel: Write protecting the kernel read-only data: 28672k Sep 13 00:49:20.024414 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 13 00:49:20.024431 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 13 00:49:20.024447 kernel: Run /init as init process Sep 13 00:49:20.024460 kernel: with arguments: Sep 13 00:49:20.024474 kernel: /init Sep 13 00:49:20.024487 kernel: with environment: Sep 13 00:49:20.024500 kernel: HOME=/ Sep 13 00:49:20.024514 kernel: TERM=linux Sep 13 00:49:20.024528 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:49:20.024544 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:49:20.024564 systemd[1]: Detected virtualization amazon. Sep 13 00:49:20.024578 systemd[1]: Detected architecture x86-64. Sep 13 00:49:20.024592 systemd[1]: Running in initrd. Sep 13 00:49:20.024606 systemd[1]: No hostname configured, using default hostname. Sep 13 00:49:20.024619 systemd[1]: Hostname set to . Sep 13 00:49:20.024634 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:49:20.024648 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:49:20.024665 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:49:20.024679 systemd[1]: Reached target cryptsetup.target. Sep 13 00:49:20.024693 systemd[1]: Reached target paths.target. Sep 13 00:49:20.024707 systemd[1]: Reached target slices.target. Sep 13 00:49:20.024721 systemd[1]: Reached target swap.target. Sep 13 00:49:20.024735 systemd[1]: Reached target timers.target. Sep 13 00:49:20.024753 systemd[1]: Listening on iscsid.socket. Sep 13 00:49:20.024778 systemd[1]: Listening on iscsiuio.socket. Sep 13 00:49:20.024793 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:49:20.024807 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:49:20.024822 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:49:20.024836 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:49:20.024850 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:49:20.024867 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:49:20.024882 systemd[1]: Reached target sockets.target. Sep 13 00:49:20.024896 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:49:20.024911 systemd[1]: Finished network-cleanup.service. Sep 13 00:49:20.024925 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:49:20.024940 systemd[1]: Starting systemd-journald.service... Sep 13 00:49:20.024957 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:49:20.024971 systemd[1]: Starting systemd-resolved.service... Sep 13 00:49:20.024986 systemd[1]: Starting systemd-vconsole-setup.service... Sep 13 00:49:20.025003 systemd[1]: Finished kmod-static-nodes.service. 
Sep 13 00:49:20.025018 kernel: audit: type=1130 audit(1757724559.998:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.025032 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:49:20.025046 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:49:20.025061 kernel: audit: type=1130 audit(1757724560.018:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.025088 systemd-journald[185]: Journal started Sep 13 00:49:20.025160 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2fba84d3d517eb9063b72273b4874b) is 4.8M, max 38.3M, 33.5M free. Sep 13 00:49:19.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.007350 systemd-modules-load[186]: Inserted module 'overlay' Sep 13 00:49:20.036156 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 00:49:20.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.045446 systemd[1]: Started systemd-journald.service. Sep 13 00:49:20.045520 kernel: audit: type=1130 audit(1757724560.036:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.049204 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 00:49:20.054565 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:49:20.057659 systemd-resolved[187]: Positive Trust Anchors: Sep 13 00:49:20.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.063641 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:49:20.068957 kernel: audit: type=1130 audit(1757724560.046:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.063751 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:49:20.076747 systemd-resolved[187]: Defaulting to hostname 'linux'. 
Sep 13 00:49:20.101994 kernel: audit: type=1130 audit(1757724560.082:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.102046 kernel: audit: type=1130 audit(1757724560.092:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.076901 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:49:20.113353 kernel: audit: type=1130 audit(1757724560.101:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.113391 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:49:20.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.083007 systemd[1]: Started systemd-resolved.service. Sep 13 00:49:20.094017 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:49:20.102534 systemd[1]: Reached target nss-lookup.target. Sep 13 00:49:20.118260 systemd[1]: Starting dracut-cmdline.service... Sep 13 00:49:20.130807 kernel: Bridge firewalling registered Sep 13 00:49:20.125438 systemd-modules-load[186]: Inserted module 'br_netfilter' Sep 13 00:49:20.132894 dracut-cmdline[202]: dracut-dracut-053 Sep 13 00:49:20.137660 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:49:20.157790 kernel: SCSI subsystem initialized Sep 13 00:49:20.180167 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:49:20.180246 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:49:20.183142 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:49:20.187642 systemd-modules-load[186]: Inserted module 'dm_multipath' Sep 13 00:49:20.189968 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:49:20.199460 kernel: audit: type=1130 audit(1757724560.190:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:20.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.199848 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:49:20.211848 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:49:20.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.220798 kernel: audit: type=1130 audit(1757724560.213:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.235790 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:49:20.254789 kernel: iscsi: registered transport (tcp) Sep 13 00:49:20.279569 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:49:20.279649 kernel: QLogic iSCSI HBA Driver Sep 13 00:49:20.312210 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:49:20.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.314197 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:49:20.365790 kernel: raid6: avx512x4 gen() 17775 MB/s Sep 13 00:49:20.383792 kernel: raid6: avx512x4 xor() 7995 MB/s Sep 13 00:49:20.401782 kernel: raid6: avx512x2 gen() 17811 MB/s Sep 13 00:49:20.419788 kernel: raid6: avx512x2 xor() 24101 MB/s Sep 13 00:49:20.437787 kernel: raid6: avx512x1 gen() 17216 MB/s Sep 13 00:49:20.455804 kernel: raid6: avx512x1 xor() 21654 MB/s Sep 13 00:49:20.473782 kernel: raid6: avx2x4 gen() 17629 MB/s Sep 13 00:49:20.491791 kernel: raid6: avx2x4 xor() 7429 MB/s Sep 13 00:49:20.509783 kernel: raid6: avx2x2 gen() 17783 MB/s Sep 13 00:49:20.527788 kernel: raid6: avx2x2 xor() 17989 MB/s Sep 13 00:49:20.545787 kernel: raid6: avx2x1 gen() 13653 MB/s Sep 13 00:49:20.563788 kernel: raid6: avx2x1 xor() 15718 MB/s Sep 13 00:49:20.581781 kernel: raid6: sse2x4 gen() 9530 MB/s Sep 13 00:49:20.599789 kernel: raid6: sse2x4 xor() 5978 MB/s Sep 13 00:49:20.617780 kernel: raid6: sse2x2 gen() 10541 MB/s Sep 13 00:49:20.635789 kernel: raid6: sse2x2 xor() 6106 MB/s Sep 13 00:49:20.653782 kernel: raid6: sse2x1 gen() 9418 MB/s Sep 13 00:49:20.672113 kernel: raid6: sse2x1 xor() 4801 MB/s Sep 13 00:49:20.672164 kernel: raid6: using algorithm avx512x2 gen() 17811 MB/s Sep 13 00:49:20.672196 kernel: raid6: .... xor() 24101 MB/s, rmw enabled Sep 13 00:49:20.673234 kernel: raid6: using avx512x2 recovery algorithm Sep 13 00:49:20.687792 kernel: xor: automatically using best checksumming function avx Sep 13 00:49:20.790788 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:49:20.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.799000 audit: BPF prog-id=7 op=LOAD Sep 13 00:49:20.799000 audit: BPF prog-id=8 op=LOAD Sep 13 00:49:20.799128 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:49:20.800506 systemd[1]: Starting systemd-udevd.service... Sep 13 00:49:20.814653 systemd-udevd[385]: Using default interface naming scheme 'v252'. 
Sep 13 00:49:20.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.820106 systemd[1]: Started systemd-udevd.service. Sep 13 00:49:20.822726 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:49:20.842451 dracut-pre-trigger[390]: rd.md=0: removing MD RAID activation Sep 13 00:49:20.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.874043 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:49:20.875302 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:49:20.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:20.919495 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:49:20.976787 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:49:21.009145 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:49:21.009209 kernel: AES CTR mode by8 optimization enabled Sep 13 00:49:21.009228 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 13 00:49:21.013464 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 13 00:49:21.023778 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 13 00:49:21.032953 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:49:21.033023 kernel: GPT:9289727 != 16777215 Sep 13 00:49:21.033044 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:49:21.036068 kernel: GPT:9289727 != 16777215 Sep 13 00:49:21.036127 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:49:21.038956 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:49:21.045909 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 13 00:49:21.055537 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 13 00:49:21.055711 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Sep 13 00:49:21.055864 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:fb:c4:34:da:bb Sep 13 00:49:21.064281 (udev-worker)[438]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:49:21.104801 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (431) Sep 13 00:49:21.147691 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:49:21.156394 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:49:21.171382 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:49:21.172623 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:49:21.178832 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:49:21.184163 systemd[1]: Starting disk-uuid.service... Sep 13 00:49:21.190863 disk-uuid[594]: Primary Header is updated. Sep 13 00:49:21.190863 disk-uuid[594]: Secondary Entries is updated. Sep 13 00:49:21.190863 disk-uuid[594]: Secondary Header is updated. 
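
The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", 9289727 != 16777215) appear because the volume is larger than the image written to it, so the backup GPT header is no longer in the last sector; disk-uuid.service then rewrites the primary and secondary headers. A hypothetical Python check of the same condition, using the standard GPT header layout (signature "EFI PART" in LBA 1, alternate-LBA field at byte offset 32) and the device path from the log; it needs read access to the raw device:

import os
import struct

def gpt_backup_at_end(dev="/dev/nvme0n1", sector=512):
    """Does the primary GPT header's alternate-LBA field point at the last sector?"""
    with open(dev, "rb") as f:
        size = f.seek(0, os.SEEK_END)   # device size in bytes
        f.seek(sector)                  # primary GPT header lives in LBA 1
        hdr = f.read(92)
    if hdr[:8] != b"EFI PART":
        raise ValueError("no GPT signature on %s" % dev)
    alternate_lba = struct.unpack_from("<Q", hdr, 32)[0]
    last_lba = size // sector - 1
    return alternate_lba == last_lba, alternate_lba, last_lba
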
Sep 13 00:49:21.199796 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:49:21.205786 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:49:22.209797 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:49:22.210139 disk-uuid[595]: The operation has completed successfully. Sep 13 00:49:22.349377 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:49:22.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:22.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:22.349495 systemd[1]: Finished disk-uuid.service. Sep 13 00:49:22.351522 systemd[1]: Starting verity-setup.service... Sep 13 00:49:22.377782 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:49:22.461277 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:49:22.464201 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:49:22.469520 systemd[1]: Finished verity-setup.service. Sep 13 00:49:22.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:22.560784 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:49:22.561560 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:49:22.562409 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:49:22.563563 systemd[1]: Starting ignition-setup.service... Sep 13 00:49:22.568452 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:49:22.592429 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:49:22.592509 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 13 00:49:22.592532 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 13 00:49:22.626791 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 13 00:49:22.641424 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:49:22.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:22.652593 systemd[1]: Finished ignition-setup.service. Sep 13 00:49:22.655919 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:49:22.661177 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:49:22.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:22.662000 audit: BPF prog-id=9 op=LOAD Sep 13 00:49:22.663971 systemd[1]: Starting systemd-networkd.service... Sep 13 00:49:22.687748 systemd-networkd[1024]: lo: Link UP Sep 13 00:49:22.687786 systemd-networkd[1024]: lo: Gained carrier Sep 13 00:49:22.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:22.688822 systemd-networkd[1024]: Enumeration completed Sep 13 00:49:22.688950 systemd[1]: Started systemd-networkd.service. Sep 13 00:49:22.689623 systemd-networkd[1024]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:49:22.690671 systemd[1]: Reached target network.target. Sep 13 00:49:22.692697 systemd[1]: Starting iscsiuio.service... Sep 13 00:49:22.700363 systemd[1]: Started iscsiuio.service. Sep 13 00:49:22.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:22.701709 systemd-networkd[1024]: eth0: Link UP Sep 13 00:49:22.701715 systemd-networkd[1024]: eth0: Gained carrier Sep 13 00:49:22.703150 systemd[1]: Starting iscsid.service... Sep 13 00:49:22.708896 iscsid[1029]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:49:22.708896 iscsid[1029]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 13 00:49:22.708896 iscsid[1029]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:49:22.708896 iscsid[1029]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:49:22.708896 iscsid[1029]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:49:22.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:22.725440 iscsid[1029]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:49:22.711356 systemd[1]: Started iscsid.service. Sep 13 00:49:22.714586 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:49:22.719871 systemd-networkd[1024]: eth0: DHCPv4 address 172.31.27.100/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:49:22.733543 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:49:22.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:22.734462 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:49:22.735145 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:49:22.735686 systemd[1]: Reached target remote-fs.target. Sep 13 00:49:22.739100 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:49:22.749431 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:49:22.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:23.069700 ignition[1020]: Ignition 2.14.0 Sep 13 00:49:23.069712 ignition[1020]: Stage: fetch-offline Sep 13 00:49:23.069853 ignition[1020]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:49:23.069886 ignition[1020]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:49:23.083186 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:49:23.083643 ignition[1020]: Ignition finished successfully Sep 13 00:49:23.085334 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:49:23.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:23.086984 systemd[1]: Starting ignition-fetch.service... Sep 13 00:49:23.095194 ignition[1048]: Ignition 2.14.0 Sep 13 00:49:23.095205 ignition[1048]: Stage: fetch Sep 13 00:49:23.095353 ignition[1048]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:49:23.095375 ignition[1048]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:49:23.101216 ignition[1048]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:49:23.101821 ignition[1048]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:49:23.112202 ignition[1048]: INFO : PUT result: OK Sep 13 00:49:23.113798 ignition[1048]: DEBUG : parsed url from cmdline: "" Sep 13 00:49:23.113798 ignition[1048]: INFO : no config URL provided Sep 13 00:49:23.113798 ignition[1048]: INFO : reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:49:23.113798 ignition[1048]: INFO : no config at "/usr/lib/ignition/user.ign" Sep 13 00:49:23.116020 ignition[1048]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:49:23.116020 ignition[1048]: INFO : PUT result: OK Sep 13 00:49:23.116020 ignition[1048]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 13 00:49:23.116020 ignition[1048]: INFO : GET result: OK Sep 13 00:49:23.116020 ignition[1048]: DEBUG : parsing config with SHA512: 6540d662a7bd8da364337b978396c548bd8b42d280e6e5336333f68ef2e69ba06c41b144e6ace55cd196f1a86686e3a4190bc745b0524c6d04bcef425c145054 Sep 13 00:49:23.122201 unknown[1048]: fetched base config from "system" Sep 13 00:49:23.122216 unknown[1048]: fetched base config from "system" Sep 13 00:49:23.122683 ignition[1048]: fetch: fetch complete Sep 13 00:49:23.122222 unknown[1048]: fetched user config from "aws" Sep 13 00:49:23.122688 ignition[1048]: fetch: fetch passed Sep 13 00:49:23.125018 systemd[1]: Finished ignition-fetch.service. Sep 13 00:49:23.122728 ignition[1048]: Ignition finished successfully Sep 13 00:49:23.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:23.127393 systemd[1]: Starting ignition-kargs.service... 
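
The Ignition "fetch" stage above performs the IMDSv2 two-step against the EC2 metadata service: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then a GET of http://169.254.169.254/2019-10-01/user-data with that token. A minimal sketch of the same sequence using the standard IMDSv2 request headers; the function name and timeout are ours:

import urllib.request

IMDS = "http://169.254.169.254"

def fetch_user_data(ttl_seconds=300):
    # PUT for a session token (the "PUT result: OK" lines above).
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # GET the user data with that token (the "GET result: OK" line above).
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req, timeout=2).read()
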
Sep 13 00:49:23.139964 ignition[1054]: Ignition 2.14.0 Sep 13 00:49:23.139977 ignition[1054]: Stage: kargs Sep 13 00:49:23.140181 ignition[1054]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:49:23.140219 ignition[1054]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:49:23.149069 ignition[1054]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:49:23.149904 ignition[1054]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:49:23.150575 ignition[1054]: INFO : PUT result: OK Sep 13 00:49:23.152782 ignition[1054]: kargs: kargs passed Sep 13 00:49:23.152854 ignition[1054]: Ignition finished successfully Sep 13 00:49:23.155313 systemd[1]: Finished ignition-kargs.service. Sep 13 00:49:23.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:23.157132 systemd[1]: Starting ignition-disks.service... Sep 13 00:49:23.166749 ignition[1060]: Ignition 2.14.0 Sep 13 00:49:23.166777 ignition[1060]: Stage: disks Sep 13 00:49:23.166994 ignition[1060]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:49:23.167026 ignition[1060]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:49:23.174326 ignition[1060]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:49:23.175362 ignition[1060]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:49:23.175362 ignition[1060]: INFO : PUT result: OK Sep 13 00:49:23.178260 ignition[1060]: disks: disks passed Sep 13 00:49:23.178323 ignition[1060]: Ignition finished successfully Sep 13 00:49:23.179804 systemd[1]: Finished ignition-disks.service. Sep 13 00:49:23.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:23.180872 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:49:23.181747 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:49:23.182675 systemd[1]: Reached target local-fs.target. Sep 13 00:49:23.183702 systemd[1]: Reached target sysinit.target. Sep 13 00:49:23.184636 systemd[1]: Reached target basic.target. Sep 13 00:49:23.186783 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:49:23.226591 systemd-fsck[1068]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:49:23.229550 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:49:23.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:23.231196 systemd[1]: Mounting sysroot.mount... Sep 13 00:49:23.262086 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:49:23.263047 systemd[1]: Mounted sysroot.mount. Sep 13 00:49:23.264096 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:49:23.268235 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:49:23.270640 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. 
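Each Ignition stage above logs the SHA512 of the base config it parsed from /usr/lib/ignition/base.d/base.ign. A small sketch of how that digest could be reproduced for comparison, assuming the digest is computed over the raw bytes of the file and that the file is still readable on the booted system; the expected value is the one printed in the log:

    import hashlib

    CONFIG = "/usr/lib/ignition/base.d/base.ign"
    EXPECTED = ("6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8"
                "b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b")

    with open(CONFIG, "rb") as f:
        digest = hashlib.sha512(f.read()).hexdigest()

    # Matches only if the file content is identical to what Ignition parsed at boot.
    print("match" if digest == EXPECTED else "mismatch", digest)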
Sep 13 00:49:23.270729 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:49:23.271472 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:49:23.277119 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:49:23.281549 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:49:23.294788 initrd-setup-root[1089]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:49:23.322545 initrd-setup-root[1097]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:49:23.327841 initrd-setup-root[1105]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:49:23.333770 initrd-setup-root[1113]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:49:23.489681 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:49:23.495173 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:49:23.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:23.497019 systemd[1]: Starting ignition-mount.service... Sep 13 00:49:23.501815 systemd[1]: Starting sysroot-boot.service... Sep 13 00:49:23.507801 bash[1131]: umount: /sysroot/usr/share/oem: not mounted. Sep 13 00:49:23.519797 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1124) Sep 13 00:49:23.519972 ignition[1132]: INFO : Ignition 2.14.0 Sep 13 00:49:23.519972 ignition[1132]: INFO : Stage: mount Sep 13 00:49:23.522259 ignition[1132]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:49:23.522259 ignition[1132]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:49:23.535132 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:49:23.535235 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 13 00:49:23.535257 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 13 00:49:23.535276 ignition[1132]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:49:23.535276 ignition[1132]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:49:23.540274 ignition[1132]: INFO : PUT result: OK Sep 13 00:49:23.540274 ignition[1132]: INFO : mount: mount passed Sep 13 00:49:23.540274 ignition[1132]: INFO : Ignition finished successfully Sep 13 00:49:23.543901 systemd[1]: Finished ignition-mount.service. Sep 13 00:49:23.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:23.553818 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 13 00:49:23.556082 systemd[1]: Finished sysroot-boot.service. Sep 13 00:49:23.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:23.576772 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:49:23.578191 systemd[1]: Starting ignition-files.service... 
Sep 13 00:49:23.596058 ignition[1160]: INFO : Ignition 2.14.0 Sep 13 00:49:23.596058 ignition[1160]: INFO : Stage: files Sep 13 00:49:23.598355 ignition[1160]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:49:23.598355 ignition[1160]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:49:23.606660 ignition[1160]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:49:23.607753 ignition[1160]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:49:23.611041 ignition[1160]: INFO : PUT result: OK Sep 13 00:49:23.614215 ignition[1160]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:49:23.621657 ignition[1160]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:49:23.621657 ignition[1160]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:49:23.636041 ignition[1160]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:49:23.637502 ignition[1160]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:49:23.638778 unknown[1160]: wrote ssh authorized keys file for user: core Sep 13 00:49:23.640308 ignition[1160]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:49:23.640308 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:49:23.640308 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:49:23.640308 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 13 00:49:23.648215 ignition[1160]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:49:23.653209 ignition[1160]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2340064407" Sep 13 00:49:23.653209 ignition[1160]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2340064407": device or resource busy Sep 13 00:49:23.653209 ignition[1160]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2340064407", trying btrfs: device or resource busy Sep 13 00:49:23.653209 ignition[1160]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2340064407" Sep 13 00:49:23.653209 ignition[1160]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2340064407" Sep 13 00:49:23.667123 ignition[1160]: INFO : op(3): [started] unmounting "/mnt/oem2340064407" Sep 13 00:49:23.668247 ignition[1160]: INFO : op(3): [finished] unmounting "/mnt/oem2340064407" Sep 13 00:49:23.668247 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 13 00:49:23.668247 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:49:23.668247 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:49:23.673663 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:49:23.673663 
ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:49:23.673663 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:49:23.673663 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:49:23.673663 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Sep 13 00:49:23.673663 ignition[1160]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:49:23.668364 systemd[1]: mnt-oem2340064407.mount: Deactivated successfully. Sep 13 00:49:23.684148 ignition[1160]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1135701096" Sep 13 00:49:23.684148 ignition[1160]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1135701096": device or resource busy Sep 13 00:49:23.684148 ignition[1160]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1135701096", trying btrfs: device or resource busy Sep 13 00:49:23.684148 ignition[1160]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1135701096" Sep 13 00:49:23.684148 ignition[1160]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1135701096" Sep 13 00:49:23.684148 ignition[1160]: INFO : op(6): [started] unmounting "/mnt/oem1135701096" Sep 13 00:49:23.684148 ignition[1160]: INFO : op(6): [finished] unmounting "/mnt/oem1135701096" Sep 13 00:49:23.684148 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Sep 13 00:49:23.684148 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Sep 13 00:49:23.684148 ignition[1160]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:49:23.696912 ignition[1160]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem259376670" Sep 13 00:49:23.696912 ignition[1160]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem259376670": device or resource busy Sep 13 00:49:23.696912 ignition[1160]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem259376670", trying btrfs: device or resource busy Sep 13 00:49:23.696912 ignition[1160]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem259376670" Sep 13 00:49:23.696912 ignition[1160]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem259376670" Sep 13 00:49:23.696912 ignition[1160]: INFO : op(9): [started] unmounting "/mnt/oem259376670" Sep 13 00:49:23.696912 ignition[1160]: INFO : op(9): [finished] unmounting "/mnt/oem259376670" Sep 13 00:49:23.696912 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Sep 13 00:49:23.696912 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 13 00:49:23.696912 ignition[1160]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 
13 00:49:23.696912 ignition[1160]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3032892811" Sep 13 00:49:23.696912 ignition[1160]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3032892811": device or resource busy Sep 13 00:49:23.696912 ignition[1160]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3032892811", trying btrfs: device or resource busy Sep 13 00:49:23.696912 ignition[1160]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3032892811" Sep 13 00:49:23.696912 ignition[1160]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3032892811" Sep 13 00:49:23.696912 ignition[1160]: INFO : op(c): [started] unmounting "/mnt/oem3032892811" Sep 13 00:49:23.696912 ignition[1160]: INFO : op(c): [finished] unmounting "/mnt/oem3032892811" Sep 13 00:49:23.696912 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 13 00:49:23.696912 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:49:23.696912 ignition[1160]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:49:24.148472 ignition[1160]: INFO : GET result: OK Sep 13 00:49:24.360322 systemd-networkd[1024]: eth0: Gained IPv6LL Sep 13 00:49:24.492157 ignition[1160]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:49:24.492157 ignition[1160]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 13 00:49:24.492157 ignition[1160]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 13 00:49:24.492157 ignition[1160]: INFO : files: op(d): [started] processing unit "amazon-ssm-agent.service" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(d): op(e): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(d): op(e): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(d): [finished] processing unit "amazon-ssm-agent.service" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(f): [started] processing unit "nvidia.service" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(f): [finished] processing unit "nvidia.service" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(10): [started] processing unit "containerd.service" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(10): op(11): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(10): [finished] processing unit "containerd.service" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(12): [started] setting preset to enabled for "amazon-ssm-agent.service" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(12): [finished] setting preset to enabled for 
"amazon-ssm-agent.service" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(13): [started] setting preset to enabled for "nvidia.service" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(13): [finished] setting preset to enabled for "nvidia.service" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(14): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:49:24.502020 ignition[1160]: INFO : files: op(14): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:49:24.502020 ignition[1160]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:49:24.502020 ignition[1160]: INFO : files: files passed Sep 13 00:49:24.502020 ignition[1160]: INFO : Ignition finished successfully Sep 13 00:49:24.586346 kernel: kauditd_printk_skb: 26 callbacks suppressed Sep 13 00:49:24.586383 kernel: audit: type=1130 audit(1757724564.504:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.586406 kernel: audit: type=1130 audit(1757724564.524:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.586426 kernel: audit: type=1131 audit(1757724564.524:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.586446 kernel: audit: type=1130 audit(1757724564.539:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.586467 kernel: audit: type=1130 audit(1757724564.573:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.586486 kernel: audit: type=1131 audit(1757724564.573:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:24.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.503087 systemd[1]: Finished ignition-files.service. Sep 13 00:49:24.513400 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:49:24.592563 initrd-setup-root-after-ignition[1186]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:49:24.518820 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:49:24.520124 systemd[1]: Starting ignition-quench.service... Sep 13 00:49:24.524282 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:49:24.524387 systemd[1]: Finished ignition-quench.service. Sep 13 00:49:24.538464 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:49:24.540364 systemd[1]: Reached target ignition-complete.target. Sep 13 00:49:24.549809 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:49:24.572903 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:49:24.615273 kernel: audit: type=1130 audit(1757724564.606:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.573043 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:49:24.574435 systemd[1]: Reached target initrd-fs.target. Sep 13 00:49:24.585638 systemd[1]: Reached target initrd.target. Sep 13 00:49:24.587611 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:49:24.588918 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:49:24.605671 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:49:24.608255 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:49:24.625598 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:49:24.626472 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:49:24.627864 systemd[1]: Stopped target timers.target. Sep 13 00:49:24.629020 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:49:24.635448 kernel: audit: type=1131 audit(1757724564.629:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.629226 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:49:24.630423 systemd[1]: Stopped target initrd.target. Sep 13 00:49:24.636359 systemd[1]: Stopped target basic.target. Sep 13 00:49:24.637522 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:49:24.638657 systemd[1]: Stopped target ignition-diskful.target. 
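In the files stage above, each OEM write first tried to mount /dev/disk/by-label/OEM as ext4, failed with "device or resource busy", and then succeeded with btrfs. A rough sketch of that try-one-type-then-the-other pattern; only the device path and the two filesystem types come from the log, the mountpoint and helper are illustrative, root privileges are assumed, and Ignition's own implementation (in Go) is not shown here:

    import os
    import subprocess

    DEVICE = "/dev/disk/by-label/OEM"   # label taken from the log
    MOUNTPOINT = "/mnt/oem-example"     # illustrative path, not from the log

    def mount_with_fallback(device, mountpoint, fstypes=("ext4", "btrfs")):
        """Try each filesystem type in turn, as the Ignition ops above appear to do."""
        last_error = None
        for fstype in fstypes:
            try:
                subprocess.run(["mount", "-t", fstype, device, mountpoint], check=True)
                return fstype
            except subprocess.CalledProcessError as err:
                last_error = err
        raise last_error

    if __name__ == "__main__":
        os.makedirs(MOUNTPOINT, exist_ok=True)
        used = mount_with_fallback(DEVICE, MOUNTPOINT)
        print("mounted as", used)
        subprocess.run(["umount", MOUNTPOINT], check=True)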
Sep 13 00:49:24.639917 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:49:24.641076 systemd[1]: Stopped target remote-fs.target. Sep 13 00:49:24.642205 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:49:24.643469 systemd[1]: Stopped target sysinit.target. Sep 13 00:49:24.644597 systemd[1]: Stopped target local-fs.target. Sep 13 00:49:24.645711 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:49:24.646848 systemd[1]: Stopped target swap.target. Sep 13 00:49:24.654315 kernel: audit: type=1131 audit(1757724564.648:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.648006 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:49:24.648213 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:49:24.661750 kernel: audit: type=1131 audit(1757724564.655:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.649385 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:49:24.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.655255 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:49:24.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.655466 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:49:24.656675 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:49:24.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.656899 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:49:24.662644 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:49:24.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.662975 systemd[1]: Stopped ignition-files.service. Sep 13 00:49:24.665264 systemd[1]: Stopping ignition-mount.service... Sep 13 00:49:24.667883 systemd[1]: Stopping sysroot-boot.service... Sep 13 00:49:24.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:49:24.668909 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:49:24.669193 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:49:24.671640 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:49:24.673034 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:49:24.680174 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:49:24.681335 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:49:24.698386 ignition[1199]: INFO : Ignition 2.14.0 Sep 13 00:49:24.698386 ignition[1199]: INFO : Stage: umount Sep 13 00:49:24.698386 ignition[1199]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:49:24.698386 ignition[1199]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:49:24.697938 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:49:24.708181 ignition[1199]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:49:24.709089 ignition[1199]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:49:24.709873 ignition[1199]: INFO : PUT result: OK Sep 13 00:49:24.712161 ignition[1199]: INFO : umount: umount passed Sep 13 00:49:24.713162 ignition[1199]: INFO : Ignition finished successfully Sep 13 00:49:24.713583 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:49:24.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.713711 systemd[1]: Stopped ignition-mount.service. Sep 13 00:49:24.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.714681 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:49:24.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.714743 systemd[1]: Stopped ignition-disks.service. Sep 13 00:49:24.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.715870 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:49:24.715926 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:49:24.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.716981 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:49:24.717036 systemd[1]: Stopped ignition-fetch.service. Sep 13 00:49:24.718130 systemd[1]: Stopped target network.target. Sep 13 00:49:24.719327 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:49:24.719387 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:49:24.720511 systemd[1]: Stopped target paths.target. Sep 13 00:49:24.721578 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 13 00:49:24.722832 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:49:24.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.724444 systemd[1]: Stopped target slices.target. Sep 13 00:49:24.725031 systemd[1]: Stopped target sockets.target. Sep 13 00:49:24.726138 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:49:24.726182 systemd[1]: Closed iscsid.socket. Sep 13 00:49:24.727694 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:49:24.727730 systemd[1]: Closed iscsiuio.socket. Sep 13 00:49:24.728795 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:49:24.728857 systemd[1]: Stopped ignition-setup.service. Sep 13 00:49:24.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.730160 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:49:24.731497 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:49:24.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.734830 systemd-networkd[1024]: eth0: DHCPv6 lease lost Sep 13 00:49:24.742000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:49:24.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.736150 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:49:24.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.736287 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:49:24.737591 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:49:24.737635 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:49:24.739515 systemd[1]: Stopping network-cleanup.service... Sep 13 00:49:24.741635 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:49:24.741687 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:49:24.742397 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:49:24.742460 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:49:24.743701 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:49:24.743755 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:49:24.750559 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:49:24.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.755499 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:49:24.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:24.761000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:49:24.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.756614 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:49:24.756751 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:49:24.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.759734 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:49:24.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.759930 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:49:24.762091 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:49:24.762175 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:49:24.763858 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:49:24.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.763916 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:49:24.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.764813 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:49:24.764871 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:49:24.765583 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:49:24.765623 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:49:24.766156 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:49:24.766191 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:49:24.767552 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:49:24.768371 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:49:24.768437 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 00:49:24.770805 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Sep 13 00:49:24.770860 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:49:24.772895 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:49:24.772950 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:49:24.776281 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 13 00:49:24.777022 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:49:24.779025 systemd[1]: Stopped network-cleanup.service. Sep 13 00:49:24.781174 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:49:24.781293 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:49:24.830510 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:49:24.830608 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:49:24.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.832080 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:49:24.833000 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:49:24.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:24.833062 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:49:24.834826 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:49:24.860782 systemd[1]: Switching root. Sep 13 00:49:24.860000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:49:24.860000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:49:24.860000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:49:24.863000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:49:24.863000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:49:24.881560 iscsid[1029]: iscsid shutting down. Sep 13 00:49:24.882325 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Sep 13 00:49:24.882378 systemd-journald[185]: Journal stopped Sep 13 00:49:30.366655 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:49:30.366736 kernel: SELinux: Class anon_inode not defined in policy. Sep 13 00:49:30.366756 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:49:30.366787 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:49:30.366810 kernel: SELinux: policy capability open_perms=1 Sep 13 00:49:30.366827 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:49:30.366854 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:49:30.366877 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:49:30.366898 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:49:30.366915 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:49:30.366932 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:49:30.366951 systemd[1]: Successfully loaded SELinux policy in 95.690ms. Sep 13 00:49:30.366985 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.413ms. 
Sep 13 00:49:30.367004 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:49:30.367026 systemd[1]: Detected virtualization amazon. Sep 13 00:49:30.367048 systemd[1]: Detected architecture x86-64. Sep 13 00:49:30.367066 systemd[1]: Detected first boot. Sep 13 00:49:30.367087 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:49:30.367106 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:49:30.367123 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:49:30.367143 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:49:30.367166 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:49:30.367186 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:49:30.367208 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:49:30.367226 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Sep 13 00:49:30.367244 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:49:30.367264 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:49:30.367283 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 13 00:49:30.367301 systemd[1]: Created slice system-getty.slice. Sep 13 00:49:30.367319 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:49:30.367337 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:49:30.367356 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:49:30.367377 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:49:30.367395 systemd[1]: Created slice user.slice. Sep 13 00:49:30.367413 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:49:30.367432 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:49:30.367451 systemd[1]: Set up automount boot.automount. Sep 13 00:49:30.367471 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:49:30.367489 systemd[1]: Reached target integritysetup.target. Sep 13 00:49:30.367507 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:49:30.367526 systemd[1]: Reached target remote-fs.target. Sep 13 00:49:30.367547 systemd[1]: Reached target slices.target. Sep 13 00:49:30.367565 systemd[1]: Reached target swap.target. Sep 13 00:49:30.367582 systemd[1]: Reached target torcx.target. Sep 13 00:49:30.367598 systemd[1]: Reached target veritysetup.target. Sep 13 00:49:30.367615 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:49:30.367634 systemd[1]: Listening on systemd-initctl.socket. 
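The systemd banner above encodes compile-time features as +FLAG/-FLAG tokens. A small helper to split such a banner into enabled and disabled sets; the flag string is copied from the log (the trailing default-hierarchy=unified field is left out), and the parsing itself is only an illustration:

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
                "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
                "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
                "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT")

    enabled = {tok[1:] for tok in features.split() if tok.startswith("+")}
    disabled = {tok[1:] for tok in features.split() if tok.startswith("-")}
    print(len(enabled), "enabled;", "disabled:", sorted(disabled))

The -BPF_FRAMEWORK entry is consistent with the later warning that systemd-journald's configured IP firewall cannot be enforced on this system.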
Sep 13 00:49:30.367656 kernel: kauditd_printk_skb: 47 callbacks suppressed Sep 13 00:49:30.367675 kernel: audit: type=1400 audit(1757724570.117:87): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:49:30.367700 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:49:30.367723 kernel: audit: type=1335 audit(1757724570.117:88): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 00:49:30.367744 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:49:30.367818 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:49:30.367843 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:49:30.367864 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:49:30.367887 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:49:30.367911 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:49:30.367938 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:49:30.367960 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:49:30.367981 systemd[1]: Mounting media.mount... Sep 13 00:49:30.368005 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:49:30.368028 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:49:30.368050 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:49:30.368072 systemd[1]: Mounting tmp.mount... Sep 13 00:49:30.368094 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:49:30.368118 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:49:30.368144 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:49:30.368167 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:49:30.368190 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:49:30.368214 systemd[1]: Starting modprobe@drm.service... Sep 13 00:49:30.368237 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:49:30.368272 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:49:30.368300 systemd[1]: Starting modprobe@loop.service... Sep 13 00:49:30.368325 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:49:30.368350 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 13 00:49:30.368373 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 13 00:49:30.368394 systemd[1]: Starting systemd-journald.service... Sep 13 00:49:30.368419 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:49:30.368442 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:49:30.368469 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:49:30.368493 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:49:30.368515 kernel: loop: module loaded Sep 13 00:49:30.368538 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:49:30.368560 kernel: fuse: init (API version 7.34) Sep 13 00:49:30.368582 systemd[1]: Mounted dev-hugepages.mount. Sep 13 00:49:30.368605 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:49:30.368628 systemd[1]: Mounted media.mount. 
Sep 13 00:49:30.368650 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 00:49:30.368677 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:49:30.368701 systemd[1]: Mounted tmp.mount. Sep 13 00:49:30.368724 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:49:30.368749 kernel: audit: type=1130 audit(1757724570.356:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.368787 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:49:30.368817 systemd-journald[1356]: Journal started Sep 13 00:49:30.368897 systemd-journald[1356]: Runtime Journal (/run/log/journal/ec2fba84d3d517eb9063b72273b4874b) is 4.8M, max 38.3M, 33.5M free. Sep 13 00:49:30.117000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:49:30.117000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 00:49:30.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.363000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:49:30.363000 audit[1356]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd40101d10 a2=4000 a3=7ffd40101dac items=0 ppid=1 pid=1356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:30.385795 kernel: audit: type=1305 audit(1757724570.363:90): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:49:30.385894 kernel: audit: type=1300 audit(1757724570.363:90): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd40101d10 a2=4000 a3=7ffd40101dac items=0 ppid=1 pid=1356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:30.385936 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:49:30.363000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:49:30.394945 kernel: audit: type=1327 audit(1757724570.363:90): proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:49:30.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.404778 kernel: audit: type=1130 audit(1757724570.397:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.404837 systemd[1]: Started systemd-journald.service. 
Sep 13 00:49:30.413997 kernel: audit: type=1131 audit(1757724570.397:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.417057 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:49:30.423776 kernel: audit: type=1130 audit(1757724570.414:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.436642 kernel: audit: type=1130 audit(1757724570.423:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.424743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:49:30.425025 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:49:30.426579 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:49:30.427069 systemd[1]: Finished modprobe@drm.service. Sep 13 00:49:30.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.438304 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:49:30.438734 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:49:30.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:30.441164 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:49:30.441561 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:49:30.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.443901 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:49:30.444321 systemd[1]: Finished modprobe@loop.service. Sep 13 00:49:30.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.446501 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:49:30.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.448691 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:49:30.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.450918 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:49:30.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.453101 systemd[1]: Reached target network-pre.target. Sep 13 00:49:30.456317 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:49:30.460319 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:49:30.461704 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:49:30.477084 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:49:30.479999 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:49:30.481425 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:49:30.488875 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:49:30.490202 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:49:30.491231 systemd-journald[1356]: Time spent on flushing to /var/log/journal/ec2fba84d3d517eb9063b72273b4874b is 62.187ms for 1133 entries. Sep 13 00:49:30.491231 systemd-journald[1356]: System Journal (/var/log/journal/ec2fba84d3d517eb9063b72273b4874b) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:49:30.579126 systemd-journald[1356]: Received client request to flush runtime journal. 
Sep 13 00:49:30.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.492548 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:49:30.496874 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:49:30.500591 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:49:30.506203 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:49:30.580042 udevadm[1396]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 13 00:49:30.506921 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:49:30.509798 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:49:30.533096 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:49:30.534349 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:49:30.554037 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:49:30.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.580742 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:49:30.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.657401 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:49:30.660081 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:49:30.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:30.767517 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:49:31.156079 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:49:31.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:31.158240 systemd[1]: Starting systemd-udevd.service... Sep 13 00:49:31.180838 systemd-udevd[1407]: Using default interface naming scheme 'v252'. Sep 13 00:49:31.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:31.228809 systemd[1]: Started systemd-udevd.service. Sep 13 00:49:31.231045 systemd[1]: Starting systemd-networkd.service... Sep 13 00:49:31.250328 systemd[1]: Starting systemd-userdbd.service... 
Sep 13 00:49:31.278606 (udev-worker)[1409]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:49:31.284980 systemd[1]: Found device dev-ttyS0.device. Sep 13 00:49:31.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:31.304611 systemd[1]: Started systemd-userdbd.service. Sep 13 00:49:31.339782 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:49:31.349004 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:49:31.351784 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 13 00:49:31.355787 kernel: ACPI: button: Sleep Button [SLPF] Sep 13 00:49:31.389000 audit[1414]: AVC avc: denied { confidentiality } for pid=1414 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:49:31.389000 audit[1414]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561f0b18b9e0 a1=338ec a2=7fbf21d74bc5 a3=5 items=110 ppid=1407 pid=1414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:31.389000 audit: CWD cwd="/" Sep 13 00:49:31.389000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=1 name=(null) inode=14921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=2 name=(null) inode=14921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=3 name=(null) inode=14922 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=4 name=(null) inode=14921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=5 name=(null) inode=14923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=6 name=(null) inode=14921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=7 name=(null) inode=14924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=8 name=(null) inode=14924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=9 name=(null) inode=14925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=10 name=(null) inode=14924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=11 name=(null) inode=14926 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=12 name=(null) inode=14924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=13 name=(null) inode=14927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=14 name=(null) inode=14924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=15 name=(null) inode=14928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=16 name=(null) inode=14924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=17 name=(null) inode=14929 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=18 name=(null) inode=14921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=19 name=(null) inode=14930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=20 name=(null) inode=14930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=21 name=(null) inode=14931 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=22 name=(null) inode=14930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=23 name=(null) inode=14932 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=24 name=(null) inode=14930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=25 name=(null) inode=14933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:49:31.389000 audit: PATH item=26 name=(null) inode=14930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=27 name=(null) inode=14934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=28 name=(null) inode=14930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=29 name=(null) inode=14935 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=30 name=(null) inode=14921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=31 name=(null) inode=14936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=32 name=(null) inode=14936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=33 name=(null) inode=14937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=34 name=(null) inode=14936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=35 name=(null) inode=14938 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=36 name=(null) inode=14936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=37 name=(null) inode=14939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=38 name=(null) inode=14936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=39 name=(null) inode=14940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=40 name=(null) inode=14936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=41 name=(null) inode=14941 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=42 name=(null) inode=14921 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=43 name=(null) inode=14942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=44 name=(null) inode=14942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=45 name=(null) inode=14943 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=46 name=(null) inode=14942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=47 name=(null) inode=14944 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=48 name=(null) inode=14942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=49 name=(null) inode=14945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=50 name=(null) inode=14942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=51 name=(null) inode=14946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=52 name=(null) inode=14942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=53 name=(null) inode=14947 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=55 name=(null) inode=14948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=56 name=(null) inode=14948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=57 name=(null) inode=14949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=58 name=(null) inode=14948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=59 name=(null) inode=14950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=60 name=(null) inode=14948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=61 name=(null) inode=14951 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=62 name=(null) inode=14951 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=63 name=(null) inode=14952 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=64 name=(null) inode=14951 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=65 name=(null) inode=14953 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=66 name=(null) inode=14951 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=67 name=(null) inode=14954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=68 name=(null) inode=14951 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=69 name=(null) inode=14955 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=70 name=(null) inode=14951 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=71 name=(null) inode=14956 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=72 name=(null) inode=14948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=73 name=(null) inode=14957 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=74 name=(null) inode=14957 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:49:31.389000 audit: PATH item=75 name=(null) inode=14958 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=76 name=(null) inode=14957 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=77 name=(null) inode=14959 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=78 name=(null) inode=14957 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=79 name=(null) inode=14960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=80 name=(null) inode=14957 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=81 name=(null) inode=14961 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=82 name=(null) inode=14957 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=83 name=(null) inode=14962 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=84 name=(null) inode=14948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=85 name=(null) inode=14963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=86 name=(null) inode=14963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=87 name=(null) inode=14964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=88 name=(null) inode=14963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=89 name=(null) inode=14965 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=90 name=(null) inode=14963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=91 name=(null) inode=14966 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=92 name=(null) inode=14963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=93 name=(null) inode=14967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=94 name=(null) inode=14963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=95 name=(null) inode=14968 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=96 name=(null) inode=14948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=97 name=(null) inode=14969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=98 name=(null) inode=14969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=99 name=(null) inode=14970 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=100 name=(null) inode=14969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=101 name=(null) inode=14971 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=102 name=(null) inode=14969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=103 name=(null) inode=14972 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=104 name=(null) inode=14969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=105 name=(null) inode=14973 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.409841 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 13 00:49:31.438746 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:49:31.389000 audit: PATH item=106 name=(null) inode=14969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=107 name=(null) inode=14974 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PATH item=109 name=(null) inode=14975 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:49:31.389000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:49:31.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:31.420048 systemd-networkd[1415]: lo: Link UP Sep 13 00:49:31.420054 systemd-networkd[1415]: lo: Gained carrier Sep 13 00:49:31.420643 systemd-networkd[1415]: Enumeration completed Sep 13 00:49:31.420832 systemd[1]: Started systemd-networkd.service. Sep 13 00:49:31.423433 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:49:31.425332 systemd-networkd[1415]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:49:31.430751 systemd-networkd[1415]: eth0: Link UP Sep 13 00:49:31.430991 systemd-networkd[1415]: eth0: Gained carrier Sep 13 00:49:31.440032 systemd-networkd[1415]: eth0: DHCPv4 address 172.31.27.100/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:49:31.453780 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 00:49:31.460812 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:49:31.587088 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:49:31.588649 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:49:31.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:31.591518 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:49:31.635165 lvm[1522]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:49:31.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:31.661040 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:49:31.661690 systemd[1]: Reached target cryptsetup.target. Sep 13 00:49:31.663972 systemd[1]: Starting lvm2-activation.service... Sep 13 00:49:31.669926 lvm[1524]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:49:31.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:31.694122 systemd[1]: Finished lvm2-activation.service. Sep 13 00:49:31.694730 systemd[1]: Reached target local-fs-pre.target. 
Sep 13 00:49:31.695345 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:49:31.695365 systemd[1]: Reached target local-fs.target. Sep 13 00:49:31.695815 systemd[1]: Reached target machines.target. Sep 13 00:49:31.697535 systemd[1]: Starting ldconfig.service... Sep 13 00:49:31.699601 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:49:31.699675 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:49:31.701088 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:49:31.703756 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:49:31.706923 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:49:31.710048 systemd[1]: Starting systemd-sysext.service... Sep 13 00:49:31.717739 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1527 (bootctl) Sep 13 00:49:31.719654 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:49:31.736172 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:49:31.744379 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:49:31.745000 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:49:31.762908 kernel: loop0: detected capacity change from 0 to 221472 Sep 13 00:49:31.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:31.777127 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:49:31.898203 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:49:31.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:31.899872 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:49:31.907916 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:49:31.914433 systemd-fsck[1539]: fsck.fat 4.2 (2021-01-31) Sep 13 00:49:31.914433 systemd-fsck[1539]: /dev/nvme0n1p1: 790 files, 120761/258078 clusters Sep 13 00:49:31.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:31.916270 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:49:31.918298 systemd[1]: Mounting boot.mount... Sep 13 00:49:31.924859 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:49:31.946984 systemd[1]: Mounted boot.mount. Sep 13 00:49:31.948427 (sd-sysext)[1547]: Using extensions 'kubernetes'. Sep 13 00:49:31.949001 (sd-sysext)[1547]: Merged extensions into '/usr'. Sep 13 00:49:31.991603 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:49:31.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:49:31.993283 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:49:31.995676 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:49:31.996990 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:49:31.999105 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:49:32.004087 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:49:32.007165 systemd[1]: Starting modprobe@loop.service... Sep 13 00:49:32.008071 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:49:32.008378 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:49:32.008616 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:49:32.019531 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:49:32.021565 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:49:32.021821 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:49:32.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.023971 systemd[1]: Finished systemd-sysext.service. Sep 13 00:49:32.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.025128 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:49:32.025355 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:49:32.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.028786 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:49:32.029031 systemd[1]: Finished modprobe@loop.service. Sep 13 00:49:32.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.037234 systemd[1]: Starting ensure-sysext.service... Sep 13 00:49:32.038250 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 13 00:49:32.038327 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:49:32.040180 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:49:32.055872 systemd[1]: Reloading. Sep 13 00:49:32.060272 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:49:32.064103 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:49:32.067876 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:49:32.151357 /usr/lib/systemd/system-generators/torcx-generator[1594]: time="2025-09-13T00:49:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:49:32.151395 /usr/lib/systemd/system-generators/torcx-generator[1594]: time="2025-09-13T00:49:32Z" level=info msg="torcx already run" Sep 13 00:49:32.319231 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:49:32.319248 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:49:32.343159 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:49:32.417566 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:49:32.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.424087 systemd[1]: Starting audit-rules.service... Sep 13 00:49:32.426916 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:49:32.430056 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:49:32.439677 systemd[1]: Starting systemd-resolved.service... Sep 13 00:49:32.443113 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:49:32.452811 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:49:32.455797 systemd[1]: Finished clean-ca-certificates.service. Sep 13 00:49:32.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.472532 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:49:32.474649 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:49:32.477474 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:49:32.480354 systemd[1]: Starting modprobe@loop.service... Sep 13 00:49:32.481882 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:49:32.482086 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 13 00:49:32.482266 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:49:32.484137 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:49:32.484390 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:49:32.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.491604 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:49:32.495168 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:49:32.497999 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:49:32.498203 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:49:32.498383 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:49:32.509181 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:49:32.509435 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:49:32.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.513574 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:49:32.513953 systemd[1]: Finished modprobe@loop.service. Sep 13 00:49:32.515292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:49:32.516123 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:49:32.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.522360 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Sep 13 00:49:32.524554 systemd[1]: Starting modprobe@drm.service... Sep 13 00:49:32.525394 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:49:32.525603 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:49:32.529000 audit[1668]: SYSTEM_BOOT pid=1668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.530513 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:49:32.530746 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:49:32.530928 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:49:32.532712 systemd[1]: Finished ensure-sysext.service. Sep 13 00:49:32.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.541297 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:49:32.543282 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:49:32.543507 systemd[1]: Finished modprobe@drm.service. Sep 13 00:49:32.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:49:32.609157 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:49:32.641000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:49:32.641000 audit[1696]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc8be79b40 a2=420 a3=0 items=0 ppid=1658 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:49:32.641000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:49:32.643418 systemd[1]: Finished audit-rules.service. Sep 13 00:49:32.644555 augenrules[1696]: No rules Sep 13 00:49:32.662399 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:49:32.663201 systemd[1]: Reached target time-set.target. 
Sep 13 00:49:32.679978 systemd-resolved[1662]: Positive Trust Anchors: Sep 13 00:49:32.680310 systemd-resolved[1662]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:49:32.680393 systemd-resolved[1662]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:49:32.730358 systemd-resolved[1662]: Defaulting to hostname 'linux'. Sep 13 00:49:32.732374 systemd[1]: Started systemd-resolved.service. Sep 13 00:49:32.732842 systemd[1]: Reached target network.target. Sep 13 00:49:32.733195 systemd[1]: Reached target nss-lookup.target. Sep 13 00:49:32.743427 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:49:32.743452 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:49:32.871873 systemd-networkd[1415]: eth0: Gained IPv6LL Sep 13 00:49:32.874195 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:49:32.875068 systemd[1]: Reached target network-online.target. Sep 13 00:49:34.445563 systemd-timesyncd[1663]: Contacted time server 83.147.242.172:123 (0.flatcar.pool.ntp.org). Sep 13 00:49:34.445631 systemd-timesyncd[1663]: Initial clock synchronization to Sat 2025-09-13 00:49:34.445443 UTC. Sep 13 00:49:34.447024 systemd-resolved[1662]: Clock change detected. Flushing caches. Sep 13 00:49:34.522630 ldconfig[1526]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:49:34.528431 systemd[1]: Finished ldconfig.service. Sep 13 00:49:34.530352 systemd[1]: Starting systemd-update-done.service... Sep 13 00:49:34.538548 systemd[1]: Finished systemd-update-done.service. Sep 13 00:49:34.539047 systemd[1]: Reached target sysinit.target. Sep 13 00:49:34.539473 systemd[1]: Started motdgen.path. Sep 13 00:49:34.539825 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:49:34.540340 systemd[1]: Started logrotate.timer. Sep 13 00:49:34.540743 systemd[1]: Started mdadm.timer. Sep 13 00:49:34.541169 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:49:34.541492 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:49:34.541515 systemd[1]: Reached target paths.target. Sep 13 00:49:34.541824 systemd[1]: Reached target timers.target. Sep 13 00:49:34.542476 systemd[1]: Listening on dbus.socket. Sep 13 00:49:34.544402 systemd[1]: Starting docker.socket... Sep 13 00:49:34.547014 systemd[1]: Listening on sshd.socket. Sep 13 00:49:34.547499 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:49:34.548012 systemd[1]: Listening on docker.socket. Sep 13 00:49:34.548431 systemd[1]: Reached target sockets.target. Sep 13 00:49:34.548747 systemd[1]: Reached target basic.target. 
Sep 13 00:49:34.550041 systemd[1]: System is tainted: cgroupsv1 Sep 13 00:49:34.550091 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:49:34.550115 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:49:34.551389 systemd[1]: Started amazon-ssm-agent.service. Sep 13 00:49:34.553395 systemd[1]: Starting containerd.service... Sep 13 00:49:34.556345 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 13 00:49:34.559117 systemd[1]: Starting dbus.service... Sep 13 00:49:34.561440 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:49:34.563826 systemd[1]: Starting extend-filesystems.service... Sep 13 00:49:34.564509 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:49:34.576069 systemd[1]: Starting kubelet.service... Sep 13 00:49:34.578697 systemd[1]: Starting motdgen.service... Sep 13 00:49:34.591913 systemd[1]: Started nvidia.service. Sep 13 00:49:34.594700 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:49:34.601811 systemd[1]: Starting sshd-keygen.service... Sep 13 00:49:34.608525 systemd[1]: Starting systemd-logind.service... Sep 13 00:49:34.609213 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:49:34.609291 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:49:34.612607 systemd[1]: Starting update-engine.service... Sep 13 00:49:34.616154 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:49:34.649596 jq[1726]: true Sep 13 00:49:34.654279 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:49:34.654624 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:49:34.682283 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:49:34.717574 jq[1714]: false Sep 13 00:49:34.682609 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:49:34.755105 jq[1744]: true Sep 13 00:49:34.783621 extend-filesystems[1715]: Found loop1 Sep 13 00:49:34.783621 extend-filesystems[1715]: Found nvme0n1 Sep 13 00:49:34.783621 extend-filesystems[1715]: Found nvme0n1p7 Sep 13 00:49:34.783621 extend-filesystems[1715]: Found nvme0n1p9 Sep 13 00:49:34.783621 extend-filesystems[1715]: Checking size of /dev/nvme0n1p9 Sep 13 00:49:34.800622 dbus-daemon[1713]: [system] SELinux support is enabled Sep 13 00:49:34.800880 systemd[1]: Started dbus.service. Sep 13 00:49:34.804671 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:49:34.804710 systemd[1]: Reached target system-config.target. Sep 13 00:49:34.805425 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:49:34.805446 systemd[1]: Reached target user-config.target. 
Sep 13 00:49:34.807990 dbus-daemon[1713]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1415 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 13 00:49:34.809227 dbus-daemon[1713]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:49:34.814524 systemd[1]: Starting systemd-hostnamed.service... Sep 13 00:49:34.832077 extend-filesystems[1715]: Resized partition /dev/nvme0n1p9 Sep 13 00:49:34.837345 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:49:34.837748 systemd[1]: Finished motdgen.service. Sep 13 00:49:34.848157 extend-filesystems[1768]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 00:49:34.865389 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 13 00:49:34.942834 update_engine[1725]: I0913 00:49:34.941342 1725 main.cc:92] Flatcar Update Engine starting Sep 13 00:49:34.958764 systemd[1]: Started update-engine.service. Sep 13 00:49:34.987454 update_engine[1725]: I0913 00:49:34.973500 1725 update_check_scheduler.cc:74] Next update check in 6m28s Sep 13 00:49:34.962000 systemd[1]: Started locksmithd.service. Sep 13 00:49:34.998891 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 13 00:49:35.022619 env[1732]: time="2025-09-13T00:49:35.015322112Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:49:35.023004 amazon-ssm-agent[1709]: 2025/09/13 00:49:35 Failed to load instance info from vault. RegistrationKey does not exist. Sep 13 00:49:35.025762 extend-filesystems[1768]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 13 00:49:35.025762 extend-filesystems[1768]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:49:35.025762 extend-filesystems[1768]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 13 00:49:35.033415 extend-filesystems[1715]: Resized filesystem in /dev/nvme0n1p9 Sep 13 00:49:35.033415 extend-filesystems[1715]: Found nvme0n1p1 Sep 13 00:49:35.033415 extend-filesystems[1715]: Found nvme0n1p2 Sep 13 00:49:35.033415 extend-filesystems[1715]: Found nvme0n1p3 Sep 13 00:49:35.033415 extend-filesystems[1715]: Found usr Sep 13 00:49:35.033415 extend-filesystems[1715]: Found nvme0n1p4 Sep 13 00:49:35.033415 extend-filesystems[1715]: Found nvme0n1p6 Sep 13 00:49:35.061173 amazon-ssm-agent[1709]: Initializing new seelog logger Sep 13 00:49:35.061173 amazon-ssm-agent[1709]: New Seelog Logger Creation Complete Sep 13 00:49:35.061173 amazon-ssm-agent[1709]: 2025/09/13 00:49:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:49:35.061173 amazon-ssm-agent[1709]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:49:35.061173 amazon-ssm-agent[1709]: 2025/09/13 00:49:35 processing appconfig overrides Sep 13 00:49:35.027309 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:49:35.062287 bash[1780]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:49:35.027660 systemd[1]: Finished extend-filesystems.service. Sep 13 00:49:35.031925 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:49:35.040989 systemd[1]: Created slice system-sshd.slice. Sep 13 00:49:35.137364 systemd[1]: nvidia.service: Deactivated successfully. 
Sep 13 00:49:35.199579 systemd-logind[1723]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:49:35.202108 systemd-logind[1723]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 13 00:49:35.202290 systemd-logind[1723]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:49:35.204725 systemd-logind[1723]: New seat seat0. Sep 13 00:49:35.210895 systemd[1]: Started systemd-logind.service. Sep 13 00:49:35.229302 env[1732]: time="2025-09-13T00:49:35.229248529Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:49:35.232891 env[1732]: time="2025-09-13T00:49:35.232836688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:49:35.237498 env[1732]: time="2025-09-13T00:49:35.237356808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:49:35.237902 env[1732]: time="2025-09-13T00:49:35.237875832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:49:35.239090 env[1732]: time="2025-09-13T00:49:35.239057590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:49:35.239197 env[1732]: time="2025-09-13T00:49:35.239179866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:49:35.239545 env[1732]: time="2025-09-13T00:49:35.239520020Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:49:35.239644 env[1732]: time="2025-09-13T00:49:35.239628247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:49:35.240081 env[1732]: time="2025-09-13T00:49:35.240057110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:49:35.240493 env[1732]: time="2025-09-13T00:49:35.240470862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:49:35.241948 env[1732]: time="2025-09-13T00:49:35.241917301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:49:35.244073 env[1732]: time="2025-09-13T00:49:35.244047656Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:49:35.245045 env[1732]: time="2025-09-13T00:49:35.245017971Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:49:35.245146 env[1732]: time="2025-09-13T00:49:35.245128518Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:49:35.260451 env[1732]: time="2025-09-13T00:49:35.260364415Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Sep 13 00:49:35.260640 env[1732]: time="2025-09-13T00:49:35.260617370Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:49:35.260975 env[1732]: time="2025-09-13T00:49:35.260948283Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:49:35.261143 env[1732]: time="2025-09-13T00:49:35.261114738Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:49:35.262091 env[1732]: time="2025-09-13T00:49:35.262055726Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:49:35.262213 env[1732]: time="2025-09-13T00:49:35.262196178Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:49:35.262327 env[1732]: time="2025-09-13T00:49:35.262309762Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:49:35.262699 env[1732]: time="2025-09-13T00:49:35.262650191Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:49:35.262822 env[1732]: time="2025-09-13T00:49:35.262804425Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:49:35.262938 env[1732]: time="2025-09-13T00:49:35.262921063Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:49:35.263043 env[1732]: time="2025-09-13T00:49:35.263025999Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:49:35.263139 env[1732]: time="2025-09-13T00:49:35.263124217Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:49:35.263667 env[1732]: time="2025-09-13T00:49:35.263640361Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:49:35.264042 env[1732]: time="2025-09-13T00:49:35.264002946Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265127856Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265177216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265198337Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265270224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265290542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265309396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265328478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265347030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265366897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265383776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265401111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265420829Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265593414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265614062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:49:35.266692 env[1732]: time="2025-09-13T00:49:35.265631819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:49:35.267321 env[1732]: time="2025-09-13T00:49:35.265648035Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:49:35.267321 env[1732]: time="2025-09-13T00:49:35.265669231Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:49:35.267321 env[1732]: time="2025-09-13T00:49:35.265686166Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:49:35.267321 env[1732]: time="2025-09-13T00:49:35.265711037Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:49:35.267321 env[1732]: time="2025-09-13T00:49:35.265756198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:49:35.267519 env[1732]: time="2025-09-13T00:49:35.266053879Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:49:35.267519 env[1732]: time="2025-09-13T00:49:35.266136895Z" level=info msg="Connect containerd service" Sep 13 00:49:35.267519 env[1732]: time="2025-09-13T00:49:35.266185524Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:49:35.271227 env[1732]: time="2025-09-13T00:49:35.268068705Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:49:35.271227 env[1732]: time="2025-09-13T00:49:35.269371140Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 13 00:49:35.271484 env[1732]: time="2025-09-13T00:49:35.271426261Z" level=info msg="Start subscribing containerd event" Sep 13 00:49:35.271619 env[1732]: time="2025-09-13T00:49:35.271602753Z" level=info msg="Start recovering state" Sep 13 00:49:35.271801 env[1732]: time="2025-09-13T00:49:35.271787250Z" level=info msg="Start event monitor" Sep 13 00:49:35.271932 env[1732]: time="2025-09-13T00:49:35.271915786Z" level=info msg="Start snapshots syncer" Sep 13 00:49:35.272032 env[1732]: time="2025-09-13T00:49:35.272017047Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:49:35.272126 env[1732]: time="2025-09-13T00:49:35.272113372Z" level=info msg="Start streaming server" Sep 13 00:49:35.272958 env[1732]: time="2025-09-13T00:49:35.272934670Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:49:35.327295 systemd[1]: Started containerd.service. Sep 13 00:49:35.332532 env[1732]: time="2025-09-13T00:49:35.332489631Z" level=info msg="containerd successfully booted in 0.346543s" Sep 13 00:49:35.334202 dbus-daemon[1713]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 13 00:49:35.334388 systemd[1]: Started systemd-hostnamed.service. Sep 13 00:49:35.338152 dbus-daemon[1713]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1758 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 13 00:49:35.342068 systemd[1]: Starting polkit.service... Sep 13 00:49:35.366328 polkitd[1846]: Started polkitd version 121 Sep 13 00:49:35.392366 polkitd[1846]: Loading rules from directory /etc/polkit-1/rules.d Sep 13 00:49:35.392601 polkitd[1846]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 13 00:49:35.401440 polkitd[1846]: Finished loading, compiling and executing 2 rules Sep 13 00:49:35.402638 dbus-daemon[1713]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 00:49:35.402849 systemd[1]: Started polkit.service. Sep 13 00:49:35.404449 polkitd[1846]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 13 00:49:35.435096 systemd-hostnamed[1758]: Hostname set to (transient) Sep 13 00:49:35.435210 systemd-resolved[1662]: System hostname changed to 'ip-172-31-27-100'. Sep 13 00:49:35.609072 coreos-metadata[1711]: Sep 13 00:49:35.606 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 13 00:49:35.614232 coreos-metadata[1711]: Sep 13 00:49:35.614 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Sep 13 00:49:35.615421 coreos-metadata[1711]: Sep 13 00:49:35.615 INFO Fetch successful Sep 13 00:49:35.615421 coreos-metadata[1711]: Sep 13 00:49:35.615 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 13 00:49:35.617195 coreos-metadata[1711]: Sep 13 00:49:35.616 INFO Fetch successful Sep 13 00:49:35.620207 unknown[1711]: wrote ssh authorized keys file for user: core Sep 13 00:49:35.648243 update-ssh-keys[1901]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:49:35.648803 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Sep 13 00:49:35.657929 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO Create new startup processor Sep 13 00:49:35.672210 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [LongRunningPluginsManager] registered plugins: {} Sep 13 00:49:35.672210 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO Initializing bookkeeping folders Sep 13 00:49:35.672210 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO removing the completed state files Sep 13 00:49:35.672210 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO Initializing bookkeeping folders for long running plugins Sep 13 00:49:35.672210 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Sep 13 00:49:35.672210 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO Initializing healthcheck folders for long running plugins Sep 13 00:49:35.672210 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO Initializing locations for inventory plugin Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO Initializing default location for custom inventory Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO Initializing default location for file inventory Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO Initializing default location for role inventory Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO Init the cloudwatchlogs publisher Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [instanceID=i-0160ea5f15992b500] Successfully loaded platform independent plugin aws:runPowerShellScript Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [instanceID=i-0160ea5f15992b500] Successfully loaded platform independent plugin aws:configureDocker Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [instanceID=i-0160ea5f15992b500] Successfully loaded platform independent plugin aws:runDocument Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [instanceID=i-0160ea5f15992b500] Successfully loaded platform independent plugin aws:softwareInventory Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [instanceID=i-0160ea5f15992b500] Successfully loaded platform independent plugin aws:runDockerAction Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [instanceID=i-0160ea5f15992b500] Successfully loaded platform independent plugin aws:refreshAssociation Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [instanceID=i-0160ea5f15992b500] Successfully loaded platform independent plugin aws:configurePackage Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [instanceID=i-0160ea5f15992b500] Successfully loaded platform independent plugin aws:downloadContent Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [instanceID=i-0160ea5f15992b500] Successfully loaded platform independent plugin aws:updateSsmAgent Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [instanceID=i-0160ea5f15992b500] Successfully loaded platform dependent plugin aws:runShellScript Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Sep 13 00:49:35.672558 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO OS: linux, Arch: amd64 Sep 13 00:49:35.677723 amazon-ssm-agent[1709]: datastore file /var/lib/amazon/ssm/i-0160ea5f15992b500/longrunningplugins/datastore/store doesn't exist - no long running 
plugins to execute Sep 13 00:49:35.757511 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessagingDeliveryService] Starting document processing engine... Sep 13 00:49:35.852655 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessagingDeliveryService] [EngineProcessor] Starting Sep 13 00:49:35.932453 locksmithd[1787]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:49:35.947015 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Sep 13 00:49:36.041512 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessagingDeliveryService] Starting message polling Sep 13 00:49:36.136228 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessagingDeliveryService] Starting send replies to MDS Sep 13 00:49:36.231166 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [instanceID=i-0160ea5f15992b500] Starting association polling Sep 13 00:49:36.326256 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Sep 13 00:49:36.422006 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessagingDeliveryService] [Association] Launching response handler Sep 13 00:49:36.489886 sshd_keygen[1748]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:49:36.517902 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Sep 13 00:49:36.518911 systemd[1]: Finished sshd-keygen.service. Sep 13 00:49:36.522280 systemd[1]: Starting issuegen.service... Sep 13 00:49:36.527286 systemd[1]: Started sshd@0-172.31.27.100:22-147.75.109.163:55158.service. Sep 13 00:49:36.535192 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:49:36.535560 systemd[1]: Finished issuegen.service. Sep 13 00:49:36.538893 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:49:36.553525 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:49:36.556193 systemd[1]: Started getty@tty1.service. Sep 13 00:49:36.559530 systemd[1]: Started serial-getty@ttyS0.service. Sep 13 00:49:36.560907 systemd[1]: Reached target getty.target. Sep 13 00:49:36.613289 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Sep 13 00:49:36.709285 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Sep 13 00:49:36.735823 sshd[1925]: Accepted publickey for core from 147.75.109.163 port 55158 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:36.738801 sshd[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:36.754056 systemd[1]: Created slice user-500.slice. Sep 13 00:49:36.755763 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 00:49:36.759670 systemd-logind[1723]: New session 1 of user core. Sep 13 00:49:36.769319 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:49:36.771271 systemd[1]: Starting user@500.service... Sep 13 00:49:36.777266 (systemd)[1937]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:36.805958 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessageGatewayService] Starting session document processing engine... Sep 13 00:49:36.894474 systemd[1937]: Queued start job for default target default.target. Sep 13 00:49:36.895670 systemd[1937]: Reached target paths.target. 
Sep 13 00:49:36.895799 systemd[1937]: Reached target sockets.target. Sep 13 00:49:36.895882 systemd[1937]: Reached target timers.target. Sep 13 00:49:36.895964 systemd[1937]: Reached target basic.target. Sep 13 00:49:36.896144 systemd[1]: Started user@500.service. Sep 13 00:49:36.896821 systemd[1937]: Reached target default.target. Sep 13 00:49:36.896994 systemd[1937]: Startup finished in 110ms. Sep 13 00:49:36.897994 systemd[1]: Started session-1.scope. Sep 13 00:49:36.902252 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessageGatewayService] [EngineProcessor] Starting Sep 13 00:49:36.998926 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Sep 13 00:49:37.043893 systemd[1]: Started sshd@1-172.31.27.100:22-147.75.109.163:55174.service. Sep 13 00:49:37.095668 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0160ea5f15992b500, requestId: 05e2f912-372b-486e-8f9f-ba053490c6ae Sep 13 00:49:37.192514 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [OfflineService] Starting document processing engine... Sep 13 00:49:37.206998 sshd[1946]: Accepted publickey for core from 147.75.109.163 port 55174 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:37.208510 sshd[1946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:37.215311 systemd[1]: Started session-2.scope. Sep 13 00:49:37.216389 systemd-logind[1723]: New session 2 of user core. Sep 13 00:49:37.289661 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [OfflineService] [EngineProcessor] Starting Sep 13 00:49:37.340655 sshd[1946]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:37.344393 systemd[1]: sshd@1-172.31.27.100:22-147.75.109.163:55174.service: Deactivated successfully. Sep 13 00:49:37.345186 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:49:37.346114 systemd-logind[1723]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:49:37.347101 systemd-logind[1723]: Removed session 2. Sep 13 00:49:37.364757 systemd[1]: Started sshd@2-172.31.27.100:22-147.75.109.163:55178.service. Sep 13 00:49:37.388227 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [OfflineService] [EngineProcessor] Initial processing Sep 13 00:49:37.390288 systemd[1]: Started kubelet.service. Sep 13 00:49:37.391561 systemd[1]: Reached target multi-user.target. Sep 13 00:49:37.395675 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 00:49:37.406697 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:49:37.407082 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:49:37.413257 systemd[1]: Startup finished in 6.474s (kernel) + 10.199s (userspace) = 16.673s. Sep 13 00:49:37.486320 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [OfflineService] Starting message polling Sep 13 00:49:37.519944 sshd[1953]: Accepted publickey for core from 147.75.109.163 port 55178 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:37.521564 sshd[1953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:37.525911 systemd-logind[1723]: New session 3 of user core. Sep 13 00:49:37.527108 systemd[1]: Started session-3.scope. 
Sep 13 00:49:37.584012 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [OfflineService] Starting send replies to MDS Sep 13 00:49:37.652332 sshd[1953]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:37.656717 systemd[1]: sshd@2-172.31.27.100:22-147.75.109.163:55178.service: Deactivated successfully. Sep 13 00:49:37.657841 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:49:37.657888 systemd-logind[1723]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:49:37.660339 systemd-logind[1723]: Removed session 3. Sep 13 00:49:37.682039 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [LongRunningPluginsManager] starting long running plugin manager Sep 13 00:49:37.780129 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Sep 13 00:49:37.878327 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [HealthCheck] HealthCheck reporting agent health. Sep 13 00:49:37.977046 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Sep 13 00:49:38.075474 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessageGatewayService] listening reply. Sep 13 00:49:38.174318 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [StartupProcessor] Executing startup processor tasks Sep 13 00:49:38.273389 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Sep 13 00:49:38.372536 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Sep 13 00:49:38.472010 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8 Sep 13 00:49:38.568020 kubelet[1960]: E0913 00:49:38.567899 1960 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:49:38.569714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:49:38.569890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:49:38.571591 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0160ea5f15992b500?role=subscribe&stream=input Sep 13 00:49:38.671448 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0160ea5f15992b500?role=subscribe&stream=input Sep 13 00:49:38.771500 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessageGatewayService] Starting receiving message from control channel Sep 13 00:49:38.871797 amazon-ssm-agent[1709]: 2025-09-13 00:49:35 INFO [MessageGatewayService] [EngineProcessor] Initial processing Sep 13 00:49:42.526025 amazon-ssm-agent[1709]: 2025-09-13 00:49:42 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Sep 13 00:49:47.677006 systemd[1]: Started sshd@3-172.31.27.100:22-147.75.109.163:55954.service. 
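The kubelet failures above come down to /var/lib/kubelet/config.yaml not existing yet at this point in boot; kubelet only starts cleanly later, once provisioning has put that file in place. Purely as an illustration, and assuming nothing about the configuration this node is eventually given (which the log does not show), a minimal KubeletConfiguration stub could be written to that path like so:

# Hypothetical illustration only: drop a minimal KubeletConfiguration stub at
# the path named in the error. The real file on this node is provisioned by
# other tooling and its contents are not visible in this log.
from pathlib import Path

CONFIG_PATH = Path("/var/lib/kubelet/config.yaml")

STUB = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
"""

CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
CONFIG_PATH.write_text(STUB)
print(f"wrote {CONFIG_PATH}")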
Sep 13 00:49:47.832111 sshd[1973]: Accepted publickey for core from 147.75.109.163 port 55954 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:47.833523 sshd[1973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:47.837909 systemd-logind[1723]: New session 4 of user core. Sep 13 00:49:47.838709 systemd[1]: Started session-4.scope. Sep 13 00:49:47.965986 sshd[1973]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:47.968586 systemd[1]: sshd@3-172.31.27.100:22-147.75.109.163:55954.service: Deactivated successfully. Sep 13 00:49:47.969754 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:49:47.969766 systemd-logind[1723]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:49:47.970722 systemd-logind[1723]: Removed session 4. Sep 13 00:49:47.987742 systemd[1]: Started sshd@4-172.31.27.100:22-147.75.109.163:55964.service. Sep 13 00:49:48.138783 sshd[1980]: Accepted publickey for core from 147.75.109.163 port 55964 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:48.140328 sshd[1980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:48.145542 systemd-logind[1723]: New session 5 of user core. Sep 13 00:49:48.146231 systemd[1]: Started session-5.scope. Sep 13 00:49:48.265177 sshd[1980]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:48.268060 systemd[1]: sshd@4-172.31.27.100:22-147.75.109.163:55964.service: Deactivated successfully. Sep 13 00:49:48.268763 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:49:48.270242 systemd-logind[1723]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:49:48.271217 systemd-logind[1723]: Removed session 5. Sep 13 00:49:48.288723 systemd[1]: Started sshd@5-172.31.27.100:22-147.75.109.163:55968.service. Sep 13 00:49:48.444061 sshd[1987]: Accepted publickey for core from 147.75.109.163 port 55968 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:48.445799 sshd[1987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:48.451484 systemd[1]: Started session-6.scope. Sep 13 00:49:48.451782 systemd-logind[1723]: New session 6 of user core. Sep 13 00:49:48.577781 sshd[1987]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:48.581035 systemd[1]: sshd@5-172.31.27.100:22-147.75.109.163:55968.service: Deactivated successfully. Sep 13 00:49:48.582538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:49:48.582683 systemd[1]: Stopped kubelet.service. Sep 13 00:49:48.584097 systemd[1]: Starting kubelet.service... Sep 13 00:49:48.584341 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:49:48.585328 systemd-logind[1723]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:49:48.586509 systemd-logind[1723]: Removed session 6. Sep 13 00:49:48.600103 systemd[1]: Started sshd@6-172.31.27.100:22-147.75.109.163:55978.service. Sep 13 00:49:48.750937 sshd[1997]: Accepted publickey for core from 147.75.109.163 port 55978 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:49:48.752373 sshd[1997]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:48.759085 systemd[1]: Started session-7.scope. Sep 13 00:49:48.760707 systemd-logind[1723]: New session 7 of user core. Sep 13 00:49:48.877640 systemd[1]: Started kubelet.service. 
Sep 13 00:49:48.893766 sudo[2001]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:49:48.894128 sudo[2001]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:49:48.914717 systemd[1]: Starting coreos-metadata.service... Sep 13 00:49:48.965352 kubelet[2006]: E0913 00:49:48.965310 2006 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:49:48.970231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:49:48.970449 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:49:49.021930 coreos-metadata[2016]: Sep 13 00:49:49.021 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 13 00:49:49.022803 coreos-metadata[2016]: Sep 13 00:49:49.022 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Sep 13 00:49:49.023550 coreos-metadata[2016]: Sep 13 00:49:49.023 INFO Fetch successful Sep 13 00:49:49.023621 coreos-metadata[2016]: Sep 13 00:49:49.023 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Sep 13 00:49:49.024173 coreos-metadata[2016]: Sep 13 00:49:49.024 INFO Fetch successful Sep 13 00:49:49.024173 coreos-metadata[2016]: Sep 13 00:49:49.024 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Sep 13 00:49:49.024737 coreos-metadata[2016]: Sep 13 00:49:49.024 INFO Fetch successful Sep 13 00:49:49.025089 coreos-metadata[2016]: Sep 13 00:49:49.025 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Sep 13 00:49:49.025886 coreos-metadata[2016]: Sep 13 00:49:49.025 INFO Fetch successful Sep 13 00:49:49.025886 coreos-metadata[2016]: Sep 13 00:49:49.025 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Sep 13 00:49:49.026529 coreos-metadata[2016]: Sep 13 00:49:49.026 INFO Fetch successful Sep 13 00:49:49.026597 coreos-metadata[2016]: Sep 13 00:49:49.026 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Sep 13 00:49:49.027570 coreos-metadata[2016]: Sep 13 00:49:49.027 INFO Fetch successful Sep 13 00:49:49.027570 coreos-metadata[2016]: Sep 13 00:49:49.027 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Sep 13 00:49:49.028272 coreos-metadata[2016]: Sep 13 00:49:49.028 INFO Fetch successful Sep 13 00:49:49.028331 coreos-metadata[2016]: Sep 13 00:49:49.028 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Sep 13 00:49:49.034463 coreos-metadata[2016]: Sep 13 00:49:49.034 INFO Fetch successful Sep 13 00:49:49.047427 systemd[1]: Finished coreos-metadata.service. Sep 13 00:49:50.062745 systemd[1]: Stopped kubelet.service. Sep 13 00:49:50.065477 systemd[1]: Starting kubelet.service... Sep 13 00:49:50.095566 systemd[1]: Reloading. 
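The coreos-metadata entries above talk to the instance metadata service in two steps: a PUT to the token endpoint, then GETs against the 2019-10-01 metadata tree for each field. A minimal sketch of the same flow; the URLs and paths are the ones in the log, while the token header names are the standard IMDSv2 ones and are assumed here rather than shown in the log:

# Sketch of the IMDSv2 flow coreos-metadata performs above: PUT for a session
# token, then GET each metadata path with that token. Header names are the
# standard IMDSv2 ones (assumed). Only works when run on an EC2 instance.
import urllib.request

IMDS = "http://169.254.169.254"

token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

for path in ("instance-id", "instance-type", "local-ipv4", "public-ipv4",
             "placement/availability-zone", "hostname", "public-hostname"):
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(path, urllib.request.urlopen(req, timeout=2).read().decode())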
Sep 13 00:49:50.231318 /usr/lib/systemd/system-generators/torcx-generator[2076]: time="2025-09-13T00:49:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:49:50.232705 /usr/lib/systemd/system-generators/torcx-generator[2076]: time="2025-09-13T00:49:50Z" level=info msg="torcx already run" Sep 13 00:49:50.368730 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:49:50.368756 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:49:50.389949 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:49:50.491557 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:49:50.491672 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:49:50.492122 systemd[1]: Stopped kubelet.service. Sep 13 00:49:50.494571 systemd[1]: Starting kubelet.service... Sep 13 00:49:50.694742 systemd[1]: Started kubelet.service. Sep 13 00:49:50.740919 kubelet[2148]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:49:50.740919 kubelet[2148]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:49:50.740919 kubelet[2148]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:49:50.741422 kubelet[2148]: I0913 00:49:50.741014 2148 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:49:51.077149 kubelet[2148]: I0913 00:49:51.076638 2148 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:49:51.077149 kubelet[2148]: I0913 00:49:51.076672 2148 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:49:51.077149 kubelet[2148]: I0913 00:49:51.077061 2148 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:49:51.125420 kubelet[2148]: I0913 00:49:51.125383 2148 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:49:51.135062 kubelet[2148]: E0913 00:49:51.135012 2148 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:49:51.135062 kubelet[2148]: I0913 00:49:51.135056 2148 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 13 00:49:51.139676 kubelet[2148]: I0913 00:49:51.139642 2148 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:49:51.140054 kubelet[2148]: I0913 00:49:51.140031 2148 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:49:51.140218 kubelet[2148]: I0913 00:49:51.140179 2148 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:49:51.140455 kubelet[2148]: I0913 00:49:51.140219 2148 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.27.100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:49:51.140604 kubelet[2148]: I0913 00:49:51.140464 2148 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:49:51.140604 kubelet[2148]: I0913 00:49:51.140479 2148 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:49:51.140691 kubelet[2148]: I0913 00:49:51.140605 2148 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:49:51.156578 kubelet[2148]: I0913 00:49:51.156210 2148 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:49:51.156578 kubelet[2148]: I0913 00:49:51.156257 2148 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:49:51.156578 kubelet[2148]: I0913 00:49:51.156287 2148 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:49:51.156578 kubelet[2148]: I0913 00:49:51.156306 2148 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:49:51.158045 kubelet[2148]: E0913 00:49:51.157735 2148 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:51.158045 kubelet[2148]: E0913 00:49:51.157779 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:51.166319 kubelet[2148]: I0913 00:49:51.166283 2148 kuberuntime_manager.go:262] "Container runtime 
initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:49:51.166813 kubelet[2148]: I0913 00:49:51.166763 2148 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:49:51.169241 kubelet[2148]: W0913 00:49:51.169213 2148 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:49:51.177212 kubelet[2148]: W0913 00:49:51.177184 2148 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.27.100" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Sep 13 00:49:51.177536 kubelet[2148]: E0913 00:49:51.177517 2148 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.27.100\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 13 00:49:51.180946 kubelet[2148]: I0913 00:49:51.180734 2148 server.go:1274] "Started kubelet" Sep 13 00:49:51.188068 kubelet[2148]: W0913 00:49:51.188036 2148 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Sep 13 00:49:51.188245 kubelet[2148]: E0913 00:49:51.188226 2148 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 13 00:49:51.188337 kubelet[2148]: I0913 00:49:51.188317 2148 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:49:51.192892 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 13 00:49:51.193040 kubelet[2148]: I0913 00:49:51.193015 2148 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:49:51.194304 kubelet[2148]: I0913 00:49:51.194257 2148 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:49:51.194615 kubelet[2148]: I0913 00:49:51.194593 2148 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:49:51.196265 kubelet[2148]: I0913 00:49:51.196238 2148 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:49:51.200200 kubelet[2148]: I0913 00:49:51.200174 2148 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:49:51.204218 kubelet[2148]: I0913 00:49:51.204192 2148 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:49:51.204736 kubelet[2148]: E0913 00:49:51.204685 2148 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.27.100\" not found" Sep 13 00:49:51.206445 kubelet[2148]: I0913 00:49:51.206424 2148 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:49:51.206625 kubelet[2148]: I0913 00:49:51.206615 2148 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:49:51.228933 kubelet[2148]: E0913 00:49:51.228723 2148 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.27.100\" not found" node="172.31.27.100" Sep 13 00:49:51.232428 kubelet[2148]: I0913 00:49:51.231799 2148 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:49:51.232428 kubelet[2148]: I0913 00:49:51.231938 2148 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:49:51.234562 kubelet[2148]: I0913 00:49:51.234538 2148 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:49:51.247134 kubelet[2148]: E0913 00:49:51.247103 2148 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:49:51.256611 kubelet[2148]: I0913 00:49:51.256590 2148 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:49:51.256806 kubelet[2148]: I0913 00:49:51.256787 2148 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:49:51.256942 kubelet[2148]: I0913 00:49:51.256932 2148 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:49:51.259167 kubelet[2148]: I0913 00:49:51.259141 2148 policy_none.go:49] "None policy: Start" Sep 13 00:49:51.259960 kubelet[2148]: I0913 00:49:51.259943 2148 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:49:51.260081 kubelet[2148]: I0913 00:49:51.260073 2148 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:49:51.268932 kubelet[2148]: I0913 00:49:51.268904 2148 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:49:51.269303 kubelet[2148]: I0913 00:49:51.269289 2148 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:49:51.269433 kubelet[2148]: I0913 00:49:51.269397 2148 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:49:51.271562 kubelet[2148]: I0913 00:49:51.271544 2148 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:49:51.273548 kubelet[2148]: E0913 00:49:51.273529 2148 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.27.100\" not found" Sep 13 00:49:51.371088 kubelet[2148]: I0913 00:49:51.371008 2148 kubelet_node_status.go:72] "Attempting to register node" node="172.31.27.100" Sep 13 00:49:51.377932 kubelet[2148]: I0913 00:49:51.377902 2148 kubelet_node_status.go:75] "Successfully registered node" node="172.31.27.100" Sep 13 00:49:51.378112 kubelet[2148]: E0913 00:49:51.378099 2148 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.27.100\": node \"172.31.27.100\" not found" Sep 13 00:49:51.394348 kubelet[2148]: E0913 00:49:51.394245 2148 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.27.100\" not found" Sep 13 00:49:51.395229 kubelet[2148]: I0913 00:49:51.395183 2148 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:49:51.396476 kubelet[2148]: I0913 00:49:51.396442 2148 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:49:51.396476 kubelet[2148]: I0913 00:49:51.396475 2148 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:49:51.396575 kubelet[2148]: I0913 00:49:51.396493 2148 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:49:51.396575 kubelet[2148]: E0913 00:49:51.396537 2148 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 13 00:49:51.494936 kubelet[2148]: E0913 00:49:51.494895 2148 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.27.100\" not found" Sep 13 00:49:51.595701 kubelet[2148]: E0913 00:49:51.595634 2148 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.27.100\" not found" Sep 13 00:49:51.696209 kubelet[2148]: E0913 00:49:51.696063 2148 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.27.100\" not found" Sep 13 00:49:51.729143 sudo[2001]: pam_unix(sudo:session): session closed for user root Sep 13 00:49:51.752475 sshd[1997]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:51.755389 systemd[1]: sshd@6-172.31.27.100:22-147.75.109.163:55978.service: Deactivated successfully. Sep 13 00:49:51.756182 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:49:51.757808 systemd-logind[1723]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:49:51.758952 systemd-logind[1723]: Removed session 7. Sep 13 00:49:51.797096 kubelet[2148]: E0913 00:49:51.797062 2148 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.27.100\" not found" Sep 13 00:49:51.897765 kubelet[2148]: E0913 00:49:51.897727 2148 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.27.100\" not found" Sep 13 00:49:51.998600 kubelet[2148]: E0913 00:49:51.998477 2148 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.27.100\" not found" Sep 13 00:49:52.078914 kubelet[2148]: I0913 00:49:52.078831 2148 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 13 00:49:52.079051 kubelet[2148]: W0913 00:49:52.079027 2148 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 13 00:49:52.079092 kubelet[2148]: W0913 00:49:52.079062 2148 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 13 00:49:52.079123 kubelet[2148]: W0913 00:49:52.079094 2148 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 13 00:49:52.099081 kubelet[2148]: E0913 00:49:52.099042 2148 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.27.100\" not found" Sep 13 00:49:52.158299 kubelet[2148]: E0913 00:49:52.158257 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:52.199390 kubelet[2148]: E0913 00:49:52.199323 2148 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.27.100\" not found" Sep 13 00:49:52.300437 kubelet[2148]: I0913 00:49:52.300269 2148 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 13 00:49:52.300987 env[1732]: time="2025-09-13T00:49:52.300952596Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:49:52.301423 kubelet[2148]: I0913 00:49:52.301401 2148 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 13 00:49:53.159003 kubelet[2148]: I0913 00:49:53.158965 2148 apiserver.go:52] "Watching apiserver" Sep 13 00:49:53.159528 kubelet[2148]: E0913 00:49:53.159396 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:53.207580 kubelet[2148]: I0913 00:49:53.207543 2148 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:49:53.219774 kubelet[2148]: I0913 00:49:53.219722 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfrjq\" (UniqueName: \"kubernetes.io/projected/f6a22801-c639-4634-af63-b8e156758506-kube-api-access-gfrjq\") pod \"kube-proxy-49dxk\" (UID: \"f6a22801-c639-4634-af63-b8e156758506\") " pod="kube-system/kube-proxy-49dxk" Sep 13 00:49:53.219928 kubelet[2148]: I0913 00:49:53.219800 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-bpf-maps\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.219928 kubelet[2148]: I0913 00:49:53.219824 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-hostproc\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.219928 kubelet[2148]: I0913 00:49:53.219841 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-xtables-lock\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.219928 kubelet[2148]: I0913 00:49:53.219876 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-host-proc-sys-kernel\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.219928 kubelet[2148]: I0913 00:49:53.219891 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f6a22801-c639-4634-af63-b8e156758506-kube-proxy\") pod \"kube-proxy-49dxk\" (UID: \"f6a22801-c639-4634-af63-b8e156758506\") " pod="kube-system/kube-proxy-49dxk" Sep 13 00:49:53.219928 kubelet[2148]: I0913 00:49:53.219908 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-host-proc-sys-net\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.220105 kubelet[2148]: I0913 00:49:53.219922 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvq6\" (UniqueName: \"kubernetes.io/projected/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-kube-api-access-dwvq6\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.220105 kubelet[2148]: I0913 00:49:53.219939 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6a22801-c639-4634-af63-b8e156758506-xtables-lock\") pod \"kube-proxy-49dxk\" (UID: \"f6a22801-c639-4634-af63-b8e156758506\") " pod="kube-system/kube-proxy-49dxk" Sep 13 00:49:53.220105 kubelet[2148]: I0913 00:49:53.219967 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cilium-run\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.220105 kubelet[2148]: I0913 00:49:53.219982 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cni-path\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.220105 kubelet[2148]: I0913 00:49:53.219996 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-etc-cni-netd\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.220105 kubelet[2148]: I0913 00:49:53.220013 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-lib-modules\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.220264 kubelet[2148]: I0913 00:49:53.220028 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-clustermesh-secrets\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.220264 kubelet[2148]: I0913 00:49:53.220043 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cilium-config-path\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.220264 kubelet[2148]: I0913 00:49:53.220074 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cilium-cgroup\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " 
pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.220264 kubelet[2148]: I0913 00:49:53.220093 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-hubble-tls\") pod \"cilium-f7bwp\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " pod="kube-system/cilium-f7bwp" Sep 13 00:49:53.220264 kubelet[2148]: I0913 00:49:53.220110 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6a22801-c639-4634-af63-b8e156758506-lib-modules\") pod \"kube-proxy-49dxk\" (UID: \"f6a22801-c639-4634-af63-b8e156758506\") " pod="kube-system/kube-proxy-49dxk" Sep 13 00:49:53.321647 kubelet[2148]: I0913 00:49:53.321611 2148 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:49:53.472432 env[1732]: time="2025-09-13T00:49:53.472314095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49dxk,Uid:f6a22801-c639-4634-af63-b8e156758506,Namespace:kube-system,Attempt:0,}" Sep 13 00:49:53.475245 env[1732]: time="2025-09-13T00:49:53.475195051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f7bwp,Uid:3075bd64-39aa-41b5-ac6f-b765a39a0d2e,Namespace:kube-system,Attempt:0,}" Sep 13 00:49:53.952278 env[1732]: time="2025-09-13T00:49:53.952169291Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:53.953385 env[1732]: time="2025-09-13T00:49:53.953334634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:53.957483 env[1732]: time="2025-09-13T00:49:53.957435139Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:53.961849 env[1732]: time="2025-09-13T00:49:53.961802576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:53.962780 env[1732]: time="2025-09-13T00:49:53.962746841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:53.965845 env[1732]: time="2025-09-13T00:49:53.965803653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:53.966829 env[1732]: time="2025-09-13T00:49:53.966795321Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:53.967556 env[1732]: time="2025-09-13T00:49:53.967525192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:49:54.001236 env[1732]: time="2025-09-13T00:49:53.991799626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:49:54.001236 env[1732]: time="2025-09-13T00:49:53.991840974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:49:54.001236 env[1732]: time="2025-09-13T00:49:53.991865333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:49:54.001236 env[1732]: time="2025-09-13T00:49:53.992080209Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3bbce3eec2d3845d5f8d284e02c7aab51dbdff5b52692e37c5eba58806f7be6a pid=2206 runtime=io.containerd.runc.v2 Sep 13 00:49:54.001474 env[1732]: time="2025-09-13T00:49:53.995147194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:49:54.001474 env[1732]: time="2025-09-13T00:49:53.995178590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:49:54.001474 env[1732]: time="2025-09-13T00:49:53.995188638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:49:54.001474 env[1732]: time="2025-09-13T00:49:53.995333846Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf pid=2211 runtime=io.containerd.runc.v2 Sep 13 00:49:54.074469 env[1732]: time="2025-09-13T00:49:54.074422212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49dxk,Uid:f6a22801-c639-4634-af63-b8e156758506,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bbce3eec2d3845d5f8d284e02c7aab51dbdff5b52692e37c5eba58806f7be6a\"" Sep 13 00:49:54.077541 env[1732]: time="2025-09-13T00:49:54.077503743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:49:54.080303 env[1732]: time="2025-09-13T00:49:54.080249578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f7bwp,Uid:3075bd64-39aa-41b5-ac6f-b765a39a0d2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\"" Sep 13 00:49:54.159631 kubelet[2148]: E0913 00:49:54.159575 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:54.331370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2826092915.mount: Deactivated successfully. Sep 13 00:49:55.159866 kubelet[2148]: E0913 00:49:55.159770 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:55.167087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191285909.mount: Deactivated successfully. 
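The reconciler_common entries above list every volume kubelet attaches before the two sandboxes start: hostPath mounts for cilium-f7bwp (bpf-maps, hostproc, cni-path, etc-cni-netd, lib-modules, xtables-lock, cilium-run, cilium-cgroup, host-proc-sys-net, host-proc-sys-kernel), a ConfigMap (cilium-config-path), a Secret (clustermesh-secrets), and projected service-account tokens, plus the kube-proxy ConfigMap and hostPath volumes. The following is a rough Go sketch of the kind of volume list in a pod spec that produces such entries; the volume names are taken from the log, while the host paths and the ConfigMap/Secret object names are illustrative assumptions, not read from this log.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // hostPathVolume builds a hostPath-backed volume. Only the volume names
    // appear in the kubelet log above; the paths here are assumptions.
    func hostPathVolume(name, path string) corev1.Volume {
        return corev1.Volume{
            Name:         name,
            VolumeSource: corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: path}},
        }
    }

    func main() {
        volumes := []corev1.Volume{
            hostPathVolume("bpf-maps", "/sys/fs/bpf"),
            hostPathVolume("hostproc", "/proc"),
            hostPathVolume("cni-path", "/opt/cni/bin"),
            hostPathVolume("etc-cni-netd", "/etc/cni/net.d"),
            hostPathVolume("lib-modules", "/lib/modules"),
            hostPathVolume("xtables-lock", "/run/xtables.lock"),
            {
                // ConfigMap name "cilium-config" is an assumption; the log only
                // shows the volume name cilium-config-path.
                Name: "cilium-config-path",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"},
                    },
                },
            },
            {
                // Secret name "cilium-clustermesh" is likewise an assumption.
                Name: "clustermesh-secrets",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"},
                },
            },
        }
        for _, v := range volumes {
            fmt.Println(v.Name)
        }
    }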
Sep 13 00:49:55.829312 env[1732]: time="2025-09-13T00:49:55.829240999Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:55.831708 env[1732]: time="2025-09-13T00:49:55.831665358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:55.833630 env[1732]: time="2025-09-13T00:49:55.833594752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:55.835083 env[1732]: time="2025-09-13T00:49:55.835050767Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:49:55.835403 env[1732]: time="2025-09-13T00:49:55.835370589Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:49:55.842404 env[1732]: time="2025-09-13T00:49:55.842215189Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:49:55.843415 env[1732]: time="2025-09-13T00:49:55.843375220Z" level=info msg="CreateContainer within sandbox \"3bbce3eec2d3845d5f8d284e02c7aab51dbdff5b52692e37c5eba58806f7be6a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:49:55.857332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548894261.mount: Deactivated successfully. Sep 13 00:49:55.865406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2148578933.mount: Deactivated successfully. 
Sep 13 00:49:55.872811 env[1732]: time="2025-09-13T00:49:55.872666262Z" level=info msg="CreateContainer within sandbox \"3bbce3eec2d3845d5f8d284e02c7aab51dbdff5b52692e37c5eba58806f7be6a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00192aa0738a7ce8d4cf0c1ec31a1dbfe7d56139351b6b6f35f1990af8c06419\"" Sep 13 00:49:55.874111 env[1732]: time="2025-09-13T00:49:55.874061145Z" level=info msg="StartContainer for \"00192aa0738a7ce8d4cf0c1ec31a1dbfe7d56139351b6b6f35f1990af8c06419\"" Sep 13 00:49:55.930652 env[1732]: time="2025-09-13T00:49:55.930604285Z" level=info msg="StartContainer for \"00192aa0738a7ce8d4cf0c1ec31a1dbfe7d56139351b6b6f35f1990af8c06419\" returns successfully" Sep 13 00:49:56.160977 kubelet[2148]: E0913 00:49:56.160666 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:57.161397 kubelet[2148]: E0913 00:49:57.161343 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:58.162026 kubelet[2148]: E0913 00:49:58.161953 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:49:59.162648 kubelet[2148]: E0913 00:49:59.162608 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:00.163604 kubelet[2148]: E0913 00:50:00.163524 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:01.164074 kubelet[2148]: E0913 00:50:01.164026 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:01.375422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1902021472.mount: Deactivated successfully. Sep 13 00:50:02.169983 kubelet[2148]: E0913 00:50:02.169920 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:03.171170 kubelet[2148]: E0913 00:50:03.171089 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:04.171495 kubelet[2148]: E0913 00:50:04.171444 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:05.171795 kubelet[2148]: E0913 00:50:05.171723 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:05.468586 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 13 00:50:06.172193 kubelet[2148]: E0913 00:50:06.172081 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:06.955581 env[1732]: time="2025-09-13T00:50:06.955522119Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:06.958596 env[1732]: time="2025-09-13T00:50:06.958553520Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:06.960876 env[1732]: time="2025-09-13T00:50:06.960810448Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:06.961606 env[1732]: time="2025-09-13T00:50:06.961566596Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 00:50:06.964618 env[1732]: time="2025-09-13T00:50:06.964583953Z" level=info msg="CreateContainer within sandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:50:06.979162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2624361155.mount: Deactivated successfully. Sep 13 00:50:06.988574 env[1732]: time="2025-09-13T00:50:06.988435478Z" level=info msg="CreateContainer within sandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc\"" Sep 13 00:50:06.989963 env[1732]: time="2025-09-13T00:50:06.989913895Z" level=info msg="StartContainer for \"a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc\"" Sep 13 00:50:07.067278 env[1732]: time="2025-09-13T00:50:07.067182129Z" level=info msg="StartContainer for \"a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc\" returns successfully" Sep 13 00:50:07.172904 kubelet[2148]: E0913 00:50:07.172757 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:07.223296 env[1732]: time="2025-09-13T00:50:07.223174104Z" level=info msg="shim disconnected" id=a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc Sep 13 00:50:07.223296 env[1732]: time="2025-09-13T00:50:07.223231736Z" level=warning msg="cleaning up after shim disconnected" id=a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc namespace=k8s.io Sep 13 00:50:07.223296 env[1732]: time="2025-09-13T00:50:07.223243444Z" level=info msg="cleaning up dead shim" Sep 13 00:50:07.232201 env[1732]: time="2025-09-13T00:50:07.232154442Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2499 runtime=io.containerd.runc.v2\n" Sep 13 00:50:07.449297 env[1732]: time="2025-09-13T00:50:07.449254735Z" level=info msg="CreateContainer within sandbox 
\"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:50:07.468926 kubelet[2148]: I0913 00:50:07.468571 2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-49dxk" podStartSLOduration=14.70530258 podStartE2EDuration="16.468552469s" podCreationTimestamp="2025-09-13 00:49:51 +0000 UTC" firstStartedPulling="2025-09-13 00:49:54.076700066 +0000 UTC m=+3.372708372" lastFinishedPulling="2025-09-13 00:49:55.839949971 +0000 UTC m=+5.135958261" observedRunningTime="2025-09-13 00:49:56.425275303 +0000 UTC m=+5.721283616" watchObservedRunningTime="2025-09-13 00:50:07.468552469 +0000 UTC m=+16.764560781" Sep 13 00:50:07.469564 env[1732]: time="2025-09-13T00:50:07.469516813Z" level=info msg="CreateContainer within sandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0\"" Sep 13 00:50:07.470094 env[1732]: time="2025-09-13T00:50:07.470060052Z" level=info msg="StartContainer for \"dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0\"" Sep 13 00:50:07.523047 env[1732]: time="2025-09-13T00:50:07.522576275Z" level=info msg="StartContainer for \"dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0\" returns successfully" Sep 13 00:50:07.533527 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:50:07.533791 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:50:07.534770 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:50:07.536516 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:50:07.554560 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:50:07.578176 env[1732]: time="2025-09-13T00:50:07.578122629Z" level=info msg="shim disconnected" id=dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0 Sep 13 00:50:07.578176 env[1732]: time="2025-09-13T00:50:07.578173826Z" level=warning msg="cleaning up after shim disconnected" id=dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0 namespace=k8s.io Sep 13 00:50:07.578176 env[1732]: time="2025-09-13T00:50:07.578183618Z" level=info msg="cleaning up dead shim" Sep 13 00:50:07.587566 env[1732]: time="2025-09-13T00:50:07.587518734Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2564 runtime=io.containerd.runc.v2\n" Sep 13 00:50:07.974423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc-rootfs.mount: Deactivated successfully. Sep 13 00:50:08.173750 kubelet[2148]: E0913 00:50:08.173701 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:08.453840 env[1732]: time="2025-09-13T00:50:08.453799302Z" level=info msg="CreateContainer within sandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:50:08.470542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2528612148.mount: Deactivated successfully. 
Sep 13 00:50:08.484398 env[1732]: time="2025-09-13T00:50:08.484227725Z" level=info msg="CreateContainer within sandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa\"" Sep 13 00:50:08.488223 env[1732]: time="2025-09-13T00:50:08.488170644Z" level=info msg="StartContainer for \"6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa\"" Sep 13 00:50:08.550285 env[1732]: time="2025-09-13T00:50:08.549846067Z" level=info msg="StartContainer for \"6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa\" returns successfully" Sep 13 00:50:08.589106 env[1732]: time="2025-09-13T00:50:08.589064479Z" level=info msg="shim disconnected" id=6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa Sep 13 00:50:08.589518 env[1732]: time="2025-09-13T00:50:08.589463949Z" level=warning msg="cleaning up after shim disconnected" id=6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa namespace=k8s.io Sep 13 00:50:08.589518 env[1732]: time="2025-09-13T00:50:08.589490184Z" level=info msg="cleaning up dead shim" Sep 13 00:50:08.600282 env[1732]: time="2025-09-13T00:50:08.600227375Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2620 runtime=io.containerd.runc.v2\n" Sep 13 00:50:08.974285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa-rootfs.mount: Deactivated successfully. Sep 13 00:50:09.174155 kubelet[2148]: E0913 00:50:09.174088 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:09.458060 env[1732]: time="2025-09-13T00:50:09.458010430Z" level=info msg="CreateContainer within sandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:50:09.475881 env[1732]: time="2025-09-13T00:50:09.475780593Z" level=info msg="CreateContainer within sandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a\"" Sep 13 00:50:09.477677 env[1732]: time="2025-09-13T00:50:09.477636771Z" level=info msg="StartContainer for \"ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a\"" Sep 13 00:50:09.544221 env[1732]: time="2025-09-13T00:50:09.544167103Z" level=info msg="StartContainer for \"ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a\" returns successfully" Sep 13 00:50:09.568807 env[1732]: time="2025-09-13T00:50:09.568727784Z" level=info msg="shim disconnected" id=ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a Sep 13 00:50:09.568807 env[1732]: time="2025-09-13T00:50:09.568800798Z" level=warning msg="cleaning up after shim disconnected" id=ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a namespace=k8s.io Sep 13 00:50:09.568807 env[1732]: time="2025-09-13T00:50:09.568814582Z" level=info msg="cleaning up dead shim" Sep 13 00:50:09.578226 env[1732]: time="2025-09-13T00:50:09.578178192Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2676 runtime=io.containerd.runc.v2\n" Sep 13 00:50:09.975729 
systemd[1]: run-containerd-runc-k8s.io-ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a-runc.Rb2z4J.mount: Deactivated successfully. Sep 13 00:50:09.975941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a-rootfs.mount: Deactivated successfully. Sep 13 00:50:10.174634 kubelet[2148]: E0913 00:50:10.174569 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:10.462804 env[1732]: time="2025-09-13T00:50:10.462754854Z" level=info msg="CreateContainer within sandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:50:10.478804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount978598291.mount: Deactivated successfully. Sep 13 00:50:10.488451 env[1732]: time="2025-09-13T00:50:10.488311029Z" level=info msg="CreateContainer within sandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\"" Sep 13 00:50:10.489282 env[1732]: time="2025-09-13T00:50:10.489229738Z" level=info msg="StartContainer for \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\"" Sep 13 00:50:10.561865 env[1732]: time="2025-09-13T00:50:10.561793924Z" level=info msg="StartContainer for \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\" returns successfully" Sep 13 00:50:10.744590 kubelet[2148]: I0913 00:50:10.744483 2148 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:50:11.033918 kernel: Initializing XFRM netlink socket Sep 13 00:50:11.157342 kubelet[2148]: E0913 00:50:11.157295 2148 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:11.175455 kubelet[2148]: E0913 00:50:11.175318 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:11.488024 kubelet[2148]: I0913 00:50:11.487666 2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f7bwp" podStartSLOduration=7.6067788400000005 podStartE2EDuration="20.487647476s" podCreationTimestamp="2025-09-13 00:49:51 +0000 UTC" firstStartedPulling="2025-09-13 00:49:54.082117755 +0000 UTC m=+3.378126058" lastFinishedPulling="2025-09-13 00:50:06.962986387 +0000 UTC m=+16.258994694" observedRunningTime="2025-09-13 00:50:11.487565275 +0000 UTC m=+20.783573588" watchObservedRunningTime="2025-09-13 00:50:11.487647476 +0000 UTC m=+20.783655792" Sep 13 00:50:12.176142 kubelet[2148]: E0913 00:50:12.176091 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:12.557040 amazon-ssm-agent[1709]: 2025-09-13 00:50:12 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Sep 13 00:50:12.698294 (udev-worker)[2779]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:50:12.699008 (udev-worker)[2780]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:50:12.699590 systemd-networkd[1415]: cilium_host: Link UP Sep 13 00:50:12.700924 systemd-networkd[1415]: cilium_net: Link UP Sep 13 00:50:12.703560 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 00:50:12.703604 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:50:12.701904 systemd-networkd[1415]: cilium_net: Gained carrier Sep 13 00:50:12.704440 systemd-networkd[1415]: cilium_host: Gained carrier Sep 13 00:50:12.834712 systemd-networkd[1415]: cilium_vxlan: Link UP Sep 13 00:50:12.834720 systemd-networkd[1415]: cilium_vxlan: Gained carrier Sep 13 00:50:12.865996 systemd-networkd[1415]: cilium_net: Gained IPv6LL Sep 13 00:50:13.010002 systemd-networkd[1415]: cilium_host: Gained IPv6LL Sep 13 00:50:13.077893 kernel: NET: Registered PF_ALG protocol family Sep 13 00:50:13.176640 kubelet[2148]: E0913 00:50:13.176511 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:13.868319 systemd-networkd[1415]: lxc_health: Link UP Sep 13 00:50:13.876307 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:50:13.877163 systemd-networkd[1415]: lxc_health: Gained carrier Sep 13 00:50:13.894883 (udev-worker)[2829]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:50:13.994184 systemd-networkd[1415]: cilium_vxlan: Gained IPv6LL Sep 13 00:50:14.177605 kubelet[2148]: E0913 00:50:14.177435 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:15.082083 systemd-networkd[1415]: lxc_health: Gained IPv6LL Sep 13 00:50:15.178596 kubelet[2148]: E0913 00:50:15.178556 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:15.197879 kubelet[2148]: I0913 00:50:15.197801 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx8pk\" (UniqueName: \"kubernetes.io/projected/f4748fc2-8684-482f-9359-1c6d81fcc2e4-kube-api-access-dx8pk\") pod \"nginx-deployment-8587fbcb89-lc6v9\" (UID: \"f4748fc2-8684-482f-9359-1c6d81fcc2e4\") " pod="default/nginx-deployment-8587fbcb89-lc6v9" Sep 13 00:50:15.418513 env[1732]: time="2025-09-13T00:50:15.418370465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-lc6v9,Uid:f4748fc2-8684-482f-9359-1c6d81fcc2e4,Namespace:default,Attempt:0,}" Sep 13 00:50:15.535318 systemd-networkd[1415]: lxc6777380ab2b6: Link UP Sep 13 00:50:15.536880 kernel: eth0: renamed from tmp6e7b9 Sep 13 00:50:15.547023 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:50:15.547142 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6777380ab2b6: link becomes ready Sep 13 00:50:15.546592 systemd-networkd[1415]: lxc6777380ab2b6: Gained carrier Sep 13 00:50:16.180271 kubelet[2148]: E0913 00:50:16.180208 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:17.181705 kubelet[2148]: E0913 00:50:17.181658 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:17.322033 systemd-networkd[1415]: lxc6777380ab2b6: Gained IPv6LL Sep 13 00:50:18.183196 kubelet[2148]: E0913 00:50:18.183151 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 
00:50:18.806024 env[1732]: time="2025-09-13T00:50:18.805943648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:50:18.806024 env[1732]: time="2025-09-13T00:50:18.805989618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:50:18.806559 env[1732]: time="2025-09-13T00:50:18.806505986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:50:18.806925 env[1732]: time="2025-09-13T00:50:18.806850299Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e7b973f019dab37027d3c93c209df28d1082f315d5c879008be1ec252675395 pid=3197 runtime=io.containerd.runc.v2 Sep 13 00:50:18.832208 systemd[1]: run-containerd-runc-k8s.io-6e7b973f019dab37027d3c93c209df28d1082f315d5c879008be1ec252675395-runc.Y2DNLE.mount: Deactivated successfully. Sep 13 00:50:18.881626 env[1732]: time="2025-09-13T00:50:18.881590796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-lc6v9,Uid:f4748fc2-8684-482f-9359-1c6d81fcc2e4,Namespace:default,Attempt:0,} returns sandbox id \"6e7b973f019dab37027d3c93c209df28d1082f315d5c879008be1ec252675395\"" Sep 13 00:50:18.883712 env[1732]: time="2025-09-13T00:50:18.883683318Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 13 00:50:19.183990 kubelet[2148]: E0913 00:50:19.183874 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:19.745157 update_engine[1725]: I0913 00:50:19.745095 1725 update_attempter.cc:509] Updating boot flags... Sep 13 00:50:20.184334 kubelet[2148]: E0913 00:50:20.184287 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:21.185184 kubelet[2148]: E0913 00:50:21.185115 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:21.418850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1314001661.mount: Deactivated successfully. 
Sep 13 00:50:21.673988 kubelet[2148]: I0913 00:50:21.673017 2148 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:50:22.185902 kubelet[2148]: E0913 00:50:22.185846 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:23.038722 env[1732]: time="2025-09-13T00:50:23.038664352Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:23.041128 env[1732]: time="2025-09-13T00:50:23.041088039Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:23.042936 env[1732]: time="2025-09-13T00:50:23.042898829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:23.044968 env[1732]: time="2025-09-13T00:50:23.044927266Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:23.045680 env[1732]: time="2025-09-13T00:50:23.045639691Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\"" Sep 13 00:50:23.048351 env[1732]: time="2025-09-13T00:50:23.048305433Z" level=info msg="CreateContainer within sandbox \"6e7b973f019dab37027d3c93c209df28d1082f315d5c879008be1ec252675395\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 13 00:50:23.061551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2455790404.mount: Deactivated successfully. Sep 13 00:50:23.068629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3537011206.mount: Deactivated successfully. 
Sep 13 00:50:23.075991 env[1732]: time="2025-09-13T00:50:23.075942751Z" level=info msg="CreateContainer within sandbox \"6e7b973f019dab37027d3c93c209df28d1082f315d5c879008be1ec252675395\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"0535569446f22978aa712d7636c03e6514177b822ef13f664186a23577f0746f\"" Sep 13 00:50:23.077005 env[1732]: time="2025-09-13T00:50:23.076976830Z" level=info msg="StartContainer for \"0535569446f22978aa712d7636c03e6514177b822ef13f664186a23577f0746f\"" Sep 13 00:50:23.133778 env[1732]: time="2025-09-13T00:50:23.133696804Z" level=info msg="StartContainer for \"0535569446f22978aa712d7636c03e6514177b822ef13f664186a23577f0746f\" returns successfully" Sep 13 00:50:23.187758 kubelet[2148]: E0913 00:50:23.187708 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:23.499843 kubelet[2148]: I0913 00:50:23.499712 2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-lc6v9" podStartSLOduration=5.335977698 podStartE2EDuration="9.499694248s" podCreationTimestamp="2025-09-13 00:50:14 +0000 UTC" firstStartedPulling="2025-09-13 00:50:18.883222227 +0000 UTC m=+28.179230521" lastFinishedPulling="2025-09-13 00:50:23.04693878 +0000 UTC m=+32.342947071" observedRunningTime="2025-09-13 00:50:23.49949326 +0000 UTC m=+32.795501572" watchObservedRunningTime="2025-09-13 00:50:23.499694248 +0000 UTC m=+32.795702572" Sep 13 00:50:24.188433 kubelet[2148]: E0913 00:50:24.188378 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:25.189146 kubelet[2148]: E0913 00:50:25.189090 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:26.190088 kubelet[2148]: E0913 00:50:26.190041 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:27.190793 kubelet[2148]: E0913 00:50:27.190690 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:28.191190 kubelet[2148]: E0913 00:50:28.191134 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:29.192049 kubelet[2148]: E0913 00:50:29.191967 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:30.193156 kubelet[2148]: E0913 00:50:30.193112 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:31.156892 kubelet[2148]: E0913 00:50:31.156828 2148 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:31.194025 kubelet[2148]: E0913 00:50:31.193983 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:32.194837 kubelet[2148]: E0913 00:50:32.194793 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:33.195749 kubelet[2148]: E0913 00:50:33.195698 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:33.223195 kubelet[2148]: I0913 00:50:33.223144 2148 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtx5n\" (UniqueName: \"kubernetes.io/projected/d5df944d-47fe-4f4e-8710-50edcf01e802-kube-api-access-vtx5n\") pod \"nfs-server-provisioner-0\" (UID: \"d5df944d-47fe-4f4e-8710-50edcf01e802\") " pod="default/nfs-server-provisioner-0" Sep 13 00:50:33.223489 kubelet[2148]: I0913 00:50:33.223463 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d5df944d-47fe-4f4e-8710-50edcf01e802-data\") pod \"nfs-server-provisioner-0\" (UID: \"d5df944d-47fe-4f4e-8710-50edcf01e802\") " pod="default/nfs-server-provisioner-0" Sep 13 00:50:33.411494 env[1732]: time="2025-09-13T00:50:33.411447922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d5df944d-47fe-4f4e-8710-50edcf01e802,Namespace:default,Attempt:0,}" Sep 13 00:50:33.535344 systemd-networkd[1415]: lxc400fcd64cd84: Link UP Sep 13 00:50:33.537655 (udev-worker)[3404]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:50:33.542953 kernel: eth0: renamed from tmp6daf4 Sep 13 00:50:33.559986 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:50:33.560086 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc400fcd64cd84: link becomes ready Sep 13 00:50:33.557347 systemd-networkd[1415]: lxc400fcd64cd84: Gained carrier Sep 13 00:50:33.730824 env[1732]: time="2025-09-13T00:50:33.730745678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:50:33.731020 env[1732]: time="2025-09-13T00:50:33.730833203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:50:33.731020 env[1732]: time="2025-09-13T00:50:33.730891405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:50:33.731126 env[1732]: time="2025-09-13T00:50:33.731060213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6daf4e8772da920096d64c297a746289ca8774d0a3feb3912f6ab80dd8f6b7c2 pid=3419 runtime=io.containerd.runc.v2 Sep 13 00:50:33.799365 env[1732]: time="2025-09-13T00:50:33.799321712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d5df944d-47fe-4f4e-8710-50edcf01e802,Namespace:default,Attempt:0,} returns sandbox id \"6daf4e8772da920096d64c297a746289ca8774d0a3feb3912f6ab80dd8f6b7c2\"" Sep 13 00:50:33.801220 env[1732]: time="2025-09-13T00:50:33.801193148Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 13 00:50:34.196883 kubelet[2148]: E0913 00:50:34.196754 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:34.986151 systemd-networkd[1415]: lxc400fcd64cd84: Gained IPv6LL Sep 13 00:50:35.197288 kubelet[2148]: E0913 00:50:35.197212 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:36.123221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount945973655.mount: Deactivated successfully. 
Sep 13 00:50:36.198386 kubelet[2148]: E0913 00:50:36.198347 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:37.198897 kubelet[2148]: E0913 00:50:37.198817 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:38.192266 env[1732]: time="2025-09-13T00:50:38.192196423Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:38.195455 env[1732]: time="2025-09-13T00:50:38.195408941Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:38.197922 env[1732]: time="2025-09-13T00:50:38.197882861Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:38.199126 kubelet[2148]: E0913 00:50:38.199075 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:38.199921 env[1732]: time="2025-09-13T00:50:38.199887676Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:38.200880 env[1732]: time="2025-09-13T00:50:38.200829080Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Sep 13 00:50:38.203464 env[1732]: time="2025-09-13T00:50:38.203431001Z" level=info msg="CreateContainer within sandbox \"6daf4e8772da920096d64c297a746289ca8774d0a3feb3912f6ab80dd8f6b7c2\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 13 00:50:38.214932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2559119996.mount: Deactivated successfully. 
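Each PullImage line in this log (kube-proxy, cilium, nginx, and below nfs-provisioner) resolves a tag to the image's content-addressed ID, which is the "returns image reference sha256:..." value. A rough Go sketch of the same call against containerd's CRI image service follows; the socket path and the insecure local connection are assumptions about this node's setup.

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed containerd CRI socket path on a Flatcar node.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        images := runtimeapi.NewImageServiceClient(conn)
        resp, err := images.PullImage(context.Background(), &runtimeapi.PullImageRequest{
            Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8"},
        })
        if err != nil {
            log.Fatal(err)
        }
        // The kubelet pull logged below returned
        // sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4 for this tag.
        fmt.Println("image reference:", resp.ImageRef)
    }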
Sep 13 00:50:38.226079 env[1732]: time="2025-09-13T00:50:38.226023866Z" level=info msg="CreateContainer within sandbox \"6daf4e8772da920096d64c297a746289ca8774d0a3feb3912f6ab80dd8f6b7c2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"951e98b86af72920ec0c1ed0433eaf8f44191e98f128f764e0d113b181bd655a\"" Sep 13 00:50:38.226802 env[1732]: time="2025-09-13T00:50:38.226768090Z" level=info msg="StartContainer for \"951e98b86af72920ec0c1ed0433eaf8f44191e98f128f764e0d113b181bd655a\"" Sep 13 00:50:38.311426 env[1732]: time="2025-09-13T00:50:38.311376610Z" level=info msg="StartContainer for \"951e98b86af72920ec0c1ed0433eaf8f44191e98f128f764e0d113b181bd655a\" returns successfully" Sep 13 00:50:38.559693 kubelet[2148]: I0913 00:50:38.559616 2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.158176155 podStartE2EDuration="5.559597615s" podCreationTimestamp="2025-09-13 00:50:33 +0000 UTC" firstStartedPulling="2025-09-13 00:50:33.800767949 +0000 UTC m=+43.096776239" lastFinishedPulling="2025-09-13 00:50:38.202189387 +0000 UTC m=+47.498197699" observedRunningTime="2025-09-13 00:50:38.559503455 +0000 UTC m=+47.855511768" watchObservedRunningTime="2025-09-13 00:50:38.559597615 +0000 UTC m=+47.855605927" Sep 13 00:50:39.199330 kubelet[2148]: E0913 00:50:39.199278 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:40.200028 kubelet[2148]: E0913 00:50:40.199974 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:41.200394 kubelet[2148]: E0913 00:50:41.200320 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:42.201179 kubelet[2148]: E0913 00:50:42.201119 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:43.202346 kubelet[2148]: E0913 00:50:43.202288 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:44.203520 kubelet[2148]: E0913 00:50:44.203477 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:45.204081 kubelet[2148]: E0913 00:50:45.204027 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:46.204799 kubelet[2148]: E0913 00:50:46.204746 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:47.205494 kubelet[2148]: E0913 00:50:47.205438 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:47.935014 kubelet[2148]: I0913 00:50:47.934968 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztzs5\" (UniqueName: \"kubernetes.io/projected/d0767df9-1eba-4a9c-a31d-890e3df50be7-kube-api-access-ztzs5\") pod \"test-pod-1\" (UID: \"d0767df9-1eba-4a9c-a31d-890e3df50be7\") " pod="default/test-pod-1" Sep 13 00:50:47.935191 kubelet[2148]: I0913 00:50:47.935024 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5af1f553-c3ec-4725-a1f3-8ded93920a36\" (UniqueName: 
\"kubernetes.io/nfs/d0767df9-1eba-4a9c-a31d-890e3df50be7-pvc-5af1f553-c3ec-4725-a1f3-8ded93920a36\") pod \"test-pod-1\" (UID: \"d0767df9-1eba-4a9c-a31d-890e3df50be7\") " pod="default/test-pod-1" Sep 13 00:50:48.123887 kernel: FS-Cache: Loaded Sep 13 00:50:48.175033 kernel: RPC: Registered named UNIX socket transport module. Sep 13 00:50:48.175258 kernel: RPC: Registered udp transport module. Sep 13 00:50:48.175291 kernel: RPC: Registered tcp transport module. Sep 13 00:50:48.177485 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Sep 13 00:50:48.206080 kubelet[2148]: E0913 00:50:48.205945 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:48.247889 kernel: FS-Cache: Netfs 'nfs' registered for caching Sep 13 00:50:48.447195 kernel: NFS: Registering the id_resolver key type Sep 13 00:50:48.447349 kernel: Key type id_resolver registered Sep 13 00:50:48.447382 kernel: Key type id_legacy registered Sep 13 00:50:48.558043 nfsidmap[3535]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Sep 13 00:50:48.561904 nfsidmap[3536]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Sep 13 00:50:48.723385 env[1732]: time="2025-09-13T00:50:48.723333766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d0767df9-1eba-4a9c-a31d-890e3df50be7,Namespace:default,Attempt:0,}" Sep 13 00:50:48.753662 (udev-worker)[3529]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:50:48.754340 (udev-worker)[3537]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:50:48.755558 systemd-networkd[1415]: lxc2ac2e5a6aa7a: Link UP Sep 13 00:50:48.763887 kernel: eth0: renamed from tmpb975c Sep 13 00:50:48.771635 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:50:48.771729 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2ac2e5a6aa7a: link becomes ready Sep 13 00:50:48.771471 systemd-networkd[1415]: lxc2ac2e5a6aa7a: Gained carrier Sep 13 00:50:49.106934 env[1732]: time="2025-09-13T00:50:49.106822471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:50:49.107132 env[1732]: time="2025-09-13T00:50:49.106944533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:50:49.107132 env[1732]: time="2025-09-13T00:50:49.106976808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:50:49.107284 env[1732]: time="2025-09-13T00:50:49.107203128Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b975cd6582045aaf805d85151a863c22d2f0b5d1d84a6764f8bd3a075dc7e1a0 pid=3562 runtime=io.containerd.runc.v2 Sep 13 00:50:49.139653 systemd[1]: run-containerd-runc-k8s.io-b975cd6582045aaf805d85151a863c22d2f0b5d1d84a6764f8bd3a075dc7e1a0-runc.79v0tm.mount: Deactivated successfully. 
Sep 13 00:50:49.190434 env[1732]: time="2025-09-13T00:50:49.189868373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d0767df9-1eba-4a9c-a31d-890e3df50be7,Namespace:default,Attempt:0,} returns sandbox id \"b975cd6582045aaf805d85151a863c22d2f0b5d1d84a6764f8bd3a075dc7e1a0\"" Sep 13 00:50:49.191661 env[1732]: time="2025-09-13T00:50:49.191606561Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 13 00:50:49.206534 kubelet[2148]: E0913 00:50:49.206478 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:49.535596 env[1732]: time="2025-09-13T00:50:49.534944592Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:49.537472 env[1732]: time="2025-09-13T00:50:49.537427246Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:49.539555 env[1732]: time="2025-09-13T00:50:49.539509495Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:49.541717 env[1732]: time="2025-09-13T00:50:49.541679183Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:50:49.542367 env[1732]: time="2025-09-13T00:50:49.542330960Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\"" Sep 13 00:50:49.545481 env[1732]: time="2025-09-13T00:50:49.545439563Z" level=info msg="CreateContainer within sandbox \"b975cd6582045aaf805d85151a863c22d2f0b5d1d84a6764f8bd3a075dc7e1a0\" for container &ContainerMetadata{Name:test,Attempt:0,}" Sep 13 00:50:49.561822 env[1732]: time="2025-09-13T00:50:49.561760582Z" level=info msg="CreateContainer within sandbox \"b975cd6582045aaf805d85151a863c22d2f0b5d1d84a6764f8bd3a075dc7e1a0\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"384444b0753cf1ed6d35f0c55fd4c3ebb06697fc5479796157090e9c1abcbbe5\"" Sep 13 00:50:49.562347 env[1732]: time="2025-09-13T00:50:49.562316689Z" level=info msg="StartContainer for \"384444b0753cf1ed6d35f0c55fd4c3ebb06697fc5479796157090e9c1abcbbe5\"" Sep 13 00:50:49.616722 env[1732]: time="2025-09-13T00:50:49.616662881Z" level=info msg="StartContainer for \"384444b0753cf1ed6d35f0c55fd4c3ebb06697fc5479796157090e9c1abcbbe5\" returns successfully" Sep 13 00:50:50.026123 systemd-networkd[1415]: lxc2ac2e5a6aa7a: Gained IPv6LL Sep 13 00:50:50.111320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943838046.mount: Deactivated successfully. 
Sep 13 00:50:50.207673 kubelet[2148]: E0913 00:50:50.207617 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:50.585750 kubelet[2148]: I0913 00:50:50.585690 2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.233185255 podStartE2EDuration="17.585671944s" podCreationTimestamp="2025-09-13 00:50:33 +0000 UTC" firstStartedPulling="2025-09-13 00:50:49.191261364 +0000 UTC m=+58.487269672" lastFinishedPulling="2025-09-13 00:50:49.543748054 +0000 UTC m=+58.839756361" observedRunningTime="2025-09-13 00:50:50.585471083 +0000 UTC m=+59.881479396" watchObservedRunningTime="2025-09-13 00:50:50.585671944 +0000 UTC m=+59.881680256" Sep 13 00:50:51.156813 kubelet[2148]: E0913 00:50:51.156765 2148 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:51.208760 kubelet[2148]: E0913 00:50:51.208705 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:52.209874 kubelet[2148]: E0913 00:50:52.209796 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:53.210651 kubelet[2148]: E0913 00:50:53.210591 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:54.211678 kubelet[2148]: E0913 00:50:54.211603 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:55.212203 kubelet[2148]: E0913 00:50:55.212162 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:56.212914 kubelet[2148]: E0913 00:50:56.212850 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:57.213923 kubelet[2148]: E0913 00:50:57.213876 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:58.092964 systemd[1]: run-containerd-runc-k8s.io-42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604-runc.jpLCnz.mount: Deactivated successfully. 
Sep 13 00:50:58.214290 kubelet[2148]: E0913 00:50:58.214233 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:58.279988 env[1732]: time="2025-09-13T00:50:58.279828643Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:50:58.317720 env[1732]: time="2025-09-13T00:50:58.317667659Z" level=info msg="StopContainer for \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\" with timeout 2 (s)" Sep 13 00:50:58.318189 env[1732]: time="2025-09-13T00:50:58.318162253Z" level=info msg="Stop container \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\" with signal terminated" Sep 13 00:50:58.325543 systemd-networkd[1415]: lxc_health: Link DOWN Sep 13 00:50:58.325668 systemd-networkd[1415]: lxc_health: Lost carrier Sep 13 00:50:58.372089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604-rootfs.mount: Deactivated successfully. Sep 13 00:50:58.385582 env[1732]: time="2025-09-13T00:50:58.385522488Z" level=info msg="shim disconnected" id=42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604 Sep 13 00:50:58.385582 env[1732]: time="2025-09-13T00:50:58.385578651Z" level=warning msg="cleaning up after shim disconnected" id=42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604 namespace=k8s.io Sep 13 00:50:58.385937 env[1732]: time="2025-09-13T00:50:58.385590919Z" level=info msg="cleaning up dead shim" Sep 13 00:50:58.394162 env[1732]: time="2025-09-13T00:50:58.394113342Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3700 runtime=io.containerd.runc.v2\n" Sep 13 00:50:58.396062 env[1732]: time="2025-09-13T00:50:58.396019857Z" level=info msg="StopContainer for \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\" returns successfully" Sep 13 00:50:58.396787 env[1732]: time="2025-09-13T00:50:58.396717145Z" level=info msg="StopPodSandbox for \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\"" Sep 13 00:50:58.396787 env[1732]: time="2025-09-13T00:50:58.396775732Z" level=info msg="Container to stop \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:50:58.396787 env[1732]: time="2025-09-13T00:50:58.396791006Z" level=info msg="Container to stop \"dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:50:58.397001 env[1732]: time="2025-09-13T00:50:58.396801255Z" level=info msg="Container to stop \"6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:50:58.397001 env[1732]: time="2025-09-13T00:50:58.396811976Z" level=info msg="Container to stop \"ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:50:58.397001 env[1732]: time="2025-09-13T00:50:58.396822980Z" level=info msg="Container to stop \"a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Sep 13 00:50:58.400822 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf-shm.mount: Deactivated successfully. Sep 13 00:50:58.430194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf-rootfs.mount: Deactivated successfully. Sep 13 00:50:58.441170 env[1732]: time="2025-09-13T00:50:58.441114483Z" level=info msg="shim disconnected" id=f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf Sep 13 00:50:58.441548 env[1732]: time="2025-09-13T00:50:58.441501904Z" level=warning msg="cleaning up after shim disconnected" id=f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf namespace=k8s.io Sep 13 00:50:58.441548 env[1732]: time="2025-09-13T00:50:58.441526635Z" level=info msg="cleaning up dead shim" Sep 13 00:50:58.450973 env[1732]: time="2025-09-13T00:50:58.450927271Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:50:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3733 runtime=io.containerd.runc.v2\n" Sep 13 00:50:58.452237 env[1732]: time="2025-09-13T00:50:58.452197055Z" level=info msg="TearDown network for sandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" successfully" Sep 13 00:50:58.452237 env[1732]: time="2025-09-13T00:50:58.452230948Z" level=info msg="StopPodSandbox for \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" returns successfully" Sep 13 00:50:58.595460 kubelet[2148]: I0913 00:50:58.595434 2148 scope.go:117] "RemoveContainer" containerID="42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604" Sep 13 00:50:58.597147 env[1732]: time="2025-09-13T00:50:58.597104782Z" level=info msg="RemoveContainer for \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\"" Sep 13 00:50:58.600533 env[1732]: time="2025-09-13T00:50:58.600349621Z" level=info msg="RemoveContainer for \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\" returns successfully" Sep 13 00:50:58.600782 kubelet[2148]: I0913 00:50:58.600750 2148 scope.go:117] "RemoveContainer" containerID="ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a" Sep 13 00:50:58.602077 env[1732]: time="2025-09-13T00:50:58.601813988Z" level=info msg="RemoveContainer for \"ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a\"" Sep 13 00:50:58.604884 env[1732]: time="2025-09-13T00:50:58.604833472Z" level=info msg="RemoveContainer for \"ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a\" returns successfully" Sep 13 00:50:58.605196 kubelet[2148]: I0913 00:50:58.605179 2148 scope.go:117] "RemoveContainer" containerID="6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa" Sep 13 00:50:58.606370 env[1732]: time="2025-09-13T00:50:58.606342404Z" level=info msg="RemoveContainer for \"6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa\"" Sep 13 00:50:58.606717 kubelet[2148]: I0913 00:50:58.606686 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-xtables-lock\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.606808 kubelet[2148]: I0913 00:50:58.606721 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-clustermesh-secrets\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.606808 kubelet[2148]: I0913 00:50:58.606739 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-bpf-maps\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.606808 kubelet[2148]: I0913 00:50:58.606753 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-host-proc-sys-net\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.606808 kubelet[2148]: I0913 00:50:58.606770 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwvq6\" (UniqueName: \"kubernetes.io/projected/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-kube-api-access-dwvq6\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.606808 kubelet[2148]: I0913 00:50:58.606784 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cilium-run\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.606808 kubelet[2148]: I0913 00:50:58.606797 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cni-path\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.607023 kubelet[2148]: I0913 00:50:58.606810 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-hostproc\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.607023 kubelet[2148]: I0913 00:50:58.606823 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-host-proc-sys-kernel\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.607023 kubelet[2148]: I0913 00:50:58.606839 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-hubble-tls\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.607023 kubelet[2148]: I0913 00:50:58.606872 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-etc-cni-netd\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.607023 kubelet[2148]: I0913 00:50:58.606892 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-lib-modules\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.607023 kubelet[2148]: I0913 00:50:58.606929 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cilium-config-path\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.607180 kubelet[2148]: I0913 00:50:58.606942 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cilium-cgroup\") pod \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\" (UID: \"3075bd64-39aa-41b5-ac6f-b765a39a0d2e\") " Sep 13 00:50:58.607180 kubelet[2148]: I0913 00:50:58.607018 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:58.607180 kubelet[2148]: I0913 00:50:58.607044 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:58.607317 kubelet[2148]: I0913 00:50:58.607303 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-hostproc" (OuterVolumeSpecName: "hostproc") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:58.607369 kubelet[2148]: I0913 00:50:58.607325 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:58.607369 kubelet[2148]: I0913 00:50:58.607340 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:58.607914 kubelet[2148]: I0913 00:50:58.607886 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:58.608151 kubelet[2148]: I0913 00:50:58.608133 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:58.608151 kubelet[2148]: I0913 00:50:58.608159 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cni-path" (OuterVolumeSpecName: "cni-path") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:58.608245 kubelet[2148]: I0913 00:50:58.608182 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:58.608245 kubelet[2148]: I0913 00:50:58.608198 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:50:58.610152 kubelet[2148]: I0913 00:50:58.610128 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:50:58.610329 env[1732]: time="2025-09-13T00:50:58.610295280Z" level=info msg="RemoveContainer for \"6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa\" returns successfully" Sep 13 00:50:58.611998 kubelet[2148]: I0913 00:50:58.611975 2148 scope.go:117] "RemoveContainer" containerID="dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0" Sep 13 00:50:58.613197 kubelet[2148]: I0913 00:50:58.613177 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-kube-api-access-dwvq6" (OuterVolumeSpecName: "kube-api-access-dwvq6") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "kube-api-access-dwvq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:50:58.613729 env[1732]: time="2025-09-13T00:50:58.613427076Z" level=info msg="RemoveContainer for \"dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0\"" Sep 13 00:50:58.615077 kubelet[2148]: I0913 00:50:58.615056 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:50:58.616258 kubelet[2148]: I0913 00:50:58.615511 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3075bd64-39aa-41b5-ac6f-b765a39a0d2e" (UID: "3075bd64-39aa-41b5-ac6f-b765a39a0d2e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:50:58.616326 env[1732]: time="2025-09-13T00:50:58.616187242Z" level=info msg="RemoveContainer for \"dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0\" returns successfully" Sep 13 00:50:58.616366 kubelet[2148]: I0913 00:50:58.616355 2148 scope.go:117] "RemoveContainer" containerID="a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc" Sep 13 00:50:58.617679 env[1732]: time="2025-09-13T00:50:58.617652299Z" level=info msg="RemoveContainer for \"a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc\"" Sep 13 00:50:58.620233 env[1732]: time="2025-09-13T00:50:58.620199923Z" level=info msg="RemoveContainer for \"a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc\" returns successfully" Sep 13 00:50:58.620540 kubelet[2148]: I0913 00:50:58.620361 2148 scope.go:117] "RemoveContainer" containerID="42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604" Sep 13 00:50:58.620951 env[1732]: time="2025-09-13T00:50:58.620844475Z" level=error msg="ContainerStatus for \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\": not found" Sep 13 00:50:58.621106 kubelet[2148]: E0913 00:50:58.621071 2148 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\": not found" containerID="42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604" Sep 13 00:50:58.621173 kubelet[2148]: I0913 00:50:58.621109 2148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604"} err="failed to get container status \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\": rpc error: code = NotFound desc = an error occurred when try to find container \"42f27841aafe7ae4188ae8434bf1b97aa299c5b7580e40c9990ff4e15e63f604\": not found" Sep 13 00:50:58.621213 kubelet[2148]: I0913 00:50:58.621174 2148 scope.go:117] "RemoveContainer" containerID="ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a" Sep 13 00:50:58.621469 env[1732]: time="2025-09-13T00:50:58.621408617Z" level=error msg="ContainerStatus for 
\"ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a\": not found" Sep 13 00:50:58.621702 kubelet[2148]: E0913 00:50:58.621675 2148 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a\": not found" containerID="ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a" Sep 13 00:50:58.621754 kubelet[2148]: I0913 00:50:58.621730 2148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a"} err="failed to get container status \"ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff7accb2a033e75387b7442702690304e6ddc5a2bf00ceae419cf474386f248a\": not found" Sep 13 00:50:58.621786 kubelet[2148]: I0913 00:50:58.621748 2148 scope.go:117] "RemoveContainer" containerID="6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa" Sep 13 00:50:58.622483 env[1732]: time="2025-09-13T00:50:58.622426225Z" level=error msg="ContainerStatus for \"6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa\": not found" Sep 13 00:50:58.622684 kubelet[2148]: E0913 00:50:58.622641 2148 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa\": not found" containerID="6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa" Sep 13 00:50:58.622764 kubelet[2148]: I0913 00:50:58.622685 2148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa"} err="failed to get container status \"6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"6deb4fe425086f041872ebac2e0e2aeedabf3d419968fd1d519f1a3afc2cc3aa\": not found" Sep 13 00:50:58.622764 kubelet[2148]: I0913 00:50:58.622701 2148 scope.go:117] "RemoveContainer" containerID="dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0" Sep 13 00:50:58.623889 env[1732]: time="2025-09-13T00:50:58.622930198Z" level=error msg="ContainerStatus for \"dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0\": not found" Sep 13 00:50:58.625059 kubelet[2148]: E0913 00:50:58.625031 2148 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0\": not found" containerID="dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0" Sep 13 00:50:58.625156 kubelet[2148]: I0913 00:50:58.625060 2148 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0"} err="failed to get container status \"dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd46ca773b9d66fddb621cb3b2cd363f3a81b70de977e61748f5ec375c86d6c0\": not found" Sep 13 00:50:58.625156 kubelet[2148]: I0913 00:50:58.625079 2148 scope.go:117] "RemoveContainer" containerID="a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc" Sep 13 00:50:58.625466 env[1732]: time="2025-09-13T00:50:58.625392748Z" level=error msg="ContainerStatus for \"a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc\": not found" Sep 13 00:50:58.625655 kubelet[2148]: E0913 00:50:58.625631 2148 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc\": not found" containerID="a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc" Sep 13 00:50:58.625735 kubelet[2148]: I0913 00:50:58.625655 2148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc"} err="failed to get container status \"a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9bea9f1d6e332f4db1d95761ec4582cd485ce2df2a674419104ecaf3f362abc\": not found" Sep 13 00:50:58.707846 kubelet[2148]: I0913 00:50:58.707783 2148 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cni-path\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:58.707846 kubelet[2148]: I0913 00:50:58.707818 2148 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-bpf-maps\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:58.707846 kubelet[2148]: I0913 00:50:58.707829 2148 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-host-proc-sys-net\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:58.707846 kubelet[2148]: I0913 00:50:58.707837 2148 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwvq6\" (UniqueName: \"kubernetes.io/projected/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-kube-api-access-dwvq6\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:58.707846 kubelet[2148]: I0913 00:50:58.707847 2148 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cilium-run\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:58.707846 kubelet[2148]: I0913 00:50:58.707879 2148 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-hostproc\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:58.708164 kubelet[2148]: I0913 00:50:58.707887 2148 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-host-proc-sys-kernel\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:58.708164 kubelet[2148]: I0913 00:50:58.707895 2148 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-hubble-tls\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:58.708164 kubelet[2148]: I0913 00:50:58.707902 2148 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cilium-cgroup\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:58.708164 kubelet[2148]: I0913 00:50:58.707909 2148 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-etc-cni-netd\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:58.708164 kubelet[2148]: I0913 00:50:58.707917 2148 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-lib-modules\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:58.708164 kubelet[2148]: I0913 00:50:58.707925 2148 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-cilium-config-path\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:58.708164 kubelet[2148]: I0913 00:50:58.707932 2148 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-xtables-lock\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:58.708164 kubelet[2148]: I0913 00:50:58.707949 2148 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3075bd64-39aa-41b5-ac6f-b765a39a0d2e-clustermesh-secrets\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:50:59.086562 systemd[1]: var-lib-kubelet-pods-3075bd64\x2d39aa\x2d41b5\x2dac6f\x2db765a39a0d2e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddwvq6.mount: Deactivated successfully. Sep 13 00:50:59.086719 systemd[1]: var-lib-kubelet-pods-3075bd64\x2d39aa\x2d41b5\x2dac6f\x2db765a39a0d2e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:50:59.086833 systemd[1]: var-lib-kubelet-pods-3075bd64\x2d39aa\x2d41b5\x2dac6f\x2db765a39a0d2e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 13 00:50:59.215287 kubelet[2148]: E0913 00:50:59.215211 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:50:59.399051 kubelet[2148]: I0913 00:50:59.398946 2148 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3075bd64-39aa-41b5-ac6f-b765a39a0d2e" path="/var/lib/kubelet/pods/3075bd64-39aa-41b5-ac6f-b765a39a0d2e/volumes" Sep 13 00:51:00.216331 kubelet[2148]: E0913 00:51:00.216273 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:01.185743 kubelet[2148]: E0913 00:51:01.185689 2148 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3075bd64-39aa-41b5-ac6f-b765a39a0d2e" containerName="apply-sysctl-overwrites" Sep 13 00:51:01.185743 kubelet[2148]: E0913 00:51:01.185730 2148 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3075bd64-39aa-41b5-ac6f-b765a39a0d2e" containerName="clean-cilium-state" Sep 13 00:51:01.185743 kubelet[2148]: E0913 00:51:01.185743 2148 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3075bd64-39aa-41b5-ac6f-b765a39a0d2e" containerName="mount-cgroup" Sep 13 00:51:01.185743 kubelet[2148]: E0913 00:51:01.185752 2148 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3075bd64-39aa-41b5-ac6f-b765a39a0d2e" containerName="mount-bpf-fs" Sep 13 00:51:01.186193 kubelet[2148]: E0913 00:51:01.185761 2148 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3075bd64-39aa-41b5-ac6f-b765a39a0d2e" containerName="cilium-agent" Sep 13 00:51:01.186193 kubelet[2148]: I0913 00:51:01.185787 2148 memory_manager.go:354] "RemoveStaleState removing state" podUID="3075bd64-39aa-41b5-ac6f-b765a39a0d2e" containerName="cilium-agent" Sep 13 00:51:01.217142 kubelet[2148]: E0913 00:51:01.217063 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:01.303766 kubelet[2148]: E0913 00:51:01.303716 2148 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:51:01.325604 kubelet[2148]: I0913 00:51:01.325551 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9mch\" (UniqueName: \"kubernetes.io/projected/145bb108-9e9e-4a2e-9891-0483b1b5de8b-kube-api-access-v9mch\") pod \"cilium-operator-5d85765b45-7pz6x\" (UID: \"145bb108-9e9e-4a2e-9891-0483b1b5de8b\") " pod="kube-system/cilium-operator-5d85765b45-7pz6x" Sep 13 00:51:01.325795 kubelet[2148]: I0913 00:51:01.325616 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-cgroup\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.325795 kubelet[2148]: I0913 00:51:01.325641 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-etc-cni-netd\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.325795 kubelet[2148]: I0913 00:51:01.325663 2148 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-xtables-lock\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.325795 kubelet[2148]: I0913 00:51:01.325685 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6597395a-1b94-4fea-932c-f8d57edb7b66-hubble-tls\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.325795 kubelet[2148]: I0913 00:51:01.325709 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vpwn\" (UniqueName: \"kubernetes.io/projected/6597395a-1b94-4fea-932c-f8d57edb7b66-kube-api-access-8vpwn\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.325795 kubelet[2148]: I0913 00:51:01.325731 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-host-proc-sys-net\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.326127 kubelet[2148]: I0913 00:51:01.325754 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6597395a-1b94-4fea-932c-f8d57edb7b66-clustermesh-secrets\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.326127 kubelet[2148]: I0913 00:51:01.325780 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-config-path\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.326127 kubelet[2148]: I0913 00:51:01.325801 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-ipsec-secrets\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.326127 kubelet[2148]: I0913 00:51:01.325827 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-host-proc-sys-kernel\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.326127 kubelet[2148]: I0913 00:51:01.325847 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/145bb108-9e9e-4a2e-9891-0483b1b5de8b-cilium-config-path\") pod \"cilium-operator-5d85765b45-7pz6x\" (UID: \"145bb108-9e9e-4a2e-9891-0483b1b5de8b\") " pod="kube-system/cilium-operator-5d85765b45-7pz6x" Sep 13 00:51:01.327149 kubelet[2148]: I0913 00:51:01.325906 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-run\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.327149 kubelet[2148]: I0913 00:51:01.325927 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-bpf-maps\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.327149 kubelet[2148]: I0913 00:51:01.325954 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-lib-modules\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.327149 kubelet[2148]: I0913 00:51:01.325982 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-hostproc\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.327149 kubelet[2148]: I0913 00:51:01.326008 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-cni-path\") pod \"cilium-2mzhq\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " pod="kube-system/cilium-2mzhq" Sep 13 00:51:01.799701 env[1732]: time="2025-09-13T00:51:01.799338725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mzhq,Uid:6597395a-1b94-4fea-932c-f8d57edb7b66,Namespace:kube-system,Attempt:0,}" Sep 13 00:51:01.809003 env[1732]: time="2025-09-13T00:51:01.808835417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7pz6x,Uid:145bb108-9e9e-4a2e-9891-0483b1b5de8b,Namespace:kube-system,Attempt:0,}" Sep 13 00:51:01.869814 env[1732]: time="2025-09-13T00:51:01.869709134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:51:01.869814 env[1732]: time="2025-09-13T00:51:01.869770357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:51:01.877737 env[1732]: time="2025-09-13T00:51:01.877654213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:51:01.880430 env[1732]: time="2025-09-13T00:51:01.880236071Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48 pid=3765 runtime=io.containerd.runc.v2 Sep 13 00:51:01.894382 env[1732]: time="2025-09-13T00:51:01.894278588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:51:01.894570 env[1732]: time="2025-09-13T00:51:01.894410212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:51:01.894570 env[1732]: time="2025-09-13T00:51:01.894446626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:51:01.894735 env[1732]: time="2025-09-13T00:51:01.894678545Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b5205242ee962fc50fe6cbd50a5e97d780add25bd6dc1c0f4534121b41b0fbd pid=3777 runtime=io.containerd.runc.v2 Sep 13 00:51:02.189726 env[1732]: time="2025-09-13T00:51:02.188874115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mzhq,Uid:6597395a-1b94-4fea-932c-f8d57edb7b66,Namespace:kube-system,Attempt:0,} returns sandbox id \"962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48\"" Sep 13 00:51:02.197392 env[1732]: time="2025-09-13T00:51:02.197338844Z" level=info msg="CreateContainer within sandbox \"962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:51:02.217346 kubelet[2148]: E0913 00:51:02.217252 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:02.231080 env[1732]: time="2025-09-13T00:51:02.231026577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7pz6x,Uid:145bb108-9e9e-4a2e-9891-0483b1b5de8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b5205242ee962fc50fe6cbd50a5e97d780add25bd6dc1c0f4534121b41b0fbd\"" Sep 13 00:51:02.235137 env[1732]: time="2025-09-13T00:51:02.235049699Z" level=info msg="CreateContainer within sandbox \"962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6773b201966923cbef5f1ab06c45be982b7a77182e573ee0a3e2170de9a2cbbf\"" Sep 13 00:51:02.235982 env[1732]: time="2025-09-13T00:51:02.235938889Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:51:02.244067 env[1732]: time="2025-09-13T00:51:02.241243852Z" level=info msg="StartContainer for \"6773b201966923cbef5f1ab06c45be982b7a77182e573ee0a3e2170de9a2cbbf\"" Sep 13 00:51:02.577047 env[1732]: time="2025-09-13T00:51:02.576984700Z" level=info msg="StartContainer for \"6773b201966923cbef5f1ab06c45be982b7a77182e573ee0a3e2170de9a2cbbf\" returns successfully" Sep 13 00:51:02.996392 kubelet[2148]: I0913 00:51:02.996118 2148 setters.go:600] "Node became not ready" node="172.31.27.100" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:51:02Z","lastTransitionTime":"2025-09-13T00:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:51:03.039894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6773b201966923cbef5f1ab06c45be982b7a77182e573ee0a3e2170de9a2cbbf-rootfs.mount: Deactivated successfully. 
Sep 13 00:51:03.058503 env[1732]: time="2025-09-13T00:51:03.058069867Z" level=info msg="shim disconnected" id=6773b201966923cbef5f1ab06c45be982b7a77182e573ee0a3e2170de9a2cbbf Sep 13 00:51:03.058503 env[1732]: time="2025-09-13T00:51:03.058500252Z" level=warning msg="cleaning up after shim disconnected" id=6773b201966923cbef5f1ab06c45be982b7a77182e573ee0a3e2170de9a2cbbf namespace=k8s.io Sep 13 00:51:03.059093 env[1732]: time="2025-09-13T00:51:03.058515079Z" level=info msg="cleaning up dead shim" Sep 13 00:51:03.087160 env[1732]: time="2025-09-13T00:51:03.087106973Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:51:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3889 runtime=io.containerd.runc.v2\n" Sep 13 00:51:03.218194 kubelet[2148]: E0913 00:51:03.218098 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:03.498629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1693628477.mount: Deactivated successfully. Sep 13 00:51:03.650832 env[1732]: time="2025-09-13T00:51:03.650783243Z" level=info msg="StopPodSandbox for \"962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48\"" Sep 13 00:51:03.660312 env[1732]: time="2025-09-13T00:51:03.650849861Z" level=info msg="Container to stop \"6773b201966923cbef5f1ab06c45be982b7a77182e573ee0a3e2170de9a2cbbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:51:03.654341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48-shm.mount: Deactivated successfully. Sep 13 00:51:03.689371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48-rootfs.mount: Deactivated successfully. 
Sep 13 00:51:03.702378 env[1732]: time="2025-09-13T00:51:03.702272173Z" level=info msg="shim disconnected" id=962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48 Sep 13 00:51:03.702378 env[1732]: time="2025-09-13T00:51:03.702380829Z" level=warning msg="cleaning up after shim disconnected" id=962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48 namespace=k8s.io Sep 13 00:51:03.703106 env[1732]: time="2025-09-13T00:51:03.702393249Z" level=info msg="cleaning up dead shim" Sep 13 00:51:03.727478 env[1732]: time="2025-09-13T00:51:03.727405713Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:51:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3924 runtime=io.containerd.runc.v2\n" Sep 13 00:51:03.727844 env[1732]: time="2025-09-13T00:51:03.727808027Z" level=info msg="TearDown network for sandbox \"962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48\" successfully" Sep 13 00:51:03.729764 env[1732]: time="2025-09-13T00:51:03.727846442Z" level=info msg="StopPodSandbox for \"962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48\" returns successfully" Sep 13 00:51:03.798330 kubelet[2148]: I0913 00:51:03.798294 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-etc-cni-netd\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.798609 kubelet[2148]: I0913 00:51:03.798592 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-host-proc-sys-net\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.798736 kubelet[2148]: I0913 00:51:03.798722 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6597395a-1b94-4fea-932c-f8d57edb7b66-clustermesh-secrets\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.798843 kubelet[2148]: I0913 00:51:03.798830 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-ipsec-secrets\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.798975 kubelet[2148]: I0913 00:51:03.798961 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-run\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.799091 kubelet[2148]: I0913 00:51:03.799078 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-xtables-lock\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.799198 kubelet[2148]: I0913 00:51:03.799185 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6597395a-1b94-4fea-932c-f8d57edb7b66-hubble-tls\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: 
\"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.799303 kubelet[2148]: I0913 00:51:03.799291 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-config-path\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.799515 kubelet[2148]: I0913 00:51:03.799502 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-host-proc-sys-kernel\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.800479 kubelet[2148]: I0913 00:51:03.800442 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-bpf-maps\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.806066 systemd[1]: var-lib-kubelet-pods-6597395a\x2d1b94\x2d4fea\x2d932c\x2df8d57edb7b66-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:51:03.812430 kubelet[2148]: I0913 00:51:03.812224 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-lib-modules\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.812732 kubelet[2148]: I0913 00:51:03.799623 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:51:03.812872 kubelet[2148]: I0913 00:51:03.799645 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:51:03.812966 kubelet[2148]: I0913 00:51:03.799719 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:51:03.813040 kubelet[2148]: I0913 00:51:03.799743 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:51:03.813104 kubelet[2148]: I0913 00:51:03.799758 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:51:03.813171 kubelet[2148]: I0913 00:51:03.800912 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:51:03.813242 kubelet[2148]: I0913 00:51:03.812281 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:51:03.813732 kubelet[2148]: I0913 00:51:03.812687 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vpwn\" (UniqueName: \"kubernetes.io/projected/6597395a-1b94-4fea-932c-f8d57edb7b66-kube-api-access-8vpwn\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.814482 kubelet[2148]: I0913 00:51:03.814462 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-cni-path\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.814627 kubelet[2148]: I0913 00:51:03.814612 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-cgroup\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.814888 kubelet[2148]: I0913 00:51:03.814874 2148 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-hostproc\") pod \"6597395a-1b94-4fea-932c-f8d57edb7b66\" (UID: \"6597395a-1b94-4fea-932c-f8d57edb7b66\") " Sep 13 00:51:03.815049 kubelet[2148]: I0913 00:51:03.815029 2148 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-xtables-lock\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:03.815138 kubelet[2148]: I0913 00:51:03.815126 2148 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-host-proc-sys-kernel\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:03.815229 kubelet[2148]: I0913 00:51:03.815219 2148 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-bpf-maps\") on node \"172.31.27.100\" DevicePath \"\"" 
Sep 13 00:51:03.815318 kubelet[2148]: I0913 00:51:03.815308 2148 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-lib-modules\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:03.815409 kubelet[2148]: I0913 00:51:03.815399 2148 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-etc-cni-netd\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:03.815501 kubelet[2148]: I0913 00:51:03.815491 2148 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-host-proc-sys-net\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:03.815591 kubelet[2148]: I0913 00:51:03.815581 2148 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-run\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:03.815702 kubelet[2148]: I0913 00:51:03.814764 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:51:03.815803 kubelet[2148]: I0913 00:51:03.814810 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:51:03.815892 kubelet[2148]: I0913 00:51:03.814827 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-cni-path" (OuterVolumeSpecName: "cni-path") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:51:03.815981 kubelet[2148]: I0913 00:51:03.815687 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-hostproc" (OuterVolumeSpecName: "hostproc") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:51:03.823910 systemd[1]: var-lib-kubelet-pods-6597395a\x2d1b94\x2d4fea\x2d932c\x2df8d57edb7b66-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 00:51:03.826706 kubelet[2148]: I0913 00:51:03.826667 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:51:03.828234 kubelet[2148]: I0913 00:51:03.828197 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6597395a-1b94-4fea-932c-f8d57edb7b66-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:51:03.836746 kubelet[2148]: I0913 00:51:03.833525 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6597395a-1b94-4fea-932c-f8d57edb7b66-kube-api-access-8vpwn" (OuterVolumeSpecName: "kube-api-access-8vpwn") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "kube-api-access-8vpwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:51:03.833557 systemd[1]: var-lib-kubelet-pods-6597395a\x2d1b94\x2d4fea\x2d932c\x2df8d57edb7b66-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:51:03.833763 systemd[1]: var-lib-kubelet-pods-6597395a\x2d1b94\x2d4fea\x2d932c\x2df8d57edb7b66-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8vpwn.mount: Deactivated successfully. Sep 13 00:51:03.840052 kubelet[2148]: I0913 00:51:03.840009 2148 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6597395a-1b94-4fea-932c-f8d57edb7b66-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6597395a-1b94-4fea-932c-f8d57edb7b66" (UID: "6597395a-1b94-4fea-932c-f8d57edb7b66"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:51:03.916711 kubelet[2148]: I0913 00:51:03.916671 2148 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-cgroup\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:03.916978 kubelet[2148]: I0913 00:51:03.916963 2148 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vpwn\" (UniqueName: \"kubernetes.io/projected/6597395a-1b94-4fea-932c-f8d57edb7b66-kube-api-access-8vpwn\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:03.917090 kubelet[2148]: I0913 00:51:03.917079 2148 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-cni-path\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:03.917186 kubelet[2148]: I0913 00:51:03.917175 2148 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6597395a-1b94-4fea-932c-f8d57edb7b66-hostproc\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:03.917278 kubelet[2148]: I0913 00:51:03.917268 2148 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6597395a-1b94-4fea-932c-f8d57edb7b66-clustermesh-secrets\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:03.917372 kubelet[2148]: I0913 00:51:03.917362 2148 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-ipsec-secrets\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:03.917461 kubelet[2148]: I0913 00:51:03.917451 2148 reconciler_common.go:293] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6597395a-1b94-4fea-932c-f8d57edb7b66-hubble-tls\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:03.917574 kubelet[2148]: I0913 00:51:03.917562 2148 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6597395a-1b94-4fea-932c-f8d57edb7b66-cilium-config-path\") on node \"172.31.27.100\" DevicePath \"\"" Sep 13 00:51:04.221231 kubelet[2148]: E0913 00:51:04.218769 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:04.356541 env[1732]: time="2025-09-13T00:51:04.356407497Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:04.358427 env[1732]: time="2025-09-13T00:51:04.358387672Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:04.360198 env[1732]: time="2025-09-13T00:51:04.360160658Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:51:04.360824 env[1732]: time="2025-09-13T00:51:04.360790069Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 00:51:04.363413 env[1732]: time="2025-09-13T00:51:04.363375334Z" level=info msg="CreateContainer within sandbox \"1b5205242ee962fc50fe6cbd50a5e97d780add25bd6dc1c0f4534121b41b0fbd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:51:04.378508 env[1732]: time="2025-09-13T00:51:04.378455917Z" level=info msg="CreateContainer within sandbox \"1b5205242ee962fc50fe6cbd50a5e97d780add25bd6dc1c0f4534121b41b0fbd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"22290205ac982624a104900f840843ed0217ec328e583a976da9f8a88c9913a5\"" Sep 13 00:51:04.379363 env[1732]: time="2025-09-13T00:51:04.379325168Z" level=info msg="StartContainer for \"22290205ac982624a104900f840843ed0217ec328e583a976da9f8a88c9913a5\"" Sep 13 00:51:04.439871 env[1732]: time="2025-09-13T00:51:04.437835626Z" level=info msg="StartContainer for \"22290205ac982624a104900f840843ed0217ec328e583a976da9f8a88c9913a5\" returns successfully" Sep 13 00:51:04.658902 kubelet[2148]: I0913 00:51:04.654447 2148 scope.go:117] "RemoveContainer" containerID="6773b201966923cbef5f1ab06c45be982b7a77182e573ee0a3e2170de9a2cbbf" Sep 13 00:51:04.660013 env[1732]: time="2025-09-13T00:51:04.657042625Z" level=info msg="RemoveContainer for \"6773b201966923cbef5f1ab06c45be982b7a77182e573ee0a3e2170de9a2cbbf\"" Sep 13 00:51:04.664953 env[1732]: time="2025-09-13T00:51:04.664907816Z" level=info msg="RemoveContainer for \"6773b201966923cbef5f1ab06c45be982b7a77182e573ee0a3e2170de9a2cbbf\" returns successfully" Sep 13 00:51:04.704076 kubelet[2148]: I0913 00:51:04.704018 2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-7pz6x" 
podStartSLOduration=1.577361953 podStartE2EDuration="3.70400126s" podCreationTimestamp="2025-09-13 00:51:01 +0000 UTC" firstStartedPulling="2025-09-13 00:51:02.2354839 +0000 UTC m=+71.531492206" lastFinishedPulling="2025-09-13 00:51:04.362123222 +0000 UTC m=+73.658131513" observedRunningTime="2025-09-13 00:51:04.6706318 +0000 UTC m=+73.966640112" watchObservedRunningTime="2025-09-13 00:51:04.70400126 +0000 UTC m=+74.000009571" Sep 13 00:51:04.731841 kubelet[2148]: E0913 00:51:04.731805 2148 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6597395a-1b94-4fea-932c-f8d57edb7b66" containerName="mount-cgroup" Sep 13 00:51:04.731841 kubelet[2148]: I0913 00:51:04.731845 2148 memory_manager.go:354] "RemoveStaleState removing state" podUID="6597395a-1b94-4fea-932c-f8d57edb7b66" containerName="mount-cgroup" Sep 13 00:51:04.825341 kubelet[2148]: I0913 00:51:04.825265 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-host-proc-sys-kernel\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825341 kubelet[2148]: I0913 00:51:04.825328 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-lib-modules\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825341 kubelet[2148]: I0913 00:51:04.825344 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-etc-cni-netd\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825561 kubelet[2148]: I0913 00:51:04.825360 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-bpf-maps\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825561 kubelet[2148]: I0913 00:51:04.825375 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-cni-path\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825561 kubelet[2148]: I0913 00:51:04.825391 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-cilium-ipsec-secrets\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825561 kubelet[2148]: I0913 00:51:04.825409 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-hubble-tls\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825561 kubelet[2148]: I0913 00:51:04.825423 2148 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9jcv\" (UniqueName: \"kubernetes.io/projected/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-kube-api-access-r9jcv\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825561 kubelet[2148]: I0913 00:51:04.825448 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-cilium-run\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825731 kubelet[2148]: I0913 00:51:04.825466 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-cilium-cgroup\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825731 kubelet[2148]: I0913 00:51:04.825480 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-xtables-lock\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825731 kubelet[2148]: I0913 00:51:04.825496 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-clustermesh-secrets\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825731 kubelet[2148]: I0913 00:51:04.825511 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-cilium-config-path\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825731 kubelet[2148]: I0913 00:51:04.825527 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-host-proc-sys-net\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:04.825731 kubelet[2148]: I0913 00:51:04.825541 2148 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2cc16eca-9231-4c9f-bf5a-b47bd71937c2-hostproc\") pod \"cilium-n95dp\" (UID: \"2cc16eca-9231-4c9f-bf5a-b47bd71937c2\") " pod="kube-system/cilium-n95dp" Sep 13 00:51:05.037452 env[1732]: time="2025-09-13T00:51:05.036296034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n95dp,Uid:2cc16eca-9231-4c9f-bf5a-b47bd71937c2,Namespace:kube-system,Attempt:0,}" Sep 13 00:51:05.053575 env[1732]: time="2025-09-13T00:51:05.053475877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:51:05.053575 env[1732]: time="2025-09-13T00:51:05.053527687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:51:05.053846 env[1732]: time="2025-09-13T00:51:05.053544126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:51:05.054006 env[1732]: time="2025-09-13T00:51:05.053903690Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8227e1a6fa88d5ebbab5dd0f08033ab978c9f9ff9d434a658cfa2903b8bb51f4 pid=3992 runtime=io.containerd.runc.v2 Sep 13 00:51:05.100194 env[1732]: time="2025-09-13T00:51:05.100152748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n95dp,Uid:2cc16eca-9231-4c9f-bf5a-b47bd71937c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8227e1a6fa88d5ebbab5dd0f08033ab978c9f9ff9d434a658cfa2903b8bb51f4\"" Sep 13 00:51:05.103415 env[1732]: time="2025-09-13T00:51:05.103371398Z" level=info msg="CreateContainer within sandbox \"8227e1a6fa88d5ebbab5dd0f08033ab978c9f9ff9d434a658cfa2903b8bb51f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:51:05.116748 env[1732]: time="2025-09-13T00:51:05.116702645Z" level=info msg="CreateContainer within sandbox \"8227e1a6fa88d5ebbab5dd0f08033ab978c9f9ff9d434a658cfa2903b8bb51f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"efffd19fcc9f9dd72f38e95b90d947fb868fe8c23cdc00c3a51d63e231f5c866\"" Sep 13 00:51:05.117452 env[1732]: time="2025-09-13T00:51:05.117421333Z" level=info msg="StartContainer for \"efffd19fcc9f9dd72f38e95b90d947fb868fe8c23cdc00c3a51d63e231f5c866\"" Sep 13 00:51:05.174923 env[1732]: time="2025-09-13T00:51:05.174737562Z" level=info msg="StartContainer for \"efffd19fcc9f9dd72f38e95b90d947fb868fe8c23cdc00c3a51d63e231f5c866\" returns successfully" Sep 13 00:51:05.219497 kubelet[2148]: E0913 00:51:05.219192 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:05.225501 env[1732]: time="2025-09-13T00:51:05.225450716Z" level=info msg="shim disconnected" id=efffd19fcc9f9dd72f38e95b90d947fb868fe8c23cdc00c3a51d63e231f5c866 Sep 13 00:51:05.225501 env[1732]: time="2025-09-13T00:51:05.225491157Z" level=warning msg="cleaning up after shim disconnected" id=efffd19fcc9f9dd72f38e95b90d947fb868fe8c23cdc00c3a51d63e231f5c866 namespace=k8s.io Sep 13 00:51:05.225501 env[1732]: time="2025-09-13T00:51:05.225500535Z" level=info msg="cleaning up dead shim" Sep 13 00:51:05.234475 env[1732]: time="2025-09-13T00:51:05.234417749Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:51:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4079 runtime=io.containerd.runc.v2\n" Sep 13 00:51:05.399647 kubelet[2148]: I0913 00:51:05.399168 2148 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6597395a-1b94-4fea-932c-f8d57edb7b66" path="/var/lib/kubelet/pods/6597395a-1b94-4fea-932c-f8d57edb7b66/volumes" Sep 13 00:51:05.660546 env[1732]: time="2025-09-13T00:51:05.660346459Z" level=info msg="CreateContainer within sandbox \"8227e1a6fa88d5ebbab5dd0f08033ab978c9f9ff9d434a658cfa2903b8bb51f4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:51:05.686388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1334730473.mount: Deactivated successfully. 
Sep 13 00:51:05.698070 env[1732]: time="2025-09-13T00:51:05.697992432Z" level=info msg="CreateContainer within sandbox \"8227e1a6fa88d5ebbab5dd0f08033ab978c9f9ff9d434a658cfa2903b8bb51f4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"16b93526284222f2669dd6107f50b8a42b4eb23e6eeb824140e7aa791212b4ee\"" Sep 13 00:51:05.698822 env[1732]: time="2025-09-13T00:51:05.698762069Z" level=info msg="StartContainer for \"16b93526284222f2669dd6107f50b8a42b4eb23e6eeb824140e7aa791212b4ee\"" Sep 13 00:51:05.761453 env[1732]: time="2025-09-13T00:51:05.761400700Z" level=info msg="StartContainer for \"16b93526284222f2669dd6107f50b8a42b4eb23e6eeb824140e7aa791212b4ee\" returns successfully" Sep 13 00:51:05.948446 env[1732]: time="2025-09-13T00:51:05.948197961Z" level=info msg="shim disconnected" id=16b93526284222f2669dd6107f50b8a42b4eb23e6eeb824140e7aa791212b4ee Sep 13 00:51:05.948446 env[1732]: time="2025-09-13T00:51:05.948268039Z" level=warning msg="cleaning up after shim disconnected" id=16b93526284222f2669dd6107f50b8a42b4eb23e6eeb824140e7aa791212b4ee namespace=k8s.io Sep 13 00:51:05.948446 env[1732]: time="2025-09-13T00:51:05.948280995Z" level=info msg="cleaning up dead shim" Sep 13 00:51:05.957532 env[1732]: time="2025-09-13T00:51:05.957487399Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:51:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4143 runtime=io.containerd.runc.v2\n" Sep 13 00:51:06.220092 kubelet[2148]: E0913 00:51:06.219971 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:06.304586 kubelet[2148]: E0913 00:51:06.304429 2148 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:51:06.653538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16b93526284222f2669dd6107f50b8a42b4eb23e6eeb824140e7aa791212b4ee-rootfs.mount: Deactivated successfully. Sep 13 00:51:06.664469 env[1732]: time="2025-09-13T00:51:06.664414234Z" level=info msg="CreateContainer within sandbox \"8227e1a6fa88d5ebbab5dd0f08033ab978c9f9ff9d434a658cfa2903b8bb51f4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:51:06.679454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2981588828.mount: Deactivated successfully. Sep 13 00:51:06.686781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851257732.mount: Deactivated successfully. 
Sep 13 00:51:06.692094 env[1732]: time="2025-09-13T00:51:06.692049967Z" level=info msg="CreateContainer within sandbox \"8227e1a6fa88d5ebbab5dd0f08033ab978c9f9ff9d434a658cfa2903b8bb51f4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5d767200219355e9d9a86a53ba25e1554c510a0f6c0c2fedfc83fd15ee8c3bcb\"" Sep 13 00:51:06.692775 env[1732]: time="2025-09-13T00:51:06.692725649Z" level=info msg="StartContainer for \"5d767200219355e9d9a86a53ba25e1554c510a0f6c0c2fedfc83fd15ee8c3bcb\"" Sep 13 00:51:06.753602 env[1732]: time="2025-09-13T00:51:06.753544930Z" level=info msg="StartContainer for \"5d767200219355e9d9a86a53ba25e1554c510a0f6c0c2fedfc83fd15ee8c3bcb\" returns successfully" Sep 13 00:51:06.789639 env[1732]: time="2025-09-13T00:51:06.789565228Z" level=info msg="shim disconnected" id=5d767200219355e9d9a86a53ba25e1554c510a0f6c0c2fedfc83fd15ee8c3bcb Sep 13 00:51:06.789639 env[1732]: time="2025-09-13T00:51:06.789623920Z" level=warning msg="cleaning up after shim disconnected" id=5d767200219355e9d9a86a53ba25e1554c510a0f6c0c2fedfc83fd15ee8c3bcb namespace=k8s.io Sep 13 00:51:06.789639 env[1732]: time="2025-09-13T00:51:06.789633925Z" level=info msg="cleaning up dead shim" Sep 13 00:51:06.797970 env[1732]: time="2025-09-13T00:51:06.797918741Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:51:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4204 runtime=io.containerd.runc.v2\n" Sep 13 00:51:07.220884 kubelet[2148]: E0913 00:51:07.220826 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:07.669158 env[1732]: time="2025-09-13T00:51:07.669109928Z" level=info msg="CreateContainer within sandbox \"8227e1a6fa88d5ebbab5dd0f08033ab978c9f9ff9d434a658cfa2903b8bb51f4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:51:07.689017 env[1732]: time="2025-09-13T00:51:07.683340313Z" level=info msg="CreateContainer within sandbox \"8227e1a6fa88d5ebbab5dd0f08033ab978c9f9ff9d434a658cfa2903b8bb51f4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ce20094f0636aced8e2b6a731e1fb82ec3574feae4f635c814eba478e776b6ae\"" Sep 13 00:51:07.689017 env[1732]: time="2025-09-13T00:51:07.684183588Z" level=info msg="StartContainer for \"ce20094f0636aced8e2b6a731e1fb82ec3574feae4f635c814eba478e776b6ae\"" Sep 13 00:51:07.751492 env[1732]: time="2025-09-13T00:51:07.751436561Z" level=info msg="StartContainer for \"ce20094f0636aced8e2b6a731e1fb82ec3574feae4f635c814eba478e776b6ae\" returns successfully" Sep 13 00:51:07.775204 env[1732]: time="2025-09-13T00:51:07.775116226Z" level=info msg="shim disconnected" id=ce20094f0636aced8e2b6a731e1fb82ec3574feae4f635c814eba478e776b6ae Sep 13 00:51:07.775204 env[1732]: time="2025-09-13T00:51:07.775176221Z" level=warning msg="cleaning up after shim disconnected" id=ce20094f0636aced8e2b6a731e1fb82ec3574feae4f635c814eba478e776b6ae namespace=k8s.io Sep 13 00:51:07.775204 env[1732]: time="2025-09-13T00:51:07.775185613Z" level=info msg="cleaning up dead shim" Sep 13 00:51:07.783163 env[1732]: time="2025-09-13T00:51:07.783118974Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:51:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4262 runtime=io.containerd.runc.v2\n" Sep 13 00:51:08.221339 kubelet[2148]: E0913 00:51:08.221298 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:08.653729 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-ce20094f0636aced8e2b6a731e1fb82ec3574feae4f635c814eba478e776b6ae-rootfs.mount: Deactivated successfully. Sep 13 00:51:08.677530 env[1732]: time="2025-09-13T00:51:08.677492238Z" level=info msg="CreateContainer within sandbox \"8227e1a6fa88d5ebbab5dd0f08033ab978c9f9ff9d434a658cfa2903b8bb51f4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:51:08.694089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount389886847.mount: Deactivated successfully. Sep 13 00:51:08.705606 env[1732]: time="2025-09-13T00:51:08.705549286Z" level=info msg="CreateContainer within sandbox \"8227e1a6fa88d5ebbab5dd0f08033ab978c9f9ff9d434a658cfa2903b8bb51f4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6662661fe239ba8d5b2d8574c028284f2fda63ca4bd5aeb888a2baf07828f3f3\"" Sep 13 00:51:08.706365 env[1732]: time="2025-09-13T00:51:08.706321363Z" level=info msg="StartContainer for \"6662661fe239ba8d5b2d8574c028284f2fda63ca4bd5aeb888a2baf07828f3f3\"" Sep 13 00:51:08.766496 env[1732]: time="2025-09-13T00:51:08.766446996Z" level=info msg="StartContainer for \"6662661fe239ba8d5b2d8574c028284f2fda63ca4bd5aeb888a2baf07828f3f3\" returns successfully" Sep 13 00:51:09.221937 kubelet[2148]: E0913 00:51:09.221883 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:10.222344 kubelet[2148]: E0913 00:51:10.222293 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:10.700777 kubelet[2148]: I0913 00:51:10.700719 2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n95dp" podStartSLOduration=6.700701668 podStartE2EDuration="6.700701668s" podCreationTimestamp="2025-09-13 00:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:51:10.700619053 +0000 UTC m=+79.996627365" watchObservedRunningTime="2025-09-13 00:51:10.700701668 +0000 UTC m=+79.996709979" Sep 13 00:51:11.156702 kubelet[2148]: E0913 00:51:11.156648 2148 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:11.223093 kubelet[2148]: E0913 00:51:11.223041 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:11.711896 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 13 00:51:11.773088 systemd[1]: run-containerd-runc-k8s.io-6662661fe239ba8d5b2d8574c028284f2fda63ca4bd5aeb888a2baf07828f3f3-runc.cskRtN.mount: Deactivated successfully. Sep 13 00:51:12.224193 kubelet[2148]: E0913 00:51:12.224113 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:13.225363 kubelet[2148]: E0913 00:51:13.225291 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:14.225607 kubelet[2148]: E0913 00:51:14.225562 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:15.205516 (udev-worker)[4843]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:51:15.207087 (udev-worker)[4844]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:51:15.217249 systemd-networkd[1415]: lxc_health: Link UP Sep 13 00:51:15.225563 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:51:15.225311 systemd-networkd[1415]: lxc_health: Gained carrier Sep 13 00:51:15.226705 kubelet[2148]: E0913 00:51:15.226660 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:16.227051 kubelet[2148]: E0913 00:51:16.227009 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:16.617349 systemd[1]: run-containerd-runc-k8s.io-6662661fe239ba8d5b2d8574c028284f2fda63ca4bd5aeb888a2baf07828f3f3-runc.PU0td6.mount: Deactivated successfully. Sep 13 00:51:17.162037 systemd-networkd[1415]: lxc_health: Gained IPv6LL Sep 13 00:51:17.228426 kubelet[2148]: E0913 00:51:17.228379 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:18.229170 kubelet[2148]: E0913 00:51:18.229119 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:19.229841 kubelet[2148]: E0913 00:51:19.229766 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:20.230265 kubelet[2148]: E0913 00:51:20.230165 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:21.230995 kubelet[2148]: E0913 00:51:21.230935 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:22.231557 kubelet[2148]: E0913 00:51:22.231500 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:23.232187 kubelet[2148]: E0913 00:51:23.232139 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:24.233423 kubelet[2148]: E0913 00:51:24.233352 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:25.233888 kubelet[2148]: E0913 00:51:25.233820 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:26.234305 kubelet[2148]: E0913 00:51:26.234251 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:27.234469 kubelet[2148]: E0913 00:51:27.234410 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:28.235595 kubelet[2148]: E0913 00:51:28.235542 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:29.236763 kubelet[2148]: E0913 00:51:29.236693 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:30.237755 kubelet[2148]: E0913 00:51:30.237679 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:31.156920 kubelet[2148]: E0913 00:51:31.156876 2148 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 
13 00:51:31.238042 kubelet[2148]: E0913 00:51:31.237999 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:32.238516 kubelet[2148]: E0913 00:51:32.238450 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:33.239181 kubelet[2148]: E0913 00:51:33.239137 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:34.239615 kubelet[2148]: E0913 00:51:34.239560 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:35.240051 kubelet[2148]: E0913 00:51:35.240007 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:36.240949 kubelet[2148]: E0913 00:51:36.240892 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:36.557760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22290205ac982624a104900f840843ed0217ec328e583a976da9f8a88c9913a5-rootfs.mount: Deactivated successfully. Sep 13 00:51:36.567298 env[1732]: time="2025-09-13T00:51:36.567247463Z" level=info msg="shim disconnected" id=22290205ac982624a104900f840843ed0217ec328e583a976da9f8a88c9913a5 Sep 13 00:51:36.567298 env[1732]: time="2025-09-13T00:51:36.567293839Z" level=warning msg="cleaning up after shim disconnected" id=22290205ac982624a104900f840843ed0217ec328e583a976da9f8a88c9913a5 namespace=k8s.io Sep 13 00:51:36.567298 env[1732]: time="2025-09-13T00:51:36.567306189Z" level=info msg="cleaning up dead shim" Sep 13 00:51:36.575742 env[1732]: time="2025-09-13T00:51:36.575694137Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:51:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4959 runtime=io.containerd.runc.v2\n" Sep 13 00:51:36.742773 kubelet[2148]: I0913 00:51:36.742610 2148 scope.go:117] "RemoveContainer" containerID="22290205ac982624a104900f840843ed0217ec328e583a976da9f8a88c9913a5" Sep 13 00:51:36.744771 env[1732]: time="2025-09-13T00:51:36.744731394Z" level=info msg="CreateContainer within sandbox \"1b5205242ee962fc50fe6cbd50a5e97d780add25bd6dc1c0f4534121b41b0fbd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Sep 13 00:51:36.769430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420925246.mount: Deactivated successfully. 
Sep 13 00:51:36.778040 env[1732]: time="2025-09-13T00:51:36.777985059Z" level=info msg="CreateContainer within sandbox \"1b5205242ee962fc50fe6cbd50a5e97d780add25bd6dc1c0f4534121b41b0fbd\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"22083c790eac85a2d0ea4eb4e9e9a757149bac27932de1fedc9013ae1ff34b57\"" Sep 13 00:51:36.778516 env[1732]: time="2025-09-13T00:51:36.778484100Z" level=info msg="StartContainer for \"22083c790eac85a2d0ea4eb4e9e9a757149bac27932de1fedc9013ae1ff34b57\"" Sep 13 00:51:36.837578 env[1732]: time="2025-09-13T00:51:36.837482346Z" level=info msg="StartContainer for \"22083c790eac85a2d0ea4eb4e9e9a757149bac27932de1fedc9013ae1ff34b57\" returns successfully" Sep 13 00:51:37.241455 kubelet[2148]: E0913 00:51:37.241307 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:38.242473 kubelet[2148]: E0913 00:51:38.242430 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:39.243391 kubelet[2148]: E0913 00:51:39.243327 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:40.243798 kubelet[2148]: E0913 00:51:40.243687 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:41.244080 kubelet[2148]: E0913 00:51:41.244028 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:42.244770 kubelet[2148]: E0913 00:51:42.244728 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:42.854340 kubelet[2148]: E0913 00:51:42.854287 2148 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.100?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 13 00:51:43.245418 kubelet[2148]: E0913 00:51:43.245297 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:44.245480 kubelet[2148]: E0913 00:51:44.245432 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:45.245895 kubelet[2148]: E0913 00:51:45.245816 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:46.246238 kubelet[2148]: E0913 00:51:46.246187 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:47.247154 kubelet[2148]: E0913 00:51:47.247093 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:48.247351 kubelet[2148]: E0913 00:51:48.247283 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:49.250873 kubelet[2148]: E0913 00:51:49.250804 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:50.251989 kubelet[2148]: E0913 00:51:50.251933 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Sep 13 00:51:51.156954 kubelet[2148]: E0913 00:51:51.156912 2148 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:51.237204 env[1732]: time="2025-09-13T00:51:51.237167765Z" level=info msg="StopPodSandbox for \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\"" Sep 13 00:51:51.237580 env[1732]: time="2025-09-13T00:51:51.237251660Z" level=info msg="TearDown network for sandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" successfully" Sep 13 00:51:51.237580 env[1732]: time="2025-09-13T00:51:51.237282979Z" level=info msg="StopPodSandbox for \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" returns successfully" Sep 13 00:51:51.237648 env[1732]: time="2025-09-13T00:51:51.237584345Z" level=info msg="RemovePodSandbox for \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\"" Sep 13 00:51:51.237648 env[1732]: time="2025-09-13T00:51:51.237606407Z" level=info msg="Forcibly stopping sandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\"" Sep 13 00:51:51.237701 env[1732]: time="2025-09-13T00:51:51.237664647Z" level=info msg="TearDown network for sandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" successfully" Sep 13 00:51:51.243562 env[1732]: time="2025-09-13T00:51:51.243513510Z" level=info msg="RemovePodSandbox \"f66d5196f0119f8055b1ce3634ae67bf184926b4ddb4cbbefdcb523403d001cf\" returns successfully" Sep 13 00:51:51.244340 env[1732]: time="2025-09-13T00:51:51.244246182Z" level=info msg="StopPodSandbox for \"962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48\"" Sep 13 00:51:51.244462 env[1732]: time="2025-09-13T00:51:51.244387069Z" level=info msg="TearDown network for sandbox \"962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48\" successfully" Sep 13 00:51:51.244462 env[1732]: time="2025-09-13T00:51:51.244451040Z" level=info msg="StopPodSandbox for \"962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48\" returns successfully" Sep 13 00:51:51.244905 env[1732]: time="2025-09-13T00:51:51.244872346Z" level=info msg="RemovePodSandbox for \"962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48\"" Sep 13 00:51:51.245002 env[1732]: time="2025-09-13T00:51:51.244911844Z" level=info msg="Forcibly stopping sandbox \"962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48\"" Sep 13 00:51:51.245049 env[1732]: time="2025-09-13T00:51:51.245004843Z" level=info msg="TearDown network for sandbox \"962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48\" successfully" Sep 13 00:51:51.250877 env[1732]: time="2025-09-13T00:51:51.250474010Z" level=info msg="RemovePodSandbox \"962a71fe9705815bcd24046523c2d6d8434c471d7513e1eaf6a0a3ee436cbc48\" returns successfully" Sep 13 00:51:51.252471 kubelet[2148]: E0913 00:51:51.252442 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:52.252953 kubelet[2148]: E0913 00:51:52.252909 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:52.855323 kubelet[2148]: E0913 00:51:52.855267 2148 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.100?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
Sep 13 00:51:53.253811 kubelet[2148]: E0913 00:51:53.253665 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:54.254633 kubelet[2148]: E0913 00:51:54.254548 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:55.254994 kubelet[2148]: E0913 00:51:55.254950 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:56.255906 kubelet[2148]: E0913 00:51:56.255843 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:57.256761 kubelet[2148]: E0913 00:51:57.256713 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:58.257913 kubelet[2148]: E0913 00:51:58.257840 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:51:59.258270 kubelet[2148]: E0913 00:51:59.258210 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:00.258518 kubelet[2148]: E0913 00:52:00.258441 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:01.258920 kubelet[2148]: E0913 00:52:01.258869 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:02.259372 kubelet[2148]: E0913 00:52:02.259313 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:02.856599 kubelet[2148]: E0913 00:52:02.856541 2148 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.100?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 13 00:52:03.003655 kubelet[2148]: E0913 00:52:03.003601 2148 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.100?timeout=10s\": unexpected EOF" Sep 13 00:52:03.012586 kubelet[2148]: E0913 00:52:03.004127 2148 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.167:6443/api/v1/namespaces/kube-system/events\": unexpected EOF" event="&Event{ObjectMeta:{cilium-operator-5d85765b45-7pz6x.1864b14c825074eb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:cilium-operator-5d85765b45-7pz6x,UID:145bb108-9e9e-4a2e-9891-0483b1b5de8b,APIVersion:v1,ResourceVersion:957,FieldPath:spec.containers{cilium-operator},},Reason:Pulled,Message:Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine,Source:EventSource{Component:kubelet,Host:172.31.27.100,},FirstTimestamp:2025-09-13 00:51:36.743277803 +0000 UTC m=+106.039286115,LastTimestamp:2025-09-13 00:51:36.743277803 +0000 UTC m=+106.039286115,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.27.100,}" Sep 13 00:52:03.023588 kubelet[2148]: E0913 
00:52:03.023540 2148 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.100?timeout=10s\": read tcp 172.31.27.100:50312->172.31.19.167:6443: read: connection reset by peer" Sep 13 00:52:03.023588 kubelet[2148]: I0913 00:52:03.023581 2148 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Sep 13 00:52:03.024634 kubelet[2148]: E0913 00:52:03.024584 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.100?timeout=10s\": dial tcp 172.31.19.167:6443: connect: connection refused" interval="200ms" Sep 13 00:52:03.225744 kubelet[2148]: E0913 00:52:03.225604 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.100?timeout=10s\": dial tcp 172.31.19.167:6443: connect: connection refused" interval="400ms" Sep 13 00:52:03.260030 kubelet[2148]: E0913 00:52:03.259949 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:03.627385 kubelet[2148]: E0913 00:52:03.627324 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.100?timeout=10s\": dial tcp 172.31.19.167:6443: connect: connection refused" interval="800ms" Sep 13 00:52:04.003790 kubelet[2148]: E0913 00:52:04.003607 2148 desired_state_of_world_populator.go:313] "Error processing volume" err="error processing PVC default/test-dynamic-volume-claim: failed to fetch PVC from API server: Get \"https://172.31.19.167:6443/api/v1/namespaces/default/persistentvolumeclaims/test-dynamic-volume-claim\": dial tcp 172.31.19.167:6443: connect: connection refused - error from a previous attempt: unexpected EOF" pod="default/test-pod-1" volumeName="config" Sep 13 00:52:04.007039 kubelet[2148]: I0913 00:52:04.006973 2148 status_manager.go:851] "Failed to get status for pod" podUID="145bb108-9e9e-4a2e-9891-0483b1b5de8b" pod="kube-system/cilium-operator-5d85765b45-7pz6x" err="Get \"https://172.31.19.167:6443/api/v1/namespaces/kube-system/pods/cilium-operator-5d85765b45-7pz6x\": dial tcp 172.31.19.167:6443: connect: connection refused - error from a previous attempt: unexpected EOF" Sep 13 00:52:04.008687 kubelet[2148]: I0913 00:52:04.008631 2148 status_manager.go:851] "Failed to get status for pod" podUID="145bb108-9e9e-4a2e-9891-0483b1b5de8b" pod="kube-system/cilium-operator-5d85765b45-7pz6x" err="Get \"https://172.31.19.167:6443/api/v1/namespaces/kube-system/pods/cilium-operator-5d85765b45-7pz6x\": dial tcp 172.31.19.167:6443: connect: connection refused" Sep 13 00:52:04.009791 kubelet[2148]: I0913 00:52:04.009706 2148 status_manager.go:851] "Failed to get status for pod" podUID="145bb108-9e9e-4a2e-9891-0483b1b5de8b" pod="kube-system/cilium-operator-5d85765b45-7pz6x" err="Get \"https://172.31.19.167:6443/api/v1/namespaces/kube-system/pods/cilium-operator-5d85765b45-7pz6x\": dial tcp 172.31.19.167:6443: connect: connection refused" Sep 13 00:52:04.260948 kubelet[2148]: E0913 00:52:04.260788 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:04.429070 kubelet[2148]: 
E0913 00:52:04.429010 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.100?timeout=10s\": dial tcp 172.31.19.167:6443: connect: connection refused" interval="1.6s" Sep 13 00:52:04.792994 kubelet[2148]: E0913 00:52:04.792829 2148 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.167:6443/api/v1/namespaces/kube-system/events\": dial tcp 172.31.19.167:6443: connect: connection refused" event="&Event{ObjectMeta:{cilium-operator-5d85765b45-7pz6x.1864b14c825074eb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:cilium-operator-5d85765b45-7pz6x,UID:145bb108-9e9e-4a2e-9891-0483b1b5de8b,APIVersion:v1,ResourceVersion:957,FieldPath:spec.containers{cilium-operator},},Reason:Pulled,Message:Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine,Source:EventSource{Component:kubelet,Host:172.31.27.100,},FirstTimestamp:2025-09-13 00:51:36.743277803 +0000 UTC m=+106.039286115,LastTimestamp:2025-09-13 00:51:36.743277803 +0000 UTC m=+106.039286115,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.27.100,}" Sep 13 00:52:05.261912 kubelet[2148]: E0913 00:52:05.261785 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:06.030224 kubelet[2148]: E0913 00:52:06.030169 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.100?timeout=10s\": dial tcp 172.31.19.167:6443: connect: connection refused" interval="3.2s" Sep 13 00:52:06.262541 kubelet[2148]: E0913 00:52:06.262467 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:07.263555 kubelet[2148]: E0913 00:52:07.263512 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:08.263812 kubelet[2148]: E0913 00:52:08.263635 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:09.263825 kubelet[2148]: E0913 00:52:09.263781 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:10.264776 kubelet[2148]: E0913 00:52:10.264714 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:11.156749 kubelet[2148]: E0913 00:52:11.156694 2148 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:11.265397 kubelet[2148]: E0913 00:52:11.265352 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:12.266301 kubelet[2148]: E0913 00:52:12.266261 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:13.267500 kubelet[2148]: E0913 00:52:13.267440 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:14.268415 kubelet[2148]: E0913 00:52:14.268300 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:15.268871 kubelet[2148]: E0913 00:52:15.268825 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:16.269702 kubelet[2148]: E0913 00:52:16.269622 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:17.270247 kubelet[2148]: E0913 00:52:17.270189 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:18.271004 kubelet[2148]: E0913 00:52:18.270943 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:19.231676 kubelet[2148]: E0913 00:52:19.231624 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.100?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Sep 13 00:52:19.272195 kubelet[2148]: E0913 00:52:19.272136 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:20.273355 kubelet[2148]: E0913 00:52:20.273305 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:21.273881 kubelet[2148]: E0913 00:52:21.273750 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:22.274547 kubelet[2148]: E0913 00:52:22.274495 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:23.275584 kubelet[2148]: E0913 00:52:23.275521 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:24.276382 kubelet[2148]: E0913 00:52:24.276314 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:25.276673 kubelet[2148]: E0913 00:52:25.276632 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:26.277495 kubelet[2148]: E0913 00:52:26.277443 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:27.278361 kubelet[2148]: E0913 00:52:27.278319 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:52:28.278945 kubelet[2148]: E0913 00:52:28.278902 2148 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"