Nov 1 00:41:08.996817 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025 Nov 1 00:41:08.996848 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:41:08.996865 kernel: BIOS-provided physical RAM map: Nov 1 00:41:08.996875 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 1 00:41:08.996884 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Nov 1 00:41:08.996894 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Nov 1 00:41:08.996906 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Nov 1 00:41:08.996916 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Nov 1 00:41:08.996928 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Nov 1 00:41:08.996938 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Nov 1 00:41:08.996949 kernel: NX (Execute Disable) protection: active Nov 1 00:41:08.996958 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable Nov 1 00:41:08.996969 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable Nov 1 00:41:08.996979 kernel: extended physical RAM map: Nov 1 00:41:08.996995 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 1 00:41:08.997006 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable Nov 1 00:41:08.997017 kernel: reserve setup_data: [mem 0x0000000076813018-0x000000007681be57] usable Nov 1 00:41:08.997028 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable Nov 1 00:41:08.997038 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Nov 1 00:41:08.997050 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Nov 1 00:41:08.997060 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Nov 1 00:41:08.997072 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Nov 1 00:41:08.997082 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Nov 1 00:41:08.997093 kernel: efi: EFI v2.70 by EDK II Nov 1 00:41:08.997106 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003a98 Nov 1 00:41:08.997117 kernel: SMBIOS 2.7 present. 
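The command line logged above is what drives Flatcar's verified /usr setup: mount.usr points at the dm-verity mapping, verity.usrhash carries the expected root hash, and root=LABEL=ROOT selects the writable root partition. As a hedged illustration (not part of Flatcar's own tooling), boot parameters in this key=value form can be inspected from userspace roughly like this:

```python
# Sketch: parse /proc/cmdline into a dict of key=value boot parameters.
# Bare flags map to None; repeated keys (e.g. the two console= entries above)
# keep only the last value in this simplified version.
def parse_cmdline(path="/proc/cmdline"):
    with open(path) as f:
        tokens = f.read().split()
    params = {}
    for tok in tokens:
        key, _, value = tok.partition("=")
        params[key] = value if value else None
    return params

if __name__ == "__main__":
    p = parse_cmdline()
    # e.g. the dm-verity root hash the initrd will verify /usr against
    print(p.get("verity.usrhash"))
    print(p.get("root"), p.get("mount.usr"))
```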
Nov 1 00:41:08.997128 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Nov 1 00:41:08.997139 kernel: Hypervisor detected: KVM Nov 1 00:41:08.997150 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 1 00:41:08.997161 kernel: kvm-clock: cpu 0, msr 451a0001, primary cpu clock Nov 1 00:41:08.997171 kernel: kvm-clock: using sched offset of 3998560973 cycles Nov 1 00:41:08.997183 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 1 00:41:08.997194 kernel: tsc: Detected 2499.998 MHz processor Nov 1 00:41:08.997206 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 00:41:08.997217 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 00:41:08.997245 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Nov 1 00:41:08.997256 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 00:41:08.997268 kernel: Using GB pages for direct mapping Nov 1 00:41:08.997279 kernel: Secure boot disabled Nov 1 00:41:08.997291 kernel: ACPI: Early table checksum verification disabled Nov 1 00:41:08.997307 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Nov 1 00:41:08.997319 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Nov 1 00:41:08.997333 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Nov 1 00:41:08.997346 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Nov 1 00:41:08.997358 kernel: ACPI: FACS 0x00000000789D0000 000040 Nov 1 00:41:08.997370 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Nov 1 00:41:08.997381 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Nov 1 00:41:08.997393 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Nov 1 00:41:08.997406 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Nov 1 00:41:08.997421 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Nov 1 00:41:08.997432 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 1 00:41:08.997445 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 1 00:41:08.997457 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Nov 1 00:41:08.997469 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Nov 1 00:41:08.997481 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Nov 1 00:41:08.997493 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Nov 1 00:41:08.997506 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Nov 1 00:41:08.997517 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Nov 1 00:41:08.997532 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Nov 1 00:41:08.997544 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Nov 1 00:41:08.997556 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Nov 1 00:41:08.997568 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Nov 1 00:41:08.997580 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Nov 1 00:41:08.997593 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Nov 1 00:41:08.997604 kernel: SRAT: PXM 0 -> 
APIC 0x00 -> Node 0 Nov 1 00:41:08.997616 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 1 00:41:08.997628 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Nov 1 00:41:08.997642 kernel: NUMA: Initialized distance table, cnt=1 Nov 1 00:41:08.997654 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] Nov 1 00:41:08.997666 kernel: Zone ranges: Nov 1 00:41:08.997679 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 00:41:08.997691 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Nov 1 00:41:08.997703 kernel: Normal empty Nov 1 00:41:08.997715 kernel: Movable zone start for each node Nov 1 00:41:08.997727 kernel: Early memory node ranges Nov 1 00:41:08.997739 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 1 00:41:08.997753 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Nov 1 00:41:08.997765 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Nov 1 00:41:08.997777 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Nov 1 00:41:08.997789 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:41:08.997801 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 1 00:41:08.997813 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Nov 1 00:41:08.997825 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Nov 1 00:41:08.997837 kernel: ACPI: PM-Timer IO Port: 0xb008 Nov 1 00:41:08.997848 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 1 00:41:08.997862 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Nov 1 00:41:08.999304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 1 00:41:08.999328 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:41:08.999343 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 1 00:41:08.999358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 1 00:41:08.999372 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:41:08.999386 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 00:41:08.999401 kernel: TSC deadline timer available Nov 1 00:41:08.999415 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 1 00:41:08.999433 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Nov 1 00:41:08.999447 kernel: Booting paravirtualized kernel on KVM Nov 1 00:41:08.999461 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:41:08.999475 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Nov 1 00:41:08.999489 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Nov 1 00:41:08.999503 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Nov 1 00:41:08.999518 kernel: pcpu-alloc: [0] 0 1 Nov 1 00:41:08.999531 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0 Nov 1 00:41:08.999545 kernel: kvm-guest: PV spinlocks enabled Nov 1 00:41:08.999562 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 1 00:41:08.999576 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 Nov 1 00:41:08.999590 kernel: Policy zone: DMA32 Nov 1 00:41:08.999606 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:41:08.999621 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 1 00:41:08.999635 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:41:08.999649 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 1 00:41:08.999663 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:41:08.999680 kernel: Memory: 1876636K/2037804K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 160908K reserved, 0K cma-reserved) Nov 1 00:41:08.999694 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 1 00:41:08.999709 kernel: Kernel/User page tables isolation: enabled Nov 1 00:41:08.999723 kernel: ftrace: allocating 34614 entries in 136 pages Nov 1 00:41:08.999737 kernel: ftrace: allocated 136 pages with 2 groups Nov 1 00:41:08.999750 kernel: rcu: Hierarchical RCU implementation. Nov 1 00:41:08.999766 kernel: rcu: RCU event tracing is enabled. Nov 1 00:41:08.999793 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 1 00:41:08.999808 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:41:08.999822 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:41:08.999837 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 00:41:08.999852 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 1 00:41:08.999870 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 1 00:41:08.999885 kernel: random: crng init done Nov 1 00:41:08.999899 kernel: Console: colour dummy device 80x25 Nov 1 00:41:08.999913 kernel: printk: console [tty0] enabled Nov 1 00:41:08.999928 kernel: printk: console [ttyS0] enabled Nov 1 00:41:08.999942 kernel: ACPI: Core revision 20210730 Nov 1 00:41:08.999958 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Nov 1 00:41:08.999975 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:41:08.999990 kernel: x2apic enabled Nov 1 00:41:09.000005 kernel: Switched APIC routing to physical x2apic. Nov 1 00:41:09.000019 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Nov 1 00:41:09.000035 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Nov 1 00:41:09.000050 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 1 00:41:09.000065 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 1 00:41:09.000082 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:41:09.000096 kernel: Spectre V2 : Mitigation: Retpolines Nov 1 00:41:09.000111 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 1 00:41:09.000126 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Nov 1 00:41:09.000141 kernel: RETBleed: Vulnerable Nov 1 00:41:09.000155 kernel: Speculative Store Bypass: Vulnerable Nov 1 00:41:09.000170 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Nov 1 00:41:09.000184 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 1 00:41:09.000198 kernel: GDS: Unknown: Dependent on hypervisor status Nov 1 00:41:09.000212 kernel: active return thunk: its_return_thunk Nov 1 00:41:09.000237 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 00:41:09.000256 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:41:09.000271 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:41:09.000286 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:41:09.000301 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 1 00:41:09.000316 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Nov 1 00:41:09.000330 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 1 00:41:09.000345 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 1 00:41:09.000359 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 1 00:41:09.000374 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Nov 1 00:41:09.000388 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:41:09.000403 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 1 00:41:09.000420 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 1 00:41:09.000435 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Nov 1 00:41:09.000449 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Nov 1 00:41:09.000464 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Nov 1 00:41:09.000478 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Nov 1 00:41:09.000493 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Nov 1 00:41:09.000508 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:41:09.000522 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:41:09.000536 kernel: LSM: Security Framework initializing Nov 1 00:41:09.000551 kernel: SELinux: Initializing. Nov 1 00:41:09.000565 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 1 00:41:09.000583 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 1 00:41:09.000598 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 1 00:41:09.000612 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 1 00:41:09.000625 kernel: signal: max sigframe size: 3632 Nov 1 00:41:09.000640 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:41:09.000654 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 1 00:41:09.000669 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:41:09.000683 kernel: x86: Booting SMP configuration: Nov 1 00:41:09.000697 kernel: .... node #0, CPUs: #1 Nov 1 00:41:09.000711 kernel: kvm-clock: cpu 1, msr 451a0041, secondary cpu clock Nov 1 00:41:09.000729 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0 Nov 1 00:41:09.000744 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. 
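The mitigation messages above (Spectre V1/V2 retpolines, RETBleed reported vulnerable, MDS and MMIO Stale Data with no microcode) are the kernel's per-bug status for this Xeon Platinum 8259CL guest; the same information is exported after boot under /sys/devices/system/cpu/vulnerabilities. A small illustrative sketch for reading it back (the sysfs path is standard; the script itself is an assumption, not something run in this log):

```python
# Sketch: dump the kernel's CPU vulnerability/mitigation status from sysfs.
# Each file (spectre_v2, retbleed, mds, ...) matches a boot message above.
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def read_vulnerabilities():
    status = {}
    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            status[name] = f.read().strip()
    return status

if __name__ == "__main__":
    for bug, state in read_vulnerabilities().items():
        print(f"{bug}: {state}")
```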
Nov 1 00:41:09.000760 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 1 00:41:09.000774 kernel: smp: Brought up 1 node, 2 CPUs Nov 1 00:41:09.000788 kernel: smpboot: Max logical packages: 1 Nov 1 00:41:09.000803 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Nov 1 00:41:09.000817 kernel: devtmpfs: initialized Nov 1 00:41:09.000831 kernel: x86/mm: Memory block size: 128MB Nov 1 00:41:09.000848 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Nov 1 00:41:09.000862 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:41:09.000876 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 1 00:41:09.000891 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:41:09.000905 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:41:09.000919 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:41:09.000933 kernel: audit: type=2000 audit(1761957668.284:1): state=initialized audit_enabled=0 res=1 Nov 1 00:41:09.000947 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:41:09.000961 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:41:09.000978 kernel: cpuidle: using governor menu Nov 1 00:41:09.000992 kernel: ACPI: bus type PCI registered Nov 1 00:41:09.001006 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:41:09.001021 kernel: dca service started, version 1.12.1 Nov 1 00:41:09.001035 kernel: PCI: Using configuration type 1 for base access Nov 1 00:41:09.001049 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 1 00:41:09.001063 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:41:09.001078 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:41:09.001092 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:41:09.001109 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:41:09.001123 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:41:09.001137 kernel: ACPI: Added _OSI(Linux-Dell-Video) Nov 1 00:41:09.001151 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Nov 1 00:41:09.001165 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Nov 1 00:41:09.001179 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 1 00:41:09.001191 kernel: ACPI: Interpreter enabled Nov 1 00:41:09.001205 kernel: ACPI: PM: (supports S0 S5) Nov 1 00:41:09.001218 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:41:09.001246 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:41:09.001260 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 1 00:41:09.001274 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 00:41:09.001484 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:41:09.001609 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Nov 1 00:41:09.001627 kernel: acpiphp: Slot [3] registered Nov 1 00:41:09.001641 kernel: acpiphp: Slot [4] registered Nov 1 00:41:09.001654 kernel: acpiphp: Slot [5] registered Nov 1 00:41:09.001670 kernel: acpiphp: Slot [6] registered Nov 1 00:41:09.001683 kernel: acpiphp: Slot [7] registered Nov 1 00:41:09.001696 kernel: acpiphp: Slot [8] registered Nov 1 00:41:09.001710 kernel: acpiphp: Slot [9] registered Nov 1 00:41:09.001724 kernel: acpiphp: Slot [10] registered Nov 1 00:41:09.001737 kernel: acpiphp: Slot [11] registered Nov 1 00:41:09.001750 kernel: acpiphp: Slot [12] registered Nov 1 00:41:09.001764 kernel: acpiphp: Slot [13] registered Nov 1 00:41:09.001777 kernel: acpiphp: Slot [14] registered Nov 1 00:41:09.001792 kernel: acpiphp: Slot [15] registered Nov 1 00:41:09.001806 kernel: acpiphp: Slot [16] registered Nov 1 00:41:09.001819 kernel: acpiphp: Slot [17] registered Nov 1 00:41:09.001833 kernel: acpiphp: Slot [18] registered Nov 1 00:41:09.001846 kernel: acpiphp: Slot [19] registered Nov 1 00:41:09.001859 kernel: acpiphp: Slot [20] registered Nov 1 00:41:09.001873 kernel: acpiphp: Slot [21] registered Nov 1 00:41:09.001886 kernel: acpiphp: Slot [22] registered Nov 1 00:41:09.001900 kernel: acpiphp: Slot [23] registered Nov 1 00:41:09.001913 kernel: acpiphp: Slot [24] registered Nov 1 00:41:09.001928 kernel: acpiphp: Slot [25] registered Nov 1 00:41:09.001942 kernel: acpiphp: Slot [26] registered Nov 1 00:41:09.001955 kernel: acpiphp: Slot [27] registered Nov 1 00:41:09.001968 kernel: acpiphp: Slot [28] registered Nov 1 00:41:09.001980 kernel: acpiphp: Slot [29] registered Nov 1 00:41:09.001993 kernel: acpiphp: Slot [30] registered Nov 1 00:41:09.002007 kernel: acpiphp: Slot [31] registered Nov 1 00:41:09.002020 kernel: PCI host bridge to bus 0000:00 Nov 1 00:41:09.002165 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:41:09.002304 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:41:09.018883 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:41:09.019042 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Nov 1 00:41:09.019160 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Nov 1 00:41:09.019288 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 00:41:09.019433 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 1 00:41:09.019579 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Nov 1 00:41:09.019715 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Nov 1 00:41:09.019841 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Nov 1 00:41:09.019966 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Nov 1 00:41:09.020088 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Nov 1 00:41:09.028893 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Nov 1 00:41:09.029088 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Nov 1 00:41:09.029258 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Nov 1 00:41:09.029403 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Nov 1 00:41:09.029564 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Nov 1 00:41:09.029708 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Nov 1 00:41:09.029849 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Nov 1 00:41:09.029992 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Nov 1 
00:41:09.035468 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 00:41:09.035629 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Nov 1 00:41:09.035764 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Nov 1 00:41:09.035904 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Nov 1 00:41:09.036034 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Nov 1 00:41:09.036054 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 1 00:41:09.036069 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 1 00:41:09.036084 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 1 00:41:09.036103 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 1 00:41:09.036118 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 1 00:41:09.036133 kernel: iommu: Default domain type: Translated Nov 1 00:41:09.036149 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:41:09.036299 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Nov 1 00:41:09.036429 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 00:41:09.036555 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Nov 1 00:41:09.036574 kernel: vgaarb: loaded Nov 1 00:41:09.036593 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 00:41:09.036608 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 00:41:09.036622 kernel: PTP clock support registered Nov 1 00:41:09.036637 kernel: Registered efivars operations Nov 1 00:41:09.036653 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:41:09.036668 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:41:09.036683 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff] Nov 1 00:41:09.036697 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Nov 1 00:41:09.036713 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Nov 1 00:41:09.036731 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Nov 1 00:41:09.036747 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Nov 1 00:41:09.036763 kernel: clocksource: Switched to clocksource kvm-clock Nov 1 00:41:09.036778 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:41:09.036794 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:41:09.036809 kernel: pnp: PnP ACPI init Nov 1 00:41:09.036824 kernel: pnp: PnP ACPI: found 5 devices Nov 1 00:41:09.036839 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:41:09.036854 kernel: NET: Registered PF_INET protocol family Nov 1 00:41:09.036872 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:41:09.036888 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 1 00:41:09.036904 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:41:09.036919 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 1 00:41:09.036934 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 00:41:09.036949 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 1 00:41:09.036964 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 1 00:41:09.036980 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 1 00:41:09.036994 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol 
family Nov 1 00:41:09.037012 kernel: NET: Registered PF_XDP protocol family Nov 1 00:41:09.037137 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 00:41:09.039314 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 00:41:09.039458 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 00:41:09.039581 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 1 00:41:09.039715 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Nov 1 00:41:09.039854 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 1 00:41:09.039998 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Nov 1 00:41:09.040024 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:41:09.040041 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 1 00:41:09.040057 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Nov 1 00:41:09.040073 kernel: clocksource: Switched to clocksource tsc Nov 1 00:41:09.040089 kernel: Initialise system trusted keyrings Nov 1 00:41:09.040104 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 1 00:41:09.040119 kernel: Key type asymmetric registered Nov 1 00:41:09.040134 kernel: Asymmetric key parser 'x509' registered Nov 1 00:41:09.040152 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 1 00:41:09.040166 kernel: io scheduler mq-deadline registered Nov 1 00:41:09.040181 kernel: io scheduler kyber registered Nov 1 00:41:09.040195 kernel: io scheduler bfq registered Nov 1 00:41:09.040209 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:41:09.040276 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:41:09.040291 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:41:09.040305 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 00:41:09.040319 kernel: i8042: Warning: Keylock active Nov 1 00:41:09.040333 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 00:41:09.040351 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 00:41:09.040496 kernel: rtc_cmos 00:00: RTC can wake from S4 Nov 1 00:41:09.040615 kernel: rtc_cmos 00:00: registered as rtc0 Nov 1 00:41:09.040732 kernel: rtc_cmos 00:00: setting system clock to 2025-11-01T00:41:08 UTC (1761957668) Nov 1 00:41:09.040848 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Nov 1 00:41:09.040867 kernel: intel_pstate: CPU model not supported Nov 1 00:41:09.040882 kernel: efifb: probing for efifb Nov 1 00:41:09.040900 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Nov 1 00:41:09.040915 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Nov 1 00:41:09.040930 kernel: efifb: scrolling: redraw Nov 1 00:41:09.040945 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 1 00:41:09.040960 kernel: Console: switching to colour frame buffer device 100x37 Nov 1 00:41:09.040975 kernel: fb0: EFI VGA frame buffer device Nov 1 00:41:09.041013 kernel: pstore: Registered efi as persistent store backend Nov 1 00:41:09.041031 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:41:09.041046 kernel: Segment Routing with IPv6 Nov 1 00:41:09.041065 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:41:09.041081 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:41:09.041096 kernel: Key type dns_resolver registered Nov 1 00:41:09.041111 kernel: IPI shorthand 
broadcast: enabled Nov 1 00:41:09.041127 kernel: sched_clock: Marking stable (373303572, 132685340)->(572693034, -66704122) Nov 1 00:41:09.041142 kernel: registered taskstats version 1 Nov 1 00:41:09.041157 kernel: Loading compiled-in X.509 certificates Nov 1 00:41:09.041173 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0' Nov 1 00:41:09.041188 kernel: Key type .fscrypt registered Nov 1 00:41:09.041205 kernel: Key type fscrypt-provisioning registered Nov 1 00:41:09.041221 kernel: pstore: Using crash dump compression: deflate Nov 1 00:41:09.041261 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 1 00:41:09.041276 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:41:09.041292 kernel: ima: No architecture policies found Nov 1 00:41:09.041307 kernel: clk: Disabling unused clocks Nov 1 00:41:09.041323 kernel: Freeing unused kernel image (initmem) memory: 47496K Nov 1 00:41:09.041339 kernel: Write protecting the kernel read-only data: 28672k Nov 1 00:41:09.041354 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Nov 1 00:41:09.041373 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Nov 1 00:41:09.041388 kernel: Run /init as init process Nov 1 00:41:09.041404 kernel: with arguments: Nov 1 00:41:09.041419 kernel: /init Nov 1 00:41:09.041433 kernel: with environment: Nov 1 00:41:09.041450 kernel: HOME=/ Nov 1 00:41:09.041465 kernel: TERM=linux Nov 1 00:41:09.041483 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 1 00:41:09.041502 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:41:09.041524 systemd[1]: Detected virtualization amazon. Nov 1 00:41:09.041541 systemd[1]: Detected architecture x86-64. Nov 1 00:41:09.041556 systemd[1]: Running in initrd. Nov 1 00:41:09.041572 systemd[1]: No hostname configured, using default hostname. Nov 1 00:41:09.041588 systemd[1]: Hostname set to . Nov 1 00:41:09.041604 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:41:09.041621 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:41:09.041639 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:41:09.041655 systemd[1]: Reached target cryptsetup.target. Nov 1 00:41:09.041671 systemd[1]: Reached target paths.target. Nov 1 00:41:09.041687 systemd[1]: Reached target slices.target. Nov 1 00:41:09.041703 systemd[1]: Reached target swap.target. Nov 1 00:41:09.041719 systemd[1]: Reached target timers.target. Nov 1 00:41:09.041739 systemd[1]: Listening on iscsid.socket. Nov 1 00:41:09.041755 systemd[1]: Listening on iscsiuio.socket. Nov 1 00:41:09.041771 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 00:41:09.041787 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 00:41:09.041803 systemd[1]: Listening on systemd-journald.socket. Nov 1 00:41:09.041819 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:41:09.041836 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:41:09.041854 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:41:09.041871 systemd[1]: Reached target sockets.target. Nov 1 00:41:09.041887 systemd[1]: Starting kmod-static-nodes.service... 
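At this point the kernel has handed off to /init and systemd 252 is running in the initrd: it detects the Amazon/KVM virtualization and initializes the machine ID from the VM's UUID, which the hypervisor exposes to the guest through DMI. A minimal sketch of reading that UUID back (standard sysfs path; reading it typically requires root, and the helper name is illustrative):

```python
# Sketch: read the VM UUID that systemd uses when "Initializing machine ID from VM UUID."
# /sys/class/dmi/id/product_uuid is the standard DMI attribute; root is usually required.
def read_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
    with open(path) as f:
        return f.read().strip()

if __name__ == "__main__":
    print(read_vm_uuid())
```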
Nov 1 00:41:09.041904 systemd[1]: Finished network-cleanup.service. Nov 1 00:41:09.041920 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:41:09.041936 systemd[1]: Starting systemd-journald.service... Nov 1 00:41:09.041955 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:41:09.041972 systemd[1]: Starting systemd-resolved.service... Nov 1 00:41:09.041988 systemd[1]: Starting systemd-vconsole-setup.service... Nov 1 00:41:09.042007 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:41:09.042023 kernel: audit: type=1130 audit(1761957668.990:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.042039 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:41:09.042055 systemd[1]: Finished systemd-vconsole-setup.service. Nov 1 00:41:09.042072 kernel: audit: type=1130 audit(1761957669.003:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.042088 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 1 00:41:09.042104 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 00:41:09.042122 kernel: audit: type=1130 audit(1761957669.015:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.042141 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:41:09.042162 systemd-journald[184]: Journal started Nov 1 00:41:09.042249 systemd-journald[184]: Runtime Journal (/run/log/journal/ec222dfad1d907c6e780c42467920b63) is 4.8M, max 38.3M, 33.5M free. Nov 1 00:41:08.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:08.998342 systemd-modules-load[185]: Inserted module 'overlay' Nov 1 00:41:09.053383 systemd[1]: Started systemd-journald.service. Nov 1 00:41:09.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.056027 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 00:41:09.067410 kernel: audit: type=1130 audit(1761957669.055:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.074806 systemd-resolved[186]: Positive Trust Anchors: Nov 1 00:41:09.085678 kernel: audit: type=1130 audit(1761957669.075:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:09.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.075076 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:41:09.075136 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:41:09.136265 kernel: audit: type=1130 audit(1761957669.076:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.136302 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:41:09.136322 kernel: Bridge firewalling registered Nov 1 00:41:09.136346 kernel: audit: type=1130 audit(1761957669.126:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.075669 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:41:09.137685 dracut-cmdline[203]: dracut-dracut-053 Nov 1 00:41:09.137685 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Nov 1 00:41:09.137685 dracut-cmdline[203]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:41:09.153492 kernel: SCSI subsystem initialized Nov 1 00:41:09.089429 systemd[1]: Starting dracut-cmdline.service... Nov 1 00:41:09.099429 systemd-resolved[186]: Defaulting to hostname 'linux'. Nov 1 00:41:09.101844 systemd[1]: Started systemd-resolved.service. Nov 1 00:41:09.114916 systemd-modules-load[185]: Inserted module 'br_netfilter' Nov 1 00:41:09.126390 systemd[1]: Reached target nss-lookup.target. Nov 1 00:41:09.173778 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 1 00:41:09.173844 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:41:09.175158 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 00:41:09.181118 systemd-modules-load[185]: Inserted module 'dm_multipath' Nov 1 00:41:09.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.183376 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:41:09.194330 kernel: audit: type=1130 audit(1761957669.184:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.194488 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:41:09.204217 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:41:09.213715 kernel: audit: type=1130 audit(1761957669.205:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.230259 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:41:09.249256 kernel: iscsi: registered transport (tcp) Nov 1 00:41:09.274814 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:41:09.274898 kernel: QLogic iSCSI HBA Driver Nov 1 00:41:09.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.306970 systemd[1]: Finished dracut-cmdline.service. Nov 1 00:41:09.308622 systemd[1]: Starting dracut-pre-udev.service... Nov 1 00:41:09.360276 kernel: raid6: avx512x4 gen() 18054 MB/s Nov 1 00:41:09.378250 kernel: raid6: avx512x4 xor() 7817 MB/s Nov 1 00:41:09.396250 kernel: raid6: avx512x2 gen() 17976 MB/s Nov 1 00:41:09.414248 kernel: raid6: avx512x2 xor() 24235 MB/s Nov 1 00:41:09.432250 kernel: raid6: avx512x1 gen() 17994 MB/s Nov 1 00:41:09.450248 kernel: raid6: avx512x1 xor() 21886 MB/s Nov 1 00:41:09.468254 kernel: raid6: avx2x4 gen() 17853 MB/s Nov 1 00:41:09.486248 kernel: raid6: avx2x4 xor() 7647 MB/s Nov 1 00:41:09.504250 kernel: raid6: avx2x2 gen() 17917 MB/s Nov 1 00:41:09.522249 kernel: raid6: avx2x2 xor() 18105 MB/s Nov 1 00:41:09.540252 kernel: raid6: avx2x1 gen() 13779 MB/s Nov 1 00:41:09.558252 kernel: raid6: avx2x1 xor() 15791 MB/s Nov 1 00:41:09.576251 kernel: raid6: sse2x4 gen() 9561 MB/s Nov 1 00:41:09.594249 kernel: raid6: sse2x4 xor() 6192 MB/s Nov 1 00:41:09.612249 kernel: raid6: sse2x2 gen() 10573 MB/s Nov 1 00:41:09.630248 kernel: raid6: sse2x2 xor() 6138 MB/s Nov 1 00:41:09.648248 kernel: raid6: sse2x1 gen() 9473 MB/s Nov 1 00:41:09.666490 kernel: raid6: sse2x1 xor() 4831 MB/s Nov 1 00:41:09.666528 kernel: raid6: using algorithm avx512x4 gen() 18054 MB/s Nov 1 00:41:09.666559 kernel: raid6: .... 
xor() 7817 MB/s, rmw enabled Nov 1 00:41:09.667579 kernel: raid6: using avx512x2 recovery algorithm Nov 1 00:41:09.682254 kernel: xor: automatically using best checksumming function avx Nov 1 00:41:09.785258 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 00:41:09.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.794000 audit: BPF prog-id=7 op=LOAD Nov 1 00:41:09.794000 audit: BPF prog-id=8 op=LOAD Nov 1 00:41:09.793987 systemd[1]: Finished dracut-pre-udev.service. Nov 1 00:41:09.795349 systemd[1]: Starting systemd-udevd.service... Nov 1 00:41:09.808867 systemd-udevd[385]: Using default interface naming scheme 'v252'. Nov 1 00:41:09.814181 systemd[1]: Started systemd-udevd.service. Nov 1 00:41:09.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.815987 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 00:41:09.834295 dracut-pre-trigger[390]: rd.md=0: removing MD RAID activation Nov 1 00:41:09.863990 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 00:41:09.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.865583 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:41:09.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:09.907586 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:41:09.966245 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:41:09.994936 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 00:41:09.995007 kernel: AES CTR mode by8 optimization enabled Nov 1 00:41:10.007982 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 1 00:41:10.020879 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 1 00:41:10.021056 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Nov 1 00:41:10.021209 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 1 00:41:10.021394 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 1 00:41:10.021424 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:0b:0e:cb:98:43 Nov 1 00:41:10.029402 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 1 00:41:10.040264 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:41:10.040331 kernel: GPT:9289727 != 33554431 Nov 1 00:41:10.040351 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:41:10.040369 kernel: GPT:9289727 != 33554431 Nov 1 00:41:10.040386 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:41:10.040404 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 1 00:41:10.050180 (udev-worker)[438]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:41:10.101250 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (428) Nov 1 00:41:10.131290 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:41:10.139036 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
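The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", "GPT:9289727 != 33554431") are expected when a small Flatcar image has been written to a larger EBS volume: the backup GPT header still sits where the original image ended rather than at the end of the device. Cloud images normally repair this automatically during first-boot resizing; where a manual fix is wanted, gptfdisk's sgdisk can relocate the backup structures. A hedged sketch follows (the device name and the decision to run sgdisk -e are assumptions about the environment, not actions shown in this log):

```python
# Sketch: move the backup GPT data structures to the true end of the disk.
# Assumes sgdisk (from gptfdisk) is installed; "/dev/nvme0n1" is a hypothetical
# target matching the device named in the log above.
import subprocess

DISK = "/dev/nvme0n1"

def relocate_backup_gpt(disk=DISK):
    # "sgdisk -e" relocates the backup GPT header and partition table to the disk's end.
    subprocess.run(["sgdisk", "-e", disk], check=True)

if __name__ == "__main__":
    relocate_backup_gpt()
```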
Nov 1 00:41:10.147422 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:41:10.160857 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:41:10.162718 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:41:10.171567 systemd[1]: Starting disk-uuid.service... Nov 1 00:41:10.177948 disk-uuid[590]: Primary Header is updated. Nov 1 00:41:10.177948 disk-uuid[590]: Secondary Entries is updated. Nov 1 00:41:10.177948 disk-uuid[590]: Secondary Header is updated. Nov 1 00:41:10.186288 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 1 00:41:10.191256 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 1 00:41:10.197245 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 1 00:41:11.197928 disk-uuid[591]: The operation has completed successfully. Nov 1 00:41:11.198697 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 1 00:41:11.332804 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:41:11.332934 systemd[1]: Finished disk-uuid.service. Nov 1 00:41:11.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:11.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:11.334820 systemd[1]: Starting verity-setup.service... Nov 1 00:41:11.353668 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:41:11.451955 systemd[1]: Found device dev-mapper-usr.device. Nov 1 00:41:11.453169 systemd[1]: Mounting sysusr-usr.mount... Nov 1 00:41:11.455654 systemd[1]: Finished verity-setup.service. Nov 1 00:41:11.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:11.549250 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 00:41:11.549398 systemd[1]: Mounted sysusr-usr.mount. Nov 1 00:41:11.550133 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 00:41:11.550853 systemd[1]: Starting ignition-setup.service... Nov 1 00:41:11.553356 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 00:41:11.578203 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:41:11.578274 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 1 00:41:11.578293 kernel: BTRFS info (device nvme0n1p6): has skinny extents Nov 1 00:41:11.589252 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 1 00:41:11.600872 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:41:11.608746 systemd[1]: Finished ignition-setup.service. Nov 1 00:41:11.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:11.610098 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 00:41:11.638967 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:41:11.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:11.640000 audit: BPF prog-id=9 op=LOAD Nov 1 00:41:11.640989 systemd[1]: Starting systemd-networkd.service... Nov 1 00:41:11.661970 systemd-networkd[1102]: lo: Link UP Nov 1 00:41:11.661983 systemd-networkd[1102]: lo: Gained carrier Nov 1 00:41:11.662447 systemd-networkd[1102]: Enumeration completed Nov 1 00:41:11.662545 systemd[1]: Started systemd-networkd.service. Nov 1 00:41:11.662753 systemd-networkd[1102]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:41:11.665304 systemd-networkd[1102]: eth0: Link UP Nov 1 00:41:11.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:11.665308 systemd-networkd[1102]: eth0: Gained carrier Nov 1 00:41:11.666640 systemd[1]: Reached target network.target. Nov 1 00:41:11.668785 systemd[1]: Starting iscsiuio.service... Nov 1 00:41:11.674766 systemd[1]: Started iscsiuio.service. Nov 1 00:41:11.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:11.676246 systemd[1]: Starting iscsid.service... Nov 1 00:41:11.679347 systemd-networkd[1102]: eth0: DHCPv4 address 172.31.16.189/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 1 00:41:11.680282 iscsid[1107]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:41:11.680282 iscsid[1107]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Nov 1 00:41:11.680282 iscsid[1107]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:41:11.680282 iscsid[1107]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 00:41:11.680282 iscsid[1107]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:41:11.680282 iscsid[1107]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:41:11.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:11.681185 systemd[1]: Started iscsid.service. Nov 1 00:41:11.683666 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:41:11.694994 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:41:11.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:11.695814 systemd[1]: Reached target remote-fs-pre.target. Nov 1 00:41:11.696802 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:41:11.698290 systemd[1]: Reached target remote-fs.target. Nov 1 00:41:11.700281 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:41:11.708699 systemd[1]: Finished dracut-pre-mount.service. 
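The iscsid warnings above are benign here: this initrd uses no iSCSI targets, so /etc/iscsi/initiatorname.iscsi simply does not exist. Where software iSCSI is actually needed, that file must contain a single InitiatorName=iqn.yyyy-mm.<reversed domain>[:identifier] line, as the (slightly garbled) message describes. A minimal illustrative sketch, with a placeholder IQN rather than anything taken from this system:

```python
# Sketch: write a well-formed /etc/iscsi/initiatorname.iscsi for software iSCSI.
# EXAMPLE_IQN is an illustrative placeholder; the yyyy-mm portion of an IQN is
# formally the year-month the naming authority registered its domain.
EXAMPLE_IQN = "iqn.2001-04.com.example:initiator01"

def write_initiatorname(path="/etc/iscsi/initiatorname.iscsi", iqn=EXAMPLE_IQN):
    with open(path, "w") as f:
        f.write(f"InitiatorName={iqn}\n")

if __name__ == "__main__":
    write_initiatorname("./initiatorname.iscsi")  # write locally for inspection
```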
Nov 1 00:41:11.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:12.144101 ignition[1062]: Ignition 2.14.0 Nov 1 00:41:12.144114 ignition[1062]: Stage: fetch-offline Nov 1 00:41:12.144254 ignition[1062]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:41:12.144307 ignition[1062]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:41:12.160031 ignition[1062]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:41:12.160592 ignition[1062]: Ignition finished successfully Nov 1 00:41:12.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:12.162193 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:41:12.163909 systemd[1]: Starting ignition-fetch.service... Nov 1 00:41:12.172202 ignition[1126]: Ignition 2.14.0 Nov 1 00:41:12.172213 ignition[1126]: Stage: fetch Nov 1 00:41:12.172366 ignition[1126]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:41:12.172388 ignition[1126]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:41:12.177891 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:41:12.178592 ignition[1126]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:41:12.219812 ignition[1126]: INFO : PUT result: OK Nov 1 00:41:12.224464 ignition[1126]: DEBUG : parsed url from cmdline: "" Nov 1 00:41:12.224464 ignition[1126]: INFO : no config URL provided Nov 1 00:41:12.224464 ignition[1126]: INFO : reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:41:12.226643 ignition[1126]: INFO : no config at "/usr/lib/ignition/user.ign" Nov 1 00:41:12.226643 ignition[1126]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:41:12.226643 ignition[1126]: INFO : PUT result: OK Nov 1 00:41:12.226643 ignition[1126]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 1 00:41:12.226643 ignition[1126]: INFO : GET result: OK Nov 1 00:41:12.231884 ignition[1126]: DEBUG : parsing config with SHA512: 85c87706c81509004d6c06de0b719d8ae49ba8c9ce6f47fae999e86e24170a064c9822e44c909661da31bed88a165f6a6a4d2f81fc845ffcbcfc728dbd6238e0 Nov 1 00:41:12.235525 unknown[1126]: fetched base config from "system" Nov 1 00:41:12.235540 unknown[1126]: fetched base config from "system" Nov 1 00:41:12.235556 unknown[1126]: fetched user config from "aws" Nov 1 00:41:12.236603 ignition[1126]: fetch: fetch complete Nov 1 00:41:12.236613 ignition[1126]: fetch: fetch passed Nov 1 00:41:12.239158 systemd[1]: Finished ignition-fetch.service. Nov 1 00:41:12.236666 ignition[1126]: Ignition finished successfully Nov 1 00:41:12.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:12.241618 systemd[1]: Starting ignition-kargs.service... 
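The Ignition fetch stage logged above follows the IMDSv2 pattern: a PUT to http://169.254.169.254/latest/api/token obtains a session token, then a GET of http://169.254.169.254/2019-10-01/user-data with that token returns the user-provided config. A self-contained sketch of the same two-step exchange (it only works from inside an EC2 instance, where the metadata address is reachable; the header names are the standard IMDSv2 ones):

```python
# Sketch: the IMDSv2 token-then-fetch exchange Ignition performs above.
import urllib.request

IMDS = "http://169.254.169.254"

def get_imds_token(ttl_seconds=300):
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

def get_user_data(token):
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()

if __name__ == "__main__":
    token = get_imds_token()
    print(get_user_data(token)[:200])  # first bytes of the fetched user data
```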
Nov 1 00:41:12.252190 ignition[1133]: Ignition 2.14.0 Nov 1 00:41:12.252203 ignition[1133]: Stage: kargs Nov 1 00:41:12.252436 ignition[1133]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:41:12.252471 ignition[1133]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:41:12.259972 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:41:12.260807 ignition[1133]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:41:12.261514 ignition[1133]: INFO : PUT result: OK Nov 1 00:41:12.263318 ignition[1133]: kargs: kargs passed Nov 1 00:41:12.263383 ignition[1133]: Ignition finished successfully Nov 1 00:41:12.264774 systemd[1]: Finished ignition-kargs.service. Nov 1 00:41:12.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:12.266499 systemd[1]: Starting ignition-disks.service... Nov 1 00:41:12.275665 ignition[1139]: Ignition 2.14.0 Nov 1 00:41:12.275678 ignition[1139]: Stage: disks Nov 1 00:41:12.275884 ignition[1139]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:41:12.275916 ignition[1139]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:41:12.283506 ignition[1139]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:41:12.284264 ignition[1139]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:41:12.284916 ignition[1139]: INFO : PUT result: OK Nov 1 00:41:12.287279 ignition[1139]: disks: disks passed Nov 1 00:41:12.287351 ignition[1139]: Ignition finished successfully Nov 1 00:41:12.289196 systemd[1]: Finished ignition-disks.service. Nov 1 00:41:12.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:12.290078 systemd[1]: Reached target initrd-root-device.target. Nov 1 00:41:12.290952 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:41:12.291975 systemd[1]: Reached target local-fs.target. Nov 1 00:41:12.292859 systemd[1]: Reached target sysinit.target. Nov 1 00:41:12.293756 systemd[1]: Reached target basic.target. Nov 1 00:41:12.295990 systemd[1]: Starting systemd-fsck-root.service... Nov 1 00:41:12.341298 systemd-fsck[1147]: ROOT: clean, 637/553520 files, 56032/553472 blocks Nov 1 00:41:12.344068 systemd[1]: Finished systemd-fsck-root.service. Nov 1 00:41:12.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:12.345427 systemd[1]: Mounting sysroot.mount... Nov 1 00:41:12.362275 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 00:41:12.362982 systemd[1]: Mounted sysroot.mount. Nov 1 00:41:12.364269 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:41:12.374057 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:41:12.375310 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. 
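[Editor's note] The check and mount performed above by systemd-fsck-root.service and sysroot.mount can be reproduced by hand from a rescue shell; a sketch using the ROOT label this log reports (only run e2fsck against an unmounted filesystem):

    e2fsck -n /dev/disk/by-label/ROOT          # read-only check of the ext4 root
    mount /dev/disk/by-label/ROOT /sysroot
    findmnt -o TARGET,SOURCE,FSTYPE /sysroot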
Nov 1 00:41:12.375371 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:41:12.375415 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:41:12.379630 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:41:12.382784 systemd[1]: Starting initrd-setup-root.service... Nov 1 00:41:12.394644 initrd-setup-root[1168]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:41:12.406139 initrd-setup-root[1176]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:41:12.411218 initrd-setup-root[1184]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:41:12.416769 initrd-setup-root[1192]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:41:12.483182 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:41:12.502249 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1200) Nov 1 00:41:12.505695 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:41:12.505757 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 1 00:41:12.505771 kernel: BTRFS info (device nvme0n1p6): has skinny extents Nov 1 00:41:12.515255 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 1 00:41:12.517712 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:41:12.576016 systemd[1]: Finished initrd-setup-root.service. Nov 1 00:41:12.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:12.577494 systemd[1]: Starting ignition-mount.service... Nov 1 00:41:12.578713 systemd[1]: Starting sysroot-boot.service... Nov 1 00:41:12.585132 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Nov 1 00:41:12.585248 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Nov 1 00:41:12.598984 ignition[1229]: INFO : Ignition 2.14.0 Nov 1 00:41:12.598984 ignition[1229]: INFO : Stage: mount Nov 1 00:41:12.601030 ignition[1229]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:41:12.601030 ignition[1229]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:41:12.608902 ignition[1229]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:41:12.609742 ignition[1229]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:41:12.611650 ignition[1229]: INFO : PUT result: OK Nov 1 00:41:12.614353 ignition[1229]: INFO : mount: mount passed Nov 1 00:41:12.614353 ignition[1229]: INFO : Ignition finished successfully Nov 1 00:41:12.616353 systemd[1]: Finished ignition-mount.service. Nov 1 00:41:12.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:12.617728 systemd[1]: Starting ignition-files.service... Nov 1 00:41:12.625520 systemd[1]: Finished sysroot-boot.service. Nov 1 00:41:12.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:12.628320 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
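[Editor's note] The OEM partition mounted here (and repeatedly mounted and unmounted by the later file writes) is an ordinary labelled btrfs volume on nvme0n1p6; inspecting or mounting it by hand looks roughly like this (mount point illustrative):

    btrfs filesystem show /dev/disk/by-label/OEM
    mkdir -p /mnt/oem
    mount /dev/disk/by-label/OEM /mnt/oem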
Nov 1 00:41:12.644247 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1239) Nov 1 00:41:12.648196 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:41:12.648279 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 1 00:41:12.648293 kernel: BTRFS info (device nvme0n1p6): has skinny extents Nov 1 00:41:12.674265 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 1 00:41:12.678144 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:41:12.688099 ignition[1259]: INFO : Ignition 2.14.0 Nov 1 00:41:12.688099 ignition[1259]: INFO : Stage: files Nov 1 00:41:12.689436 ignition[1259]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:41:12.689436 ignition[1259]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:41:12.699925 ignition[1259]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:41:12.700827 ignition[1259]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:41:12.701604 ignition[1259]: INFO : PUT result: OK Nov 1 00:41:12.705164 ignition[1259]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:41:12.710716 ignition[1259]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:41:12.710716 ignition[1259]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:41:12.714068 ignition[1259]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:41:12.715292 ignition[1259]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:41:12.715292 ignition[1259]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:41:12.715196 unknown[1259]: wrote ssh authorized keys file for user: core Nov 1 00:41:12.718144 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:41:12.718144 ignition[1259]: INFO : GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 1 00:41:12.804145 ignition[1259]: INFO : GET result: OK Nov 1 00:41:13.007962 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:41:13.007962 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:41:13.010594 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:41:13.010594 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Nov 1 00:41:13.010594 ignition[1259]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:41:13.014618 ignition[1259]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem758272401" Nov 1 00:41:13.014618 ignition[1259]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem758272401": device or resource busy Nov 1 00:41:13.014618 ignition[1259]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem758272401", trying btrfs: device or resource busy Nov 1 00:41:13.014618 ignition[1259]: INFO : op(2): 
[started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem758272401" Nov 1 00:41:13.014618 ignition[1259]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem758272401" Nov 1 00:41:13.021414 ignition[1259]: INFO : op(3): [started] unmounting "/mnt/oem758272401" Nov 1 00:41:13.021414 ignition[1259]: INFO : op(3): [finished] unmounting "/mnt/oem758272401" Nov 1 00:41:13.021414 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Nov 1 00:41:13.021414 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:41:13.021414 ignition[1259]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 1 00:41:13.148403 systemd-networkd[1102]: eth0: Gained IPv6LL Nov 1 00:41:13.204478 ignition[1259]: INFO : GET result: OK Nov 1 00:41:13.332516 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:41:13.332516 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:41:13.337214 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:41:13.337214 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:41:13.337214 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:41:13.337214 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:41:13.337214 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:41:13.337214 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:41:13.337214 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:41:13.337214 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 00:41:13.337214 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 00:41:13.337214 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Nov 1 00:41:13.337214 ignition[1259]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:41:13.353809 ignition[1259]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1420885439" Nov 1 00:41:13.353809 ignition[1259]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1420885439": device or resource busy Nov 1 00:41:13.353809 ignition[1259]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1420885439", trying btrfs: device or resource busy Nov 1 00:41:13.353809 ignition[1259]: INFO : op(5): [started] mounting 
"/dev/disk/by-label/OEM" at "/mnt/oem1420885439" Nov 1 00:41:13.353809 ignition[1259]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1420885439" Nov 1 00:41:13.353809 ignition[1259]: INFO : op(6): [started] unmounting "/mnt/oem1420885439" Nov 1 00:41:13.353809 ignition[1259]: INFO : op(6): [finished] unmounting "/mnt/oem1420885439" Nov 1 00:41:13.353809 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Nov 1 00:41:13.353809 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Nov 1 00:41:13.353809 ignition[1259]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:41:13.353809 ignition[1259]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3554597423" Nov 1 00:41:13.353809 ignition[1259]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3554597423": device or resource busy Nov 1 00:41:13.353809 ignition[1259]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3554597423", trying btrfs: device or resource busy Nov 1 00:41:13.353809 ignition[1259]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3554597423" Nov 1 00:41:13.353809 ignition[1259]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3554597423" Nov 1 00:41:13.353809 ignition[1259]: INFO : op(9): [started] unmounting "/mnt/oem3554597423" Nov 1 00:41:13.353809 ignition[1259]: INFO : op(9): [finished] unmounting "/mnt/oem3554597423" Nov 1 00:41:13.353809 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Nov 1 00:41:13.353809 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 00:41:13.353809 ignition[1259]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 1 00:41:13.717744 ignition[1259]: INFO : GET result: OK Nov 1 00:41:14.049546 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 00:41:14.049546 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Nov 1 00:41:14.054809 ignition[1259]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:41:14.059277 ignition[1259]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3308737454" Nov 1 00:41:14.061811 ignition[1259]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3308737454": device or resource busy Nov 1 00:41:14.061811 ignition[1259]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3308737454", trying btrfs: device or resource busy Nov 1 00:41:14.061811 ignition[1259]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3308737454" Nov 1 00:41:14.075322 ignition[1259]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3308737454" Nov 1 00:41:14.075322 ignition[1259]: INFO : op(c): [started] unmounting "/mnt/oem3308737454" Nov 1 00:41:14.075322 ignition[1259]: INFO : op(c): [finished] unmounting "/mnt/oem3308737454" Nov 1 00:41:14.075322 ignition[1259]: 
INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(13): [started] processing unit "nvidia.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(13): [finished] processing unit "nvidia.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(17): [started] setting preset to enabled for "amazon-ssm-agent.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service" Nov 1 00:41:14.075322 ignition[1259]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Nov 1 00:41:14.149721 kernel: kauditd_printk_skb: 26 callbacks suppressed Nov 1 00:41:14.149752 kernel: audit: type=1130 audit(1761957674.097:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.149771 kernel: audit: type=1130 audit(1761957674.118:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.149789 kernel: audit: type=1131 audit(1761957674.119:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.149806 kernel: audit: type=1130 audit(1761957674.134:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:14.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.072476 systemd[1]: mnt-oem3308737454.mount: Deactivated successfully. Nov 1 00:41:14.151725 ignition[1259]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Nov 1 00:41:14.151725 ignition[1259]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:41:14.151725 ignition[1259]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:41:14.151725 ignition[1259]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:41:14.151725 ignition[1259]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:41:14.151725 ignition[1259]: INFO : files: files passed Nov 1 00:41:14.151725 ignition[1259]: INFO : Ignition finished successfully Nov 1 00:41:14.176352 kernel: audit: type=1130 audit(1761957674.165:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.176391 kernel: audit: type=1131 audit(1761957674.165:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.095962 systemd[1]: Finished ignition-files.service. Nov 1 00:41:14.105736 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 00:41:14.179802 initrd-setup-root-after-ignition[1284]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:41:14.110876 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 00:41:14.112071 systemd[1]: Starting ignition-quench.service... Nov 1 00:41:14.117567 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:41:14.117708 systemd[1]: Finished ignition-quench.service. Nov 1 00:41:14.129555 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 00:41:14.134463 systemd[1]: Reached target ignition-complete.target. 
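[Editor's note] The "setting preset to enabled" operations above correspond to a plain systemd preset drop-in covering the units Ignition just processed; such a file looks roughly like this (the path is illustrative, the unit names are the ones listed in the log):

    # /etc/systemd/system-preset/20-ignition.preset
    enable coreos-metadata-sshkeys@.service
    enable amazon-ssm-agent.service
    enable nvidia.service
    enable prepare-helm.service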
Nov 1 00:41:14.142631 systemd[1]: Starting initrd-parse-etc.service... Nov 1 00:41:14.164869 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:41:14.164996 systemd[1]: Finished initrd-parse-etc.service. Nov 1 00:41:14.165950 systemd[1]: Reached target initrd-fs.target. Nov 1 00:41:14.177154 systemd[1]: Reached target initrd.target. Nov 1 00:41:14.206727 kernel: audit: type=1130 audit(1761957674.197:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.178709 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 00:41:14.180062 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 00:41:14.196735 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 00:41:14.199149 systemd[1]: Starting initrd-cleanup.service... Nov 1 00:41:14.215491 systemd[1]: Stopped target nss-lookup.target. Nov 1 00:41:14.216362 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 00:41:14.217600 systemd[1]: Stopped target timers.target. Nov 1 00:41:14.218759 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:41:14.225086 kernel: audit: type=1131 audit(1761957674.219:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.218962 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 00:41:14.220255 systemd[1]: Stopped target initrd.target. Nov 1 00:41:14.225892 systemd[1]: Stopped target basic.target. Nov 1 00:41:14.227122 systemd[1]: Stopped target ignition-complete.target. Nov 1 00:41:14.228242 systemd[1]: Stopped target ignition-diskful.target. Nov 1 00:41:14.229372 systemd[1]: Stopped target initrd-root-device.target. Nov 1 00:41:14.230522 systemd[1]: Stopped target remote-fs.target. Nov 1 00:41:14.231733 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 00:41:14.232903 systemd[1]: Stopped target sysinit.target. Nov 1 00:41:14.234060 systemd[1]: Stopped target local-fs.target. Nov 1 00:41:14.235323 systemd[1]: Stopped target local-fs-pre.target. Nov 1 00:41:14.236452 systemd[1]: Stopped target swap.target. Nov 1 00:41:14.243846 kernel: audit: type=1131 audit(1761957674.238:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.237532 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:41:14.237732 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 00:41:14.251085 kernel: audit: type=1131 audit(1761957674.245:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:14.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.238895 systemd[1]: Stopped target cryptsetup.target. Nov 1 00:41:14.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.244649 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:41:14.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.244864 systemd[1]: Stopped dracut-initqueue.service. Nov 1 00:41:14.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.246070 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:41:14.271694 ignition[1297]: INFO : Ignition 2.14.0 Nov 1 00:41:14.271694 ignition[1297]: INFO : Stage: umount Nov 1 00:41:14.271694 ignition[1297]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:41:14.271694 ignition[1297]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:41:14.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.246302 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 00:41:14.251999 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:41:14.252202 systemd[1]: Stopped ignition-files.service. Nov 1 00:41:14.254565 systemd[1]: Stopping ignition-mount.service... Nov 1 00:41:14.256117 systemd[1]: Stopping iscsiuio.service... Nov 1 00:41:14.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.257070 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:41:14.261866 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 00:41:14.264440 systemd[1]: Stopping sysroot-boot.service... Nov 1 00:41:14.265442 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:41:14.270572 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 00:41:14.273497 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Nov 1 00:41:14.273712 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 00:41:14.278571 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 00:41:14.278705 systemd[1]: Stopped iscsiuio.service. Nov 1 00:41:14.286879 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:41:14.286990 systemd[1]: Finished initrd-cleanup.service. Nov 1 00:41:14.300791 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:41:14.305847 ignition[1297]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:41:14.306886 ignition[1297]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:41:14.308597 ignition[1297]: INFO : PUT result: OK Nov 1 00:41:14.311620 ignition[1297]: INFO : umount: umount passed Nov 1 00:41:14.313303 ignition[1297]: INFO : Ignition finished successfully Nov 1 00:41:14.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.312616 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:41:14.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.312702 systemd[1]: Stopped ignition-mount.service. Nov 1 00:41:14.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.313381 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:41:14.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.313429 systemd[1]: Stopped ignition-disks.service. Nov 1 00:41:14.314540 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:41:14.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.314589 systemd[1]: Stopped ignition-kargs.service. Nov 1 00:41:14.315886 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:41:14.315933 systemd[1]: Stopped ignition-fetch.service. Nov 1 00:41:14.316910 systemd[1]: Stopped target network.target. Nov 1 00:41:14.317929 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:41:14.317984 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 00:41:14.319293 systemd[1]: Stopped target paths.target. Nov 1 00:41:14.320182 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:41:14.322296 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 00:41:14.322882 systemd[1]: Stopped target slices.target. Nov 1 00:41:14.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.324052 systemd[1]: Stopped target sockets.target. Nov 1 00:41:14.325139 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:41:14.325171 systemd[1]: Closed iscsid.socket. Nov 1 00:41:14.327346 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Nov 1 00:41:14.327388 systemd[1]: Closed iscsiuio.socket. Nov 1 00:41:14.328309 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:41:14.328360 systemd[1]: Stopped ignition-setup.service. Nov 1 00:41:14.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.330059 systemd[1]: Stopping systemd-networkd.service... Nov 1 00:41:14.331123 systemd[1]: Stopping systemd-resolved.service... Nov 1 00:41:14.333301 systemd-networkd[1102]: eth0: DHCPv6 lease lost Nov 1 00:41:14.339000 audit: BPF prog-id=9 op=UNLOAD Nov 1 00:41:14.334199 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:41:14.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.334477 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:41:14.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.335920 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:41:14.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.335951 systemd[1]: Closed systemd-networkd.socket. Nov 1 00:41:14.337410 systemd[1]: Stopping network-cleanup.service... Nov 1 00:41:14.340536 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:41:14.340592 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 00:41:14.341692 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:41:14.341738 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:41:14.342720 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:41:14.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.342761 systemd[1]: Stopped systemd-modules-load.service. Nov 1 00:41:14.347528 systemd[1]: Stopping systemd-udevd.service... Nov 1 00:41:14.349525 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:41:14.350016 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:41:14.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.350100 systemd[1]: Stopped systemd-resolved.service. Nov 1 00:41:14.353163 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:41:14.357000 audit: BPF prog-id=6 op=UNLOAD Nov 1 00:41:14.353330 systemd[1]: Stopped systemd-udevd.service. Nov 1 00:41:14.356532 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:41:14.356578 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 00:41:14.357319 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:41:14.357353 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 00:41:14.360885 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
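[Editor's note] Up to the point where systemd-networkd is stopped and the DHCPv6 lease dropped below, the leases on eth0 (including the DHCPv4 address obtained from 172.31.16.1) could have been inspected directly; a sketch:

    networkctl list
    networkctl status eth0      # addresses, gateway and lease details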
Nov 1 00:41:14.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.360937 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 00:41:14.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.362054 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:41:14.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.362097 systemd[1]: Stopped dracut-cmdline.service. Nov 1 00:41:14.363192 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:41:14.363256 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 00:41:14.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.364992 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 00:41:14.366978 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:41:14.367124 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 00:41:14.370362 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:41:14.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.370480 systemd[1]: Stopped network-cleanup.service. Nov 1 00:41:14.372436 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:41:14.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.372518 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 00:41:14.421898 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:41:14.422004 systemd[1]: Stopped sysroot-boot.service. Nov 1 00:41:14.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.423483 systemd[1]: Reached target initrd-switch-root.target. Nov 1 00:41:14.424381 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:41:14.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:14.424440 systemd[1]: Stopped initrd-setup-root.service. Nov 1 00:41:14.426192 systemd[1]: Starting initrd-switch-root.service... Nov 1 00:41:14.438304 systemd[1]: Switching root. Nov 1 00:41:14.458371 iscsid[1107]: iscsid shutting down. 
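[Editor's note] "Switching root" is the initrd handing control to the real root filesystem mounted at /sysroot; the manual equivalent of what initrd-switch-root.service does is roughly the following (a sketch, not the exact invocation used here):

    systemctl switch-root /sysroot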
Nov 1 00:41:14.459205 systemd-journald[184]: Journal stopped Nov 1 00:41:18.780168 systemd-journald[184]: Received SIGTERM from PID 1 (n/a). Nov 1 00:41:18.780267 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 00:41:18.780291 kernel: SELinux: Class anon_inode not defined in policy. Nov 1 00:41:18.780310 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 00:41:18.780337 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:41:18.780355 kernel: SELinux: policy capability open_perms=1 Nov 1 00:41:18.780378 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:41:18.780400 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:41:18.780417 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:41:18.780435 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:41:18.780451 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:41:18.780474 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:41:18.780492 systemd[1]: Successfully loaded SELinux policy in 77.844ms. Nov 1 00:41:18.780522 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.112ms. Nov 1 00:41:18.780545 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:41:18.780567 systemd[1]: Detected virtualization amazon. Nov 1 00:41:18.780586 systemd[1]: Detected architecture x86-64. Nov 1 00:41:18.780604 systemd[1]: Detected first boot. Nov 1 00:41:18.780623 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:41:18.780641 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 00:41:18.780661 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:41:18.780686 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:41:18.780710 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:41:18.780734 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:41:18.780753 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 00:41:18.780772 systemd[1]: Stopped iscsid.service. Nov 1 00:41:18.780790 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 00:41:18.780809 systemd[1]: Stopped initrd-switch-root.service. Nov 1 00:41:18.780827 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:41:18.780847 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 00:41:18.780865 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 00:41:18.780887 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Nov 1 00:41:18.780905 systemd[1]: Created slice system-getty.slice. Nov 1 00:41:18.780922 systemd[1]: Created slice system-modprobe.slice. Nov 1 00:41:18.780941 systemd[1]: Created slice system-serial\x2dgetty.slice. 
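[Editor's note] The two warnings about locksmithd.service refer to directives that systemd 252 still honours but has deprecated; a drop-in using the suggested replacements would look roughly like this (path and values are illustrative, not what locksmithd actually sets):

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
    [Service]
    CPUWeight=100      # replaces CPUShares=
    MemoryMax=512M     # replaces MemoryLimit=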
Nov 1 00:41:18.780961 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 00:41:18.780981 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 00:41:18.780998 systemd[1]: Created slice user.slice. Nov 1 00:41:18.781016 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:41:18.781035 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 00:41:18.781056 systemd[1]: Set up automount boot.automount. Nov 1 00:41:18.781075 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 00:41:18.781093 systemd[1]: Stopped target initrd-switch-root.target. Nov 1 00:41:18.781110 systemd[1]: Stopped target initrd-fs.target. Nov 1 00:41:18.781129 systemd[1]: Stopped target initrd-root-fs.target. Nov 1 00:41:18.781147 systemd[1]: Reached target integritysetup.target. Nov 1 00:41:18.781165 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:41:18.781187 systemd[1]: Reached target remote-fs.target. Nov 1 00:41:18.781205 systemd[1]: Reached target slices.target. Nov 1 00:41:18.781241 systemd[1]: Reached target swap.target. Nov 1 00:41:18.781262 systemd[1]: Reached target torcx.target. Nov 1 00:41:18.781282 systemd[1]: Reached target veritysetup.target. Nov 1 00:41:18.781303 systemd[1]: Listening on systemd-coredump.socket. Nov 1 00:41:18.781339 systemd[1]: Listening on systemd-initctl.socket. Nov 1 00:41:18.781364 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:41:18.781385 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:41:18.781410 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:41:18.781431 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 00:41:18.781452 systemd[1]: Mounting dev-hugepages.mount... Nov 1 00:41:18.781473 systemd[1]: Mounting dev-mqueue.mount... Nov 1 00:41:18.781493 systemd[1]: Mounting media.mount... Nov 1 00:41:18.781514 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:18.781538 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 00:41:18.781559 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 00:41:18.781579 systemd[1]: Mounting tmp.mount... Nov 1 00:41:18.781600 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 00:41:18.781621 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:41:18.781641 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:41:18.781663 systemd[1]: Starting modprobe@configfs.service... Nov 1 00:41:18.781684 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:41:18.781705 systemd[1]: Starting modprobe@drm.service... Nov 1 00:41:18.781729 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:41:18.781750 systemd[1]: Starting modprobe@fuse.service... Nov 1 00:41:18.781771 systemd[1]: Starting modprobe@loop.service... Nov 1 00:41:18.781793 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:41:18.781814 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:41:18.781835 systemd[1]: Stopped systemd-fsck-root.service. Nov 1 00:41:18.781857 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:41:18.781879 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:41:18.781903 systemd[1]: Stopped systemd-journald.service. Nov 1 00:41:18.781924 systemd[1]: Starting systemd-journald.service... Nov 1 00:41:18.781944 systemd[1]: Starting systemd-modules-load.service... 
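[Editor's note] Each modprobe@<name>.service started in this block is systemd's template wrapper around loading one kernel module; instantiating it by hand is essentially the same as calling modprobe directly, e.g. for the loop module:

    systemctl start modprobe@loop.service
    # roughly equivalent to:
    modprobe loop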
Nov 1 00:41:18.781965 kernel: loop: module loaded Nov 1 00:41:18.781986 systemd[1]: Starting systemd-network-generator.service... Nov 1 00:41:18.782006 kernel: fuse: init (API version 7.34) Nov 1 00:41:18.782027 systemd[1]: Starting systemd-remount-fs.service... Nov 1 00:41:18.782047 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:41:18.782068 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:41:18.782089 systemd[1]: Stopped verity-setup.service. Nov 1 00:41:18.782110 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:18.782126 systemd[1]: Mounted dev-hugepages.mount. Nov 1 00:41:18.782143 systemd[1]: Mounted dev-mqueue.mount. Nov 1 00:41:18.782162 systemd[1]: Mounted media.mount. Nov 1 00:41:18.782179 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 00:41:18.782197 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 00:41:18.782216 systemd[1]: Mounted tmp.mount. Nov 1 00:41:18.786304 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:41:18.786346 systemd-journald[1412]: Journal started Nov 1 00:41:18.786427 systemd-journald[1412]: Runtime Journal (/run/log/journal/ec222dfad1d907c6e780c42467920b63) is 4.8M, max 38.3M, 33.5M free. Nov 1 00:41:14.959000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:41:15.073000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:41:15.073000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:41:15.074000 audit: BPF prog-id=10 op=LOAD Nov 1 00:41:15.074000 audit: BPF prog-id=10 op=UNLOAD Nov 1 00:41:15.074000 audit: BPF prog-id=11 op=LOAD Nov 1 00:41:15.074000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:41:15.277000 audit[1330]: AVC avc: denied { associate } for pid=1330 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 00:41:15.277000 audit[1330]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=1313 pid=1330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:41:15.277000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:41:15.280000 audit[1330]: AVC avc: denied { associate } for pid=1330 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 00:41:15.280000 audit[1330]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1313 pid=1330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:41:15.280000 audit: CWD cwd="/" Nov 1 00:41:15.280000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:15.280000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:15.280000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:41:18.557000 audit: BPF prog-id=12 op=LOAD Nov 1 00:41:18.558000 audit: BPF prog-id=3 op=UNLOAD Nov 1 00:41:18.558000 audit: BPF prog-id=13 op=LOAD Nov 1 00:41:18.558000 audit: BPF prog-id=14 op=LOAD Nov 1 00:41:18.558000 audit: BPF prog-id=4 op=UNLOAD Nov 1 00:41:18.558000 audit: BPF prog-id=5 op=UNLOAD Nov 1 00:41:18.558000 audit: BPF prog-id=15 op=LOAD Nov 1 00:41:18.558000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:41:18.558000 audit: BPF prog-id=16 op=LOAD Nov 1 00:41:18.559000 audit: BPF prog-id=17 op=LOAD Nov 1 00:41:18.559000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:41:18.559000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:41:18.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.567000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:41:18.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:18.711000 audit: BPF prog-id=18 op=LOAD Nov 1 00:41:18.711000 audit: BPF prog-id=19 op=LOAD Nov 1 00:41:18.711000 audit: BPF prog-id=20 op=LOAD Nov 1 00:41:18.711000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:41:18.711000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:41:18.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.778000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 00:41:18.778000 audit[1412]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff0949b950 a2=4000 a3=7fff0949b9ec items=0 ppid=1 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:41:18.778000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 00:41:18.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.555806 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:41:15.262747 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:41:18.555821 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Nov 1 00:41:15.263624 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:41:18.793112 systemd[1]: Started systemd-journald.service. Nov 1 00:41:18.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.559947 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 00:41:15.263644 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:41:15.263678 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Nov 1 00:41:18.793492 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:41:15.263688 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=debug msg="skipped missing lower profile" missing profile=oem Nov 1 00:41:18.793697 systemd[1]: Finished modprobe@configfs.service. 
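[Editor's note] With systemd-journald now running against the runtime journal noted above (/run/log/journal, capped at 38.3M), the same messages can be re-read or sized at any time; a sketch:

    journalctl --disk-usage
    journalctl -b -u systemd-networkd    # e.g. replay the networkd messages from this boot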
Nov 1 00:41:15.263719 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Nov 1 00:41:15.263732 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Nov 1 00:41:15.263930 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Nov 1 00:41:15.263970 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:41:18.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.795831 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:41:15.263983 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:41:18.796468 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:41:15.266100 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Nov 1 00:41:15.266141 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Nov 1 00:41:15.266162 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Nov 1 00:41:15.266177 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Nov 1 00:41:18.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:15.266195 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Nov 1 00:41:15.266208 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Nov 1 00:41:18.027561 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:18Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:41:18.027816 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:18Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:41:18.027935 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:18Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:41:18.798056 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:41:18.028124 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:18Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:41:18.028172 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:18Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Nov 1 00:41:18.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.028266 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-11-01T00:41:18Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Nov 1 00:41:18.798344 systemd[1]: Finished modprobe@drm.service. Nov 1 00:41:18.799850 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:41:18.800022 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:41:18.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:18.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.801379 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:41:18.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.804611 systemd[1]: Finished modprobe@fuse.service. Nov 1 00:41:18.806171 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:41:18.806424 systemd[1]: Finished modprobe@loop.service. Nov 1 00:41:18.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.808698 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:41:18.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.810883 systemd[1]: Finished systemd-network-generator.service. Nov 1 00:41:18.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.812666 systemd[1]: Finished systemd-remount-fs.service. Nov 1 00:41:18.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.815383 systemd[1]: Reached target network-pre.target. Nov 1 00:41:18.818095 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 00:41:18.827475 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 00:41:18.830318 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:41:18.832783 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 00:41:18.836307 systemd[1]: Starting systemd-journal-flush.service... Nov 1 00:41:18.837145 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:41:18.839412 systemd[1]: Starting systemd-random-seed.service... Nov 1 00:41:18.841341 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:41:18.844685 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:41:18.847372 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 00:41:18.849415 systemd[1]: Mounted sys-kernel-config.mount. 
Nov 1 00:41:18.862807 systemd[1]: Finished systemd-random-seed.service. Nov 1 00:41:18.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.863841 systemd[1]: Reached target first-boot-complete.target. Nov 1 00:41:18.878082 systemd-journald[1412]: Time spent on flushing to /var/log/journal/ec222dfad1d907c6e780c42467920b63 is 53.089ms for 1219 entries. Nov 1 00:41:18.878082 systemd-journald[1412]: System Journal (/var/log/journal/ec222dfad1d907c6e780c42467920b63) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:41:18.944972 systemd-journald[1412]: Received client request to flush runtime journal. Nov 1 00:41:18.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:18.885341 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 00:41:18.887882 systemd[1]: Starting systemd-sysusers.service... Nov 1 00:41:18.946424 udevadm[1447]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 1 00:41:18.905988 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:41:18.908376 systemd[1]: Starting systemd-udev-settle.service... Nov 1 00:41:18.917817 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:41:18.946117 systemd[1]: Finished systemd-journal-flush.service. Nov 1 00:41:18.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:19.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:19.007211 systemd[1]: Finished systemd-sysusers.service. Nov 1 00:41:19.444165 kernel: kauditd_printk_skb: 100 callbacks suppressed Nov 1 00:41:19.444264 kernel: audit: type=1130 audit(1761957679.438:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:19.444288 kernel: audit: type=1334 audit(1761957679.439:139): prog-id=21 op=LOAD Nov 1 00:41:19.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:19.439000 audit: BPF prog-id=21 op=LOAD Nov 1 00:41:19.438280 systemd[1]: Finished systemd-hwdb-update.service. 
Nov 1 00:41:19.446084 kernel: audit: type=1334 audit(1761957679.445:140): prog-id=22 op=LOAD Nov 1 00:41:19.445000 audit: BPF prog-id=22 op=LOAD Nov 1 00:41:19.447285 kernel: audit: type=1334 audit(1761957679.445:141): prog-id=7 op=UNLOAD Nov 1 00:41:19.445000 audit: BPF prog-id=7 op=UNLOAD Nov 1 00:41:19.447908 systemd[1]: Starting systemd-udevd.service... Nov 1 00:41:19.448353 kernel: audit: type=1334 audit(1761957679.445:142): prog-id=8 op=UNLOAD Nov 1 00:41:19.445000 audit: BPF prog-id=8 op=UNLOAD Nov 1 00:41:19.465974 systemd-udevd[1449]: Using default interface naming scheme 'v252'. Nov 1 00:41:19.528803 systemd[1]: Started systemd-udevd.service. Nov 1 00:41:19.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:19.534881 kernel: audit: type=1130 audit(1761957679.529:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:19.534943 kernel: audit: type=1334 audit(1761957679.533:144): prog-id=23 op=LOAD Nov 1 00:41:19.533000 audit: BPF prog-id=23 op=LOAD Nov 1 00:41:19.535589 systemd[1]: Starting systemd-networkd.service... Nov 1 00:41:19.550393 kernel: audit: type=1334 audit(1761957679.546:145): prog-id=24 op=LOAD Nov 1 00:41:19.550480 kernel: audit: type=1334 audit(1761957679.548:146): prog-id=25 op=LOAD Nov 1 00:41:19.550501 kernel: audit: type=1334 audit(1761957679.549:147): prog-id=26 op=LOAD Nov 1 00:41:19.546000 audit: BPF prog-id=24 op=LOAD Nov 1 00:41:19.548000 audit: BPF prog-id=25 op=LOAD Nov 1 00:41:19.549000 audit: BPF prog-id=26 op=LOAD Nov 1 00:41:19.552403 systemd[1]: Starting systemd-userdbd.service... Nov 1 00:41:19.577542 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Nov 1 00:41:19.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:19.592127 systemd[1]: Started systemd-userdbd.service. Nov 1 00:41:19.597586 (udev-worker)[1451]: Network interface NamePolicy= disabled on kernel command line. 
Nov 1 00:41:19.628262 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:41:19.638749 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:41:19.638832 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Nov 1 00:41:19.643586 kernel: ACPI: button: Sleep Button [SLPF] Nov 1 00:41:19.639000 audit[1462]: AVC avc: denied { confidentiality } for pid=1462 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 00:41:19.639000 audit[1462]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ec1b521960 a1=338ec a2=7f186820abc5 a3=5 items=110 ppid=1449 pid=1462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:41:19.639000 audit: CWD cwd="/" Nov 1 00:41:19.639000 audit: PATH item=0 name=(null) inode=43 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=1 name=(null) inode=13956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=2 name=(null) inode=13956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=3 name=(null) inode=13957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=4 name=(null) inode=13956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=5 name=(null) inode=13958 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=6 name=(null) inode=13956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=7 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=8 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=9 name=(null) inode=13960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=10 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=11 name=(null) inode=13961 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 
00:41:19.639000 audit: PATH item=12 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=13 name=(null) inode=13962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=14 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=15 name=(null) inode=13963 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=16 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=17 name=(null) inode=13964 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=18 name=(null) inode=13956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=19 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=20 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=21 name=(null) inode=13966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=22 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=23 name=(null) inode=13967 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=24 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=25 name=(null) inode=13968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=26 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=27 name=(null) inode=13969 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=28 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=29 name=(null) inode=13970 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=30 name=(null) inode=13956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=31 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=32 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=33 name=(null) inode=13972 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=34 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=35 name=(null) inode=13973 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=36 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=37 name=(null) inode=13974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=38 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=39 name=(null) inode=13975 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=40 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=41 name=(null) inode=13976 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=42 name=(null) inode=13956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=43 name=(null) inode=13977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=44 name=(null) inode=13977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=45 name=(null) inode=13978 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=46 name=(null) inode=13977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=47 name=(null) inode=13979 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=48 name=(null) inode=13977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=49 name=(null) inode=13980 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=50 name=(null) inode=13977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=51 name=(null) inode=13981 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=52 name=(null) inode=13977 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=53 name=(null) inode=13982 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=54 name=(null) inode=43 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=55 name=(null) inode=13983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=56 name=(null) inode=13983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=57 name=(null) inode=13984 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=58 name=(null) inode=13983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=59 name=(null) inode=13985 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=60 name=(null) inode=13983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=61 name=(null) 
inode=13986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=62 name=(null) inode=13986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=63 name=(null) inode=13987 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=64 name=(null) inode=13986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=65 name=(null) inode=13988 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=66 name=(null) inode=13986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=67 name=(null) inode=13989 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=68 name=(null) inode=13986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=69 name=(null) inode=13990 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=70 name=(null) inode=13986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=71 name=(null) inode=13991 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=72 name=(null) inode=13983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=73 name=(null) inode=13992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=74 name=(null) inode=13992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=75 name=(null) inode=13993 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=76 name=(null) inode=13992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=77 name=(null) inode=13994 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=78 name=(null) inode=13992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=79 name=(null) inode=13995 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=80 name=(null) inode=13992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=81 name=(null) inode=13996 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=82 name=(null) inode=13992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=83 name=(null) inode=13997 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=84 name=(null) inode=13983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=85 name=(null) inode=13998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=86 name=(null) inode=13998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=87 name=(null) inode=13999 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=88 name=(null) inode=13998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=89 name=(null) inode=14000 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=90 name=(null) inode=13998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=91 name=(null) inode=14001 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=92 name=(null) inode=13998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=93 name=(null) inode=14002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=94 name=(null) inode=13998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=95 name=(null) inode=14003 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=96 name=(null) inode=13983 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=97 name=(null) inode=14004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=98 name=(null) inode=14004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=99 name=(null) inode=14005 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=100 name=(null) inode=14004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=101 name=(null) inode=14006 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=102 name=(null) inode=14004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=103 name=(null) inode=14007 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=104 name=(null) inode=14004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=105 name=(null) inode=14008 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=106 name=(null) inode=14004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=107 name=(null) inode=14009 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PATH item=109 name=(null) inode=14989 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:19.639000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 
00:41:19.685255 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 1 00:41:19.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:19.706048 systemd-networkd[1453]: lo: Link UP Nov 1 00:41:19.706057 systemd-networkd[1453]: lo: Gained carrier Nov 1 00:41:19.706487 systemd-networkd[1453]: Enumeration completed Nov 1 00:41:19.706580 systemd[1]: Started systemd-networkd.service. Nov 1 00:41:19.708330 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:41:19.710448 systemd-networkd[1453]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:41:19.714212 systemd-networkd[1453]: eth0: Link UP Nov 1 00:41:19.714398 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:41:19.714600 systemd-networkd[1453]: eth0: Gained carrier Nov 1 00:41:19.725477 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Nov 1 00:41:19.730406 systemd-networkd[1453]: eth0: DHCPv4 address 172.31.16.189/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 1 00:41:19.733258 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:41:19.837326 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:41:19.838655 systemd[1]: Finished systemd-udev-settle.service. Nov 1 00:41:19.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:19.840666 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:41:19.894854 lvm[1563]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:41:19.923636 systemd[1]: Finished lvm2-activation-early.service. Nov 1 00:41:19.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:19.924401 systemd[1]: Reached target cryptsetup.target. Nov 1 00:41:19.926121 systemd[1]: Starting lvm2-activation.service... Nov 1 00:41:19.930918 lvm[1564]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:41:19.958575 systemd[1]: Finished lvm2-activation.service. Nov 1 00:41:19.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:19.959410 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:41:19.959963 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:41:19.959990 systemd[1]: Reached target local-fs.target. Nov 1 00:41:19.960506 systemd[1]: Reached target machines.target. Nov 1 00:41:19.962241 systemd[1]: Starting ldconfig.service... Nov 1 00:41:19.963684 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:41:19.963747 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Nov 1 00:41:19.965116 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:41:19.966945 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:41:19.969915 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:41:19.972581 systemd[1]: Starting systemd-sysext.service... Nov 1 00:41:19.976087 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1566 (bootctl) Nov 1 00:41:19.977527 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:41:19.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:19.995941 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:41:20.001222 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:41:20.008108 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:41:20.008332 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:41:20.023686 kernel: loop0: detected capacity change from 0 to 229808 Nov 1 00:41:20.094257 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:41:20.118278 kernel: loop1: detected capacity change from 0 to 229808 Nov 1 00:41:20.139966 (sd-sysext)[1578]: Using extensions 'kubernetes'. Nov 1 00:41:20.140737 (sd-sysext)[1578]: Merged extensions into '/usr'. Nov 1 00:41:20.153120 systemd-fsck[1575]: fsck.fat 4.2 (2021-01-31) Nov 1 00:41:20.153120 systemd-fsck[1575]: /dev/nvme0n1p1: 790 files, 120773/258078 clusters Nov 1 00:41:20.158217 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:41:20.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.161574 systemd[1]: Mounting boot.mount... Nov 1 00:41:20.162533 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:20.169785 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:41:20.170860 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:41:20.172704 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:41:20.175987 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:41:20.180746 systemd[1]: Starting modprobe@loop.service... Nov 1 00:41:20.181990 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:41:20.182187 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:41:20.182408 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:20.185980 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:41:20.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:20.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.187838 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:41:20.188022 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:41:20.189396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:41:20.189565 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:41:20.190886 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:41:20.191159 systemd[1]: Finished modprobe@loop.service. Nov 1 00:41:20.193083 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:41:20.193439 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:41:20.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.196176 systemd[1]: Finished systemd-sysext.service. Nov 1 00:41:20.200036 systemd[1]: Starting ensure-sysext.service... Nov 1 00:41:20.205072 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:41:20.211761 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:41:20.213754 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:41:20.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.215686 systemd[1]: Mounted boot.mount. Nov 1 00:41:20.222884 systemd[1]: Reloading. Nov 1 00:41:20.265138 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:41:20.300335 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Nov 1 00:41:20.309788 /usr/lib/systemd/system-generators/torcx-generator[1616]: time="2025-11-01T00:41:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:41:20.311316 /usr/lib/systemd/system-generators/torcx-generator[1616]: time="2025-11-01T00:41:20Z" level=info msg="torcx already run" Nov 1 00:41:20.327927 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:41:20.449077 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:41:20.449317 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:41:20.480078 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:41:20.604000 audit: BPF prog-id=27 op=LOAD Nov 1 00:41:20.604000 audit: BPF prog-id=28 op=LOAD Nov 1 00:41:20.605000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:41:20.605000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:41:20.605000 audit: BPF prog-id=29 op=LOAD Nov 1 00:41:20.606000 audit: BPF prog-id=24 op=UNLOAD Nov 1 00:41:20.606000 audit: BPF prog-id=30 op=LOAD Nov 1 00:41:20.606000 audit: BPF prog-id=31 op=LOAD Nov 1 00:41:20.606000 audit: BPF prog-id=25 op=UNLOAD Nov 1 00:41:20.606000 audit: BPF prog-id=26 op=UNLOAD Nov 1 00:41:20.608000 audit: BPF prog-id=32 op=LOAD Nov 1 00:41:20.608000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:41:20.608000 audit: BPF prog-id=33 op=LOAD Nov 1 00:41:20.609000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:41:20.609000 audit: BPF prog-id=34 op=LOAD Nov 1 00:41:20.609000 audit: BPF prog-id=35 op=LOAD Nov 1 00:41:20.609000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:41:20.609000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:41:20.615106 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:41:20.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.637965 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:20.638382 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:41:20.640740 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:41:20.643107 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:41:20.646324 systemd[1]: Starting modprobe@loop.service... Nov 1 00:41:20.647218 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:41:20.647568 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:41:20.647900 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:20.649278 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Nov 1 00:41:20.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.650219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:41:20.650405 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:41:20.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.651454 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:41:20.651692 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:41:20.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.652771 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:41:20.652929 systemd[1]: Finished modprobe@loop.service. Nov 1 00:41:20.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.655394 systemd[1]: Starting audit-rules.service... Nov 1 00:41:20.657684 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:41:20.662000 audit: BPF prog-id=36 op=LOAD Nov 1 00:41:20.660425 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:41:20.661187 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:41:20.661418 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:41:20.663986 systemd[1]: Starting systemd-resolved.service... Nov 1 00:41:20.667000 audit: BPF prog-id=37 op=LOAD Nov 1 00:41:20.670522 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:41:20.674319 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:41:20.679862 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:20.680284 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:41:20.684516 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:41:20.686861 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:41:20.691331 systemd[1]: Starting modprobe@loop.service... Nov 1 00:41:20.692022 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Nov 1 00:41:20.692271 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:41:20.692442 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:20.694067 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:41:20.694279 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:41:20.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.701000 audit[1681]: SYSTEM_BOOT pid=1681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.703278 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:41:20.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.706910 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:41:20.707098 systemd[1]: Finished modprobe@loop.service. Nov 1 00:41:20.708310 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:20.709056 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:41:20.712188 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:41:20.715772 systemd[1]: Starting modprobe@drm.service... Nov 1 00:41:20.716584 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:41:20.716869 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:41:20.717176 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:41:20.717407 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:20.725039 systemd[1]: Finished ensure-sysext.service. Nov 1 00:41:20.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:20.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.726116 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:41:20.727553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:41:20.727721 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:41:20.728419 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:41:20.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.731467 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:41:20.731626 systemd[1]: Finished modprobe@drm.service. Nov 1 00:41:20.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.749693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:41:20.749882 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:41:20.750751 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:41:20.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.805905 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:41:20.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:20.833279 systemd-resolved[1679]: Positive Trust Anchors: Nov 1 00:41:20.833300 systemd-resolved[1679]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:41:20.833342 systemd-resolved[1679]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:41:20.838000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:41:20.838000 audit[1701]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffddc6d5d50 a2=420 a3=0 items=0 ppid=1676 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:41:20.838000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:41:20.839190 augenrules[1701]: No rules Nov 1 00:41:20.839718 systemd[1]: Finished audit-rules.service. Nov 1 00:41:20.855611 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:41:20.856349 systemd[1]: Reached target time-set.target. Nov 1 00:41:20.858738 ldconfig[1565]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:41:20.872156 systemd-resolved[1679]: Defaulting to hostname 'linux'. Nov 1 00:41:20.873377 systemd[1]: Finished ldconfig.service. Nov 1 00:41:20.875071 systemd[1]: Starting systemd-update-done.service... Nov 1 00:41:20.875519 systemd[1]: Started systemd-resolved.service. Nov 1 00:41:20.875936 systemd[1]: Reached target network.target. Nov 1 00:41:20.876265 systemd[1]: Reached target nss-lookup.target. Nov 1 00:41:20.883356 systemd[1]: Finished systemd-update-done.service. Nov 1 00:41:20.883792 systemd[1]: Reached target sysinit.target. Nov 1 00:41:20.884188 systemd[1]: Started motdgen.path. Nov 1 00:41:20.884537 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:41:20.885014 systemd[1]: Started logrotate.timer. Nov 1 00:41:20.885418 systemd[1]: Started mdadm.timer. Nov 1 00:41:20.885721 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:41:20.886027 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:41:20.886052 systemd[1]: Reached target paths.target. Nov 1 00:41:20.886388 systemd[1]: Reached target timers.target. Nov 1 00:41:20.886990 systemd[1]: Listening on dbus.socket. Nov 1 00:41:20.888354 systemd[1]: Starting docker.socket... Nov 1 00:41:20.892150 systemd[1]: Listening on sshd.socket. Nov 1 00:41:20.892653 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:41:20.893177 systemd[1]: Listening on docker.socket. Nov 1 00:41:20.893614 systemd[1]: Reached target sockets.target. Nov 1 00:41:20.893967 systemd[1]: Reached target basic.target. Nov 1 00:41:20.894346 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Nov 1 00:41:20.894378 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:41:20.895504 systemd[1]: Starting containerd.service... Nov 1 00:41:20.897404 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Nov 1 00:41:20.899432 systemd[1]: Starting dbus.service... Nov 1 00:41:20.901432 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:41:20.903007 systemd[1]: Starting extend-filesystems.service... Nov 1 00:41:20.904006 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:41:20.905142 systemd[1]: Starting motdgen.service... Nov 1 00:41:20.909350 systemd[1]: Starting prepare-helm.service... Nov 1 00:41:20.911378 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:41:20.913313 systemd[1]: Starting sshd-keygen.service... Nov 1 00:41:20.917587 systemd[1]: Starting systemd-logind.service... Nov 1 00:41:20.917970 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:41:20.918044 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:41:20.918507 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:41:20.920425 systemd[1]: Starting update-engine.service... Nov 1 00:41:20.923473 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:41:20.926547 jq[1713]: false Nov 1 00:41:20.941504 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:41:20.941696 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:41:20.942849 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:41:20.943465 jq[1722]: true Nov 1 00:41:20.943015 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 00:41:20.958771 tar[1732]: linux-amd64/LICENSE Nov 1 00:41:20.958771 tar[1732]: linux-amd64/helm Nov 1 00:41:20.969152 jq[1733]: true Nov 1 00:41:20.978015 systemd[1]: Started dbus.service. Nov 1 00:41:20.977852 dbus-daemon[1712]: [system] SELinux support is enabled Nov 1 00:41:20.980562 dbus-daemon[1712]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1453 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 1 00:41:20.980676 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:41:20.980706 systemd[1]: Reached target system-config.target. Nov 1 00:41:20.981110 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:41:20.981132 systemd[1]: Reached target user-config.target. Nov 1 00:41:20.985456 dbus-daemon[1712]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 00:41:20.990498 systemd[1]: Starting systemd-hostnamed.service... Nov 1 00:41:20.991209 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:41:20.991415 systemd[1]: Finished motdgen.service. 
Nov 1 00:41:21.016465 extend-filesystems[1714]: Found loop1 Nov 1 00:41:21.016465 extend-filesystems[1714]: Found nvme0n1 Nov 1 00:41:21.016465 extend-filesystems[1714]: Found nvme0n1p1 Nov 1 00:41:21.016465 extend-filesystems[1714]: Found nvme0n1p2 Nov 1 00:41:21.016465 extend-filesystems[1714]: Found nvme0n1p3 Nov 1 00:41:21.016465 extend-filesystems[1714]: Found usr Nov 1 00:41:21.016465 extend-filesystems[1714]: Found nvme0n1p4 Nov 1 00:41:21.016465 extend-filesystems[1714]: Found nvme0n1p6 Nov 1 00:41:21.016465 extend-filesystems[1714]: Found nvme0n1p7 Nov 1 00:41:21.018533 systemd-timesyncd[1680]: Contacted time server 85.209.17.10:123 (0.flatcar.pool.ntp.org). Nov 1 00:41:21.018730 systemd-timesyncd[1680]: Initial clock synchronization to Sat 2025-11-01 00:41:21.291087 UTC. Nov 1 00:41:21.020322 extend-filesystems[1714]: Found nvme0n1p9 Nov 1 00:41:21.020628 extend-filesystems[1714]: Checking size of /dev/nvme0n1p9 Nov 1 00:41:21.037862 bash[1760]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:41:21.038025 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 00:41:21.047048 extend-filesystems[1714]: Resized partition /dev/nvme0n1p9 Nov 1 00:41:21.053877 extend-filesystems[1765]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 00:41:21.063247 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 1 00:41:21.073312 update_engine[1721]: I1101 00:41:21.072706 1721 main.cc:92] Flatcar Update Engine starting Nov 1 00:41:21.077898 systemd[1]: Started update-engine.service. Nov 1 00:41:21.079987 systemd[1]: Started locksmithd.service. Nov 1 00:41:21.084400 update_engine[1721]: I1101 00:41:21.084368 1721 update_check_scheduler.cc:74] Next update check in 3m25s Nov 1 00:41:21.099573 env[1734]: time="2025-11-01T00:41:21.099515389Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 00:41:21.127518 systemd-logind[1720]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:41:21.128211 systemd-logind[1720]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 1 00:41:21.128324 systemd-logind[1720]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:41:21.128536 systemd-logind[1720]: New seat seat0. Nov 1 00:41:21.137986 systemd[1]: Started systemd-logind.service. Nov 1 00:41:21.148699 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 1 00:41:21.169288 extend-filesystems[1765]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 1 00:41:21.169288 extend-filesystems[1765]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 1 00:41:21.169288 extend-filesystems[1765]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 1 00:41:21.172675 extend-filesystems[1714]: Resized filesystem in /dev/nvme0n1p9 Nov 1 00:41:21.169941 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:41:21.170098 systemd[1]: Finished extend-filesystems.service. Nov 1 00:41:21.212369 systemd-networkd[1453]: eth0: Gained IPv6LL Nov 1 00:41:21.215091 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:41:21.215846 systemd[1]: Reached target network-online.target. Nov 1 00:41:21.217755 systemd[1]: Started amazon-ssm-agent.service. Nov 1 00:41:21.219948 systemd[1]: Starting kubelet.service... Nov 1 00:41:21.222120 systemd[1]: Started nvidia.service. Nov 1 00:41:21.244012 env[1734]: time="2025-11-01T00:41:21.243923989Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Nov 1 00:41:21.244254 env[1734]: time="2025-11-01T00:41:21.244211905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:41:21.247939 env[1734]: time="2025-11-01T00:41:21.247904788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:41:21.248054 env[1734]: time="2025-11-01T00:41:21.248041755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:41:21.248336 env[1734]: time="2025-11-01T00:41:21.248317061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:41:21.248402 env[1734]: time="2025-11-01T00:41:21.248392462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:41:21.248450 env[1734]: time="2025-11-01T00:41:21.248440013Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:41:21.248490 env[1734]: time="2025-11-01T00:41:21.248482109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:41:21.248606 env[1734]: time="2025-11-01T00:41:21.248594675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:41:21.248855 env[1734]: time="2025-11-01T00:41:21.248840811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:41:21.258726 env[1734]: time="2025-11-01T00:41:21.258685596Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:41:21.264450 env[1734]: time="2025-11-01T00:41:21.264408737Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:41:21.268038 env[1734]: time="2025-11-01T00:41:21.267985555Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:41:21.284580 env[1734]: time="2025-11-01T00:41:21.284530241Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:41:21.287030 dbus-daemon[1712]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 1 00:41:21.287299 systemd[1]: Started systemd-hostnamed.service. Nov 1 00:41:21.299688 dbus-daemon[1712]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1750 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 1 00:41:21.306149 systemd[1]: Starting polkit.service... Nov 1 00:41:21.360852 env[1734]: time="2025-11-01T00:41:21.360771663Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Nov 1 00:41:21.361036 env[1734]: time="2025-11-01T00:41:21.361017137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:41:21.361166 env[1734]: time="2025-11-01T00:41:21.361149212Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:41:21.361382 env[1734]: time="2025-11-01T00:41:21.361306414Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:41:21.361516 env[1734]: time="2025-11-01T00:41:21.361498289Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:41:21.361626 env[1734]: time="2025-11-01T00:41:21.361609322Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:41:21.361720 env[1734]: time="2025-11-01T00:41:21.361704817Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:41:21.361816 env[1734]: time="2025-11-01T00:41:21.361800500Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:41:21.361909 env[1734]: time="2025-11-01T00:41:21.361894359Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 00:41:21.362004 env[1734]: time="2025-11-01T00:41:21.361988723Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:41:21.362107 env[1734]: time="2025-11-01T00:41:21.362091572Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:41:21.362199 env[1734]: time="2025-11-01T00:41:21.362185899Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:41:21.362467 env[1734]: time="2025-11-01T00:41:21.362449288Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:41:21.362679 env[1734]: time="2025-11-01T00:41:21.362659944Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:41:21.363299 env[1734]: time="2025-11-01T00:41:21.363268819Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:41:21.363461 env[1734]: time="2025-11-01T00:41:21.363438666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:41:21.363585 env[1734]: time="2025-11-01T00:41:21.363565577Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:41:21.363793 env[1734]: time="2025-11-01T00:41:21.363773835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:41:21.364013 env[1734]: time="2025-11-01T00:41:21.363980031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:41:21.369353 env[1734]: time="2025-11-01T00:41:21.369297647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:41:21.370947 env[1734]: time="2025-11-01T00:41:21.370884465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Nov 1 00:41:21.371316 env[1734]: time="2025-11-01T00:41:21.371293443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:41:21.371452 env[1734]: time="2025-11-01T00:41:21.371433971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:41:21.371568 env[1734]: time="2025-11-01T00:41:21.371549680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:41:21.371681 env[1734]: time="2025-11-01T00:41:21.371665846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:41:21.371794 env[1734]: time="2025-11-01T00:41:21.371779173Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:41:21.372122 env[1734]: time="2025-11-01T00:41:21.372083611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:41:21.372268 env[1734]: time="2025-11-01T00:41:21.372251468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:41:21.372365 env[1734]: time="2025-11-01T00:41:21.372351089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:41:21.372456 env[1734]: time="2025-11-01T00:41:21.372441426Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:41:21.372554 env[1734]: time="2025-11-01T00:41:21.372536703Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:41:21.372637 env[1734]: time="2025-11-01T00:41:21.372623786Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:41:21.372736 env[1734]: time="2025-11-01T00:41:21.372718481Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:41:21.375082 polkitd[1825]: Started polkitd version 121 Nov 1 00:41:21.375547 env[1734]: time="2025-11-01T00:41:21.375501798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:41:21.376022 env[1734]: time="2025-11-01T00:41:21.375944974Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:41:21.379124 env[1734]: time="2025-11-01T00:41:21.379097050Z" level=info msg="Connect containerd service" Nov 1 00:41:21.379322 env[1734]: time="2025-11-01T00:41:21.379302855Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:41:21.380259 env[1734]: time="2025-11-01T00:41:21.380217120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:41:21.382259 env[1734]: time="2025-11-01T00:41:21.382219847Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Nov 1 00:41:21.408851 env[1734]: time="2025-11-01T00:41:21.391271458Z" level=info msg="Start subscribing containerd event" Nov 1 00:41:21.409090 env[1734]: time="2025-11-01T00:41:21.409071239Z" level=info msg="Start recovering state" Nov 1 00:41:21.409282 env[1734]: time="2025-11-01T00:41:21.409264875Z" level=info msg="Start event monitor" Nov 1 00:41:21.409373 env[1734]: time="2025-11-01T00:41:21.409359729Z" level=info msg="Start snapshots syncer" Nov 1 00:41:21.409477 env[1734]: time="2025-11-01T00:41:21.409464434Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:41:21.409569 env[1734]: time="2025-11-01T00:41:21.409553934Z" level=info msg="Start streaming server" Nov 1 00:41:21.409964 env[1734]: time="2025-11-01T00:41:21.409940288Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:41:21.410215 systemd[1]: Started containerd.service. Nov 1 00:41:21.412339 env[1734]: time="2025-11-01T00:41:21.412302724Z" level=info msg="containerd successfully booted in 0.321721s" Nov 1 00:41:21.423373 polkitd[1825]: Loading rules from directory /etc/polkit-1/rules.d Nov 1 00:41:21.460423 polkitd[1825]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 1 00:41:21.466137 polkitd[1825]: Finished loading, compiling and executing 2 rules Nov 1 00:41:21.466947 dbus-daemon[1712]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 1 00:41:21.467151 systemd[1]: Started polkit.service. Nov 1 00:41:21.469706 polkitd[1825]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 1 00:41:21.475064 amazon-ssm-agent[1812]: 2025/11/01 00:41:21 Failed to load instance info from vault. RegistrationKey does not exist. Nov 1 00:41:21.486241 amazon-ssm-agent[1812]: Initializing new seelog logger Nov 1 00:41:21.502681 amazon-ssm-agent[1812]: New Seelog Logger Creation Complete Nov 1 00:41:21.502949 systemd-hostnamed[1750]: Hostname set to (transient) Nov 1 00:41:21.505645 amazon-ssm-agent[1812]: 2025/11/01 00:41:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 1 00:41:21.505787 amazon-ssm-agent[1812]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 1 00:41:21.506093 amazon-ssm-agent[1812]: 2025/11/01 00:41:21 processing appconfig overrides Nov 1 00:41:21.511358 systemd[1]: nvidia.service: Deactivated successfully. Nov 1 00:41:21.512787 systemd-resolved[1679]: System hostname changed to 'ip-172-31-16-189'. Nov 1 00:41:21.814606 coreos-metadata[1711]: Nov 01 00:41:21.809 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 1 00:41:21.820588 coreos-metadata[1711]: Nov 01 00:41:21.820 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Nov 1 00:41:21.822222 coreos-metadata[1711]: Nov 01 00:41:21.822 INFO Fetch successful Nov 1 00:41:21.822385 coreos-metadata[1711]: Nov 01 00:41:21.822 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 1 00:41:21.822943 coreos-metadata[1711]: Nov 01 00:41:21.822 INFO Fetch successful Nov 1 00:41:21.826666 unknown[1711]: wrote ssh authorized keys file for user: core Nov 1 00:41:21.863880 update-ssh-keys[1898]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:41:21.864239 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Nov 1 00:41:21.972739 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO Create new startup processor Nov 1 00:41:21.973959 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [LongRunningPluginsManager] registered plugins: {} Nov 1 00:41:21.974141 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO Initializing bookkeeping folders Nov 1 00:41:21.974282 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO removing the completed state files Nov 1 00:41:21.974383 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO Initializing bookkeeping folders for long running plugins Nov 1 00:41:21.974463 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Nov 1 00:41:21.974535 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO Initializing healthcheck folders for long running plugins Nov 1 00:41:21.974625 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO Initializing locations for inventory plugin Nov 1 00:41:21.974711 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO Initializing default location for custom inventory Nov 1 00:41:21.974782 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO Initializing default location for file inventory Nov 1 00:41:21.974867 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO Initializing default location for role inventory Nov 1 00:41:21.974948 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO Init the cloudwatchlogs publisher Nov 1 00:41:21.975034 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [instanceID=i-0652e7a6471389332] Successfully loaded platform independent plugin aws:updateSsmAgent Nov 1 00:41:21.975109 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [instanceID=i-0652e7a6471389332] Successfully loaded platform independent plugin aws:configureDocker Nov 1 00:41:21.975194 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [instanceID=i-0652e7a6471389332] Successfully loaded platform independent plugin aws:refreshAssociation Nov 1 00:41:21.975282 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [instanceID=i-0652e7a6471389332] Successfully loaded platform independent plugin aws:downloadContent Nov 1 00:41:21.975372 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [instanceID=i-0652e7a6471389332] Successfully loaded platform independent plugin aws:softwareInventory Nov 1 00:41:21.975460 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [instanceID=i-0652e7a6471389332] Successfully loaded platform independent plugin aws:runPowerShellScript Nov 1 00:41:21.975531 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [instanceID=i-0652e7a6471389332] Successfully loaded platform independent plugin aws:runDocument Nov 1 00:41:21.975610 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [instanceID=i-0652e7a6471389332] Successfully loaded platform independent plugin aws:runDockerAction Nov 1 00:41:21.975693 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [instanceID=i-0652e7a6471389332] Successfully loaded platform independent plugin aws:configurePackage Nov 1 00:41:21.975777 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [instanceID=i-0652e7a6471389332] Successfully loaded platform dependent plugin aws:runShellScript Nov 1 00:41:21.975852 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Nov 1 00:41:21.975942 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO OS: linux, Arch: amd64 Nov 1 00:41:21.977016 amazon-ssm-agent[1812]: datastore file /var/lib/amazon/ssm/i-0652e7a6471389332/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Nov 1 
00:41:22.073853 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessagingDeliveryService] Starting document processing engine... Nov 1 00:41:22.167875 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessagingDeliveryService] [EngineProcessor] Starting Nov 1 00:41:22.262208 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Nov 1 00:41:22.271688 systemd[1]: Created slice system-sshd.slice. Nov 1 00:41:22.344117 tar[1732]: linux-amd64/README.md Nov 1 00:41:22.351463 systemd[1]: Finished prepare-helm.service. Nov 1 00:41:22.357474 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessagingDeliveryService] Starting message polling Nov 1 00:41:22.451494 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessagingDeliveryService] Starting send replies to MDS Nov 1 00:41:22.546396 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [instanceID=i-0652e7a6471389332] Starting association polling Nov 1 00:41:22.556809 locksmithd[1773]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:41:22.641645 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Nov 1 00:41:22.738138 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessagingDeliveryService] [Association] Launching response handler Nov 1 00:41:22.833608 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Nov 1 00:41:22.929412 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Nov 1 00:41:23.025283 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Nov 1 00:41:23.056640 sshd_keygen[1744]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:41:23.087532 systemd[1]: Finished sshd-keygen.service. Nov 1 00:41:23.090321 systemd[1]: Starting issuegen.service... Nov 1 00:41:23.092961 systemd[1]: Started sshd@0-172.31.16.189:22-147.75.109.163:57352.service. Nov 1 00:41:23.099092 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:41:23.099323 systemd[1]: Finished issuegen.service. Nov 1 00:41:23.101993 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:41:23.111597 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:41:23.114210 systemd[1]: Started getty@tty1.service. Nov 1 00:41:23.117846 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 00:41:23.119285 systemd[1]: Reached target getty.target. Nov 1 00:41:23.121723 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessageGatewayService] Starting session document processing engine... Nov 1 00:41:23.218146 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessageGatewayService] [EngineProcessor] Starting Nov 1 00:41:23.314529 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Nov 1 00:41:23.359130 systemd[1]: Started kubelet.service. Nov 1 00:41:23.360185 systemd[1]: Reached target multi-user.target. Nov 1 00:41:23.362086 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 00:41:23.370842 sshd[1914]: Accepted publickey for core from 147.75.109.163 port 57352 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:41:23.371859 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Nov 1 00:41:23.372016 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:41:23.372685 systemd[1]: Startup finished in 609ms (kernel) + 6.093s (initrd) + 8.517s (userspace) = 15.221s. Nov 1 00:41:23.376691 sshd[1914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:23.393693 systemd[1]: Created slice user-500.slice. Nov 1 00:41:23.394829 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:41:23.398138 systemd-logind[1720]: New session 1 of user core. Nov 1 00:41:23.407967 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 00:41:23.409709 systemd[1]: Starting user@500.service... Nov 1 00:41:23.412104 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0652e7a6471389332, requestId: ff5048e4-380d-442c-bf7e-98e73654aa68 Nov 1 00:41:23.414333 (systemd)[1926]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:23.509427 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [OfflineService] Starting document processing engine... Nov 1 00:41:23.541762 systemd[1926]: Queued start job for default target default.target. Nov 1 00:41:23.542360 systemd[1926]: Reached target paths.target. Nov 1 00:41:23.542500 systemd[1926]: Reached target sockets.target. Nov 1 00:41:23.542648 systemd[1926]: Reached target timers.target. Nov 1 00:41:23.542723 systemd[1926]: Reached target basic.target. Nov 1 00:41:23.543516 systemd[1]: Started user@500.service. Nov 1 00:41:23.544543 systemd[1]: Started session-1.scope. Nov 1 00:41:23.545163 systemd[1926]: Reached target default.target. Nov 1 00:41:23.545413 systemd[1926]: Startup finished in 121ms. Nov 1 00:41:23.606454 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [OfflineService] [EngineProcessor] Starting Nov 1 00:41:23.686839 systemd[1]: Started sshd@1-172.31.16.189:22-147.75.109.163:57368.service. Nov 1 00:41:23.704128 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [OfflineService] [EngineProcessor] Initial processing Nov 1 00:41:23.801672 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [OfflineService] Starting message polling Nov 1 00:41:23.851364 sshd[1940]: Accepted publickey for core from 147.75.109.163 port 57368 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:41:23.852999 sshd[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:23.858795 systemd-logind[1720]: New session 2 of user core. Nov 1 00:41:23.859298 systemd[1]: Started session-2.scope. Nov 1 00:41:23.899292 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [OfflineService] Starting send replies to MDS Nov 1 00:41:23.989855 sshd[1940]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:23.992788 systemd[1]: sshd@1-172.31.16.189:22-147.75.109.163:57368.service: Deactivated successfully. Nov 1 00:41:23.993524 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:41:23.994088 systemd-logind[1720]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:41:23.995018 systemd-logind[1720]: Removed session 2. Nov 1 00:41:23.997101 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [LongRunningPluginsManager] starting long running plugin manager Nov 1 00:41:24.015345 systemd[1]: Started sshd@2-172.31.16.189:22-147.75.109.163:57382.service. 
Nov 1 00:41:24.095241 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Nov 1 00:41:24.178284 sshd[1946]: Accepted publickey for core from 147.75.109.163 port 57382 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:41:24.178722 sshd[1946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:24.185259 systemd[1]: Started session-3.scope. Nov 1 00:41:24.186308 systemd-logind[1720]: New session 3 of user core. Nov 1 00:41:24.193436 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [HealthCheck] HealthCheck reporting agent health. Nov 1 00:41:24.201835 kubelet[1923]: E1101 00:41:24.201802 1923 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:41:24.203820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:41:24.203954 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:41:24.204184 systemd[1]: kubelet.service: Consumed 1.173s CPU time. Nov 1 00:41:24.291902 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Nov 1 00:41:24.311051 sshd[1946]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:24.314076 systemd[1]: sshd@2-172.31.16.189:22-147.75.109.163:57382.service: Deactivated successfully. Nov 1 00:41:24.314958 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:41:24.315635 systemd-logind[1720]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:41:24.316550 systemd-logind[1720]: Removed session 3. Nov 1 00:41:24.337598 systemd[1]: Started sshd@3-172.31.16.189:22-147.75.109.163:57392.service. Nov 1 00:41:24.390645 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [MessageGatewayService] listening reply. Nov 1 00:41:24.489759 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [StartupProcessor] Executing startup processor tasks Nov 1 00:41:24.502446 sshd[1952]: Accepted publickey for core from 147.75.109.163 port 57392 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:41:24.503814 sshd[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:24.512954 systemd-logind[1720]: New session 4 of user core. Nov 1 00:41:24.513596 systemd[1]: Started session-4.scope. Nov 1 00:41:24.589753 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Nov 1 00:41:24.644512 sshd[1952]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:24.647369 systemd[1]: sshd@3-172.31.16.189:22-147.75.109.163:57392.service: Deactivated successfully. Nov 1 00:41:24.648056 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:41:24.648531 systemd-logind[1720]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:41:24.649472 systemd-logind[1720]: Removed session 4. Nov 1 00:41:24.670497 systemd[1]: Started sshd@4-172.31.16.189:22-147.75.109.163:57408.service. 
Nov 1 00:41:24.689010 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Nov 1 00:41:24.788652 amazon-ssm-agent[1812]: 2025-11-01 00:41:21 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8 Nov 1 00:41:24.834818 sshd[1958]: Accepted publickey for core from 147.75.109.163 port 57408 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:41:24.836215 sshd[1958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:24.840832 systemd-logind[1720]: New session 5 of user core. Nov 1 00:41:24.841260 systemd[1]: Started session-5.scope. Nov 1 00:41:24.888780 amazon-ssm-agent[1812]: 2025-11-01 00:41:22 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0652e7a6471389332?role=subscribe&stream=input Nov 1 00:41:24.966298 sudo[1961]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:41:24.967168 sudo[1961]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:41:24.988633 amazon-ssm-agent[1812]: 2025-11-01 00:41:22 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0652e7a6471389332?role=subscribe&stream=input Nov 1 00:41:24.994450 systemd[1]: Starting docker.service... Nov 1 00:41:25.040198 env[1971]: time="2025-11-01T00:41:25.040145507Z" level=info msg="Starting up" Nov 1 00:41:25.043148 env[1971]: time="2025-11-01T00:41:25.043101897Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:41:25.043148 env[1971]: time="2025-11-01T00:41:25.043130086Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:41:25.043432 env[1971]: time="2025-11-01T00:41:25.043160710Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:41:25.043432 env[1971]: time="2025-11-01T00:41:25.043174381Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:41:25.045341 env[1971]: time="2025-11-01T00:41:25.045306235Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:41:25.045341 env[1971]: time="2025-11-01T00:41:25.045329230Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:41:25.045507 env[1971]: time="2025-11-01T00:41:25.045353098Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:41:25.045507 env[1971]: time="2025-11-01T00:41:25.045366748Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:41:25.052342 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3018901763-merged.mount: Deactivated successfully. Nov 1 00:41:25.088679 amazon-ssm-agent[1812]: 2025-11-01 00:41:22 INFO [MessageGatewayService] Starting receiving message from control channel Nov 1 00:41:25.188850 amazon-ssm-agent[1812]: 2025-11-01 00:41:22 INFO [MessageGatewayService] [EngineProcessor] Initial processing Nov 1 00:41:25.257131 env[1971]: time="2025-11-01T00:41:25.256567626Z" level=info msg="Loading containers: start." 
Nov 1 00:41:25.460258 kernel: Initializing XFRM netlink socket Nov 1 00:41:25.511917 env[1971]: time="2025-11-01T00:41:25.511813112Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 00:41:25.512933 (udev-worker)[1982]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:41:25.582706 systemd-networkd[1453]: docker0: Link UP Nov 1 00:41:25.598985 env[1971]: time="2025-11-01T00:41:25.598941875Z" level=info msg="Loading containers: done." Nov 1 00:41:25.611825 env[1971]: time="2025-11-01T00:41:25.611774543Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:41:25.612105 env[1971]: time="2025-11-01T00:41:25.611965359Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:41:25.612225 env[1971]: time="2025-11-01T00:41:25.612185823Z" level=info msg="Daemon has completed initialization" Nov 1 00:41:25.629400 systemd[1]: Started docker.service. Nov 1 00:41:25.635994 env[1971]: time="2025-11-01T00:41:25.635942837Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:41:26.717371 env[1734]: time="2025-11-01T00:41:26.717248740Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 1 00:41:27.298344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291216324.mount: Deactivated successfully. Nov 1 00:41:29.026630 env[1734]: time="2025-11-01T00:41:29.026505074Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:29.031608 env[1734]: time="2025-11-01T00:41:29.031560008Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:29.033783 env[1734]: time="2025-11-01T00:41:29.033753140Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:29.035796 env[1734]: time="2025-11-01T00:41:29.035761376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:29.036745 env[1734]: time="2025-11-01T00:41:29.036713606Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 1 00:41:29.037462 env[1734]: time="2025-11-01T00:41:29.037438613Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 1 00:41:30.809506 env[1734]: time="2025-11-01T00:41:30.809453015Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:30.811733 env[1734]: time="2025-11-01T00:41:30.811689138Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Nov 1 00:41:30.813651 env[1734]: time="2025-11-01T00:41:30.813616599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:30.815524 env[1734]: time="2025-11-01T00:41:30.815492596Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:30.816177 env[1734]: time="2025-11-01T00:41:30.816139895Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 1 00:41:30.816636 env[1734]: time="2025-11-01T00:41:30.816612936Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 1 00:41:32.358117 env[1734]: time="2025-11-01T00:41:32.358047735Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:32.360308 env[1734]: time="2025-11-01T00:41:32.360267552Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:32.362497 env[1734]: time="2025-11-01T00:41:32.362460645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:32.364242 env[1734]: time="2025-11-01T00:41:32.364201618Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:32.364985 env[1734]: time="2025-11-01T00:41:32.364945030Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 1 00:41:32.365508 env[1734]: time="2025-11-01T00:41:32.365487335Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 1 00:41:33.411321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1885137167.mount: Deactivated successfully. 
Nov 1 00:41:34.131956 env[1734]: time="2025-11-01T00:41:34.131895322Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:34.134000 env[1734]: time="2025-11-01T00:41:34.133963302Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:34.135800 env[1734]: time="2025-11-01T00:41:34.135768909Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:34.137319 env[1734]: time="2025-11-01T00:41:34.137294764Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:34.137611 env[1734]: time="2025-11-01T00:41:34.137579257Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 1 00:41:34.138103 env[1734]: time="2025-11-01T00:41:34.138078635Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 1 00:41:34.454806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:41:34.454986 systemd[1]: Stopped kubelet.service. Nov 1 00:41:34.455030 systemd[1]: kubelet.service: Consumed 1.173s CPU time. Nov 1 00:41:34.456507 systemd[1]: Starting kubelet.service... Nov 1 00:41:34.674464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752051411.mount: Deactivated successfully. Nov 1 00:41:34.754446 systemd[1]: Started kubelet.service. Nov 1 00:41:34.839800 kubelet[2100]: E1101 00:41:34.839760 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:41:34.845427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:41:34.845604 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 00:41:35.971601 env[1734]: time="2025-11-01T00:41:35.971547073Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:35.973830 env[1734]: time="2025-11-01T00:41:35.973786567Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:35.975866 env[1734]: time="2025-11-01T00:41:35.975837086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:35.977665 env[1734]: time="2025-11-01T00:41:35.977628675Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:35.978600 env[1734]: time="2025-11-01T00:41:35.978559977Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 1 00:41:35.979315 env[1734]: time="2025-11-01T00:41:35.979286949Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:41:36.390655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1498445859.mount: Deactivated successfully. Nov 1 00:41:36.400869 env[1734]: time="2025-11-01T00:41:36.400818937Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:36.404398 env[1734]: time="2025-11-01T00:41:36.404354780Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:36.407348 env[1734]: time="2025-11-01T00:41:36.407308841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:36.409959 env[1734]: time="2025-11-01T00:41:36.409924486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:36.410432 env[1734]: time="2025-11-01T00:41:36.410393941Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:41:36.410976 env[1734]: time="2025-11-01T00:41:36.410953772Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 1 00:41:36.907671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1746264766.mount: Deactivated successfully. 
Nov 1 00:41:39.493127 env[1734]: time="2025-11-01T00:41:39.493068941Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:39.495540 env[1734]: time="2025-11-01T00:41:39.495499509Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:39.497503 env[1734]: time="2025-11-01T00:41:39.497471234Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:39.499585 env[1734]: time="2025-11-01T00:41:39.499552654Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:39.500421 env[1734]: time="2025-11-01T00:41:39.500378527Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 1 00:41:42.824869 systemd[1]: Stopped kubelet.service. Nov 1 00:41:42.827751 systemd[1]: Starting kubelet.service... Nov 1 00:41:42.866840 systemd[1]: Reloading. Nov 1 00:41:42.974911 /usr/lib/systemd/system-generators/torcx-generator[2154]: time="2025-11-01T00:41:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:41:42.974952 /usr/lib/systemd/system-generators/torcx-generator[2154]: time="2025-11-01T00:41:42Z" level=info msg="torcx already run" Nov 1 00:41:43.108698 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:41:43.108725 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:41:43.134993 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:41:43.265768 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:41:43.265843 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:41:43.266062 systemd[1]: Stopped kubelet.service. Nov 1 00:41:43.268163 systemd[1]: Starting kubelet.service... Nov 1 00:41:43.742650 systemd[1]: Started kubelet.service. Nov 1 00:41:43.802780 kubelet[2211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:41:43.803115 kubelet[2211]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:41:43.803252 kubelet[2211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:41:43.803394 kubelet[2211]: I1101 00:41:43.803372 2211 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:41:44.051410 kubelet[2211]: I1101 00:41:44.051298 2211 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 00:41:44.051410 kubelet[2211]: I1101 00:41:44.051327 2211 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:41:44.051934 kubelet[2211]: I1101 00:41:44.051904 2211 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:41:44.086214 kubelet[2211]: I1101 00:41:44.086184 2211 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:41:44.086693 kubelet[2211]: E1101 00:41:44.086668 2211 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.189:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:41:44.096028 kubelet[2211]: E1101 00:41:44.095978 2211 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:41:44.096028 kubelet[2211]: I1101 00:41:44.096020 2211 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:41:44.099455 kubelet[2211]: I1101 00:41:44.099430 2211 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:41:44.099669 kubelet[2211]: I1101 00:41:44.099639 2211 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:41:44.099829 kubelet[2211]: I1101 00:41:44.099667 2211 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-189","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:41:44.099942 kubelet[2211]: I1101 00:41:44.099831 2211 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:41:44.099942 kubelet[2211]: I1101 00:41:44.099841 2211 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 00:41:44.099999 kubelet[2211]: I1101 00:41:44.099951 2211 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:41:44.109590 kubelet[2211]: I1101 00:41:44.109549 2211 kubelet.go:480] "Attempting to sync node with API server" Nov 1 00:41:44.109590 kubelet[2211]: I1101 00:41:44.109594 2211 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:41:44.112376 kubelet[2211]: I1101 00:41:44.112346 2211 kubelet.go:386] "Adding apiserver pod source" Nov 1 00:41:44.115189 kubelet[2211]: I1101 00:41:44.115161 2211 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:41:44.136626 kubelet[2211]: I1101 00:41:44.136599 2211 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:41:44.137255 kubelet[2211]: I1101 00:41:44.137223 2211 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:41:44.143648 kubelet[2211]: W1101 00:41:44.143612 2211 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
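The nodeConfig blob logged above carries the kubelet's hard-eviction thresholds as embedded JSON: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A short Python sketch that reads the same structure back out, with the values copied unchanged from the log line (GracePeriod/MinReclaim fields omitted for brevity):

    import json

    # HardEvictionThresholds as they appear in the nodeConfig= entry above.
    node_config = json.loads("""
    {"HardEvictionThresholds":[
      {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
      {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
      {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
      {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
      {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}
    ]}
    """)

    for t in node_config["HardEvictionThresholds"]:
        v = t["Value"]
        limit = v["Quantity"] if v["Quantity"] is not None else f"{v['Percentage']:.0%}"
        print(f"{t['Signal']:<22} {t['Operator']}  {limit}")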
Nov 1 00:41:44.148805 kubelet[2211]: E1101 00:41:44.148775 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.189:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-189&limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:41:44.149719 kubelet[2211]: I1101 00:41:44.149707 2211 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:41:44.149850 kubelet[2211]: I1101 00:41:44.149843 2211 server.go:1289] "Started kubelet" Nov 1 00:41:44.150445 kubelet[2211]: E1101 00:41:44.150401 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.189:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:41:44.150545 kubelet[2211]: I1101 00:41:44.150499 2211 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:41:44.161344 kubelet[2211]: I1101 00:41:44.161280 2211 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:41:44.163065 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Nov 1 00:41:44.163198 kubelet[2211]: I1101 00:41:44.163180 2211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:41:44.172628 kubelet[2211]: I1101 00:41:44.172591 2211 server.go:317] "Adding debug handlers to kubelet server" Nov 1 00:41:44.173743 kubelet[2211]: I1101 00:41:44.173714 2211 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:41:44.177872 kubelet[2211]: E1101 00:41:44.173103 2211 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.189:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.189:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-189.1873bb32591ccb41 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-189,UID:ip-172-31-16-189,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-189,},FirstTimestamp:2025-11-01 00:41:44.149814081 +0000 UTC m=+0.400609757,LastTimestamp:2025-11-01 00:41:44.149814081 +0000 UTC m=+0.400609757,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-189,}" Nov 1 00:41:44.179278 kubelet[2211]: E1101 00:41:44.179253 2211 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-189\" not found" Nov 1 00:41:44.179422 kubelet[2211]: I1101 00:41:44.163183 2211 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:41:44.179626 kubelet[2211]: I1101 00:41:44.179615 2211 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:41:44.180043 kubelet[2211]: I1101 00:41:44.180025 2211 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:41:44.180723 kubelet[2211]: I1101 00:41:44.180641 2211 reconciler.go:26] "Reconciler: start to 
sync state" Nov 1 00:41:44.182205 kubelet[2211]: E1101 00:41:44.182178 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:41:44.183740 kubelet[2211]: E1101 00:41:44.183711 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-189?timeout=10s\": dial tcp 172.31.16.189:6443: connect: connection refused" interval="200ms" Nov 1 00:41:44.184321 kubelet[2211]: I1101 00:41:44.184305 2211 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:41:44.184848 kubelet[2211]: I1101 00:41:44.184828 2211 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:41:44.188135 kubelet[2211]: I1101 00:41:44.188108 2211 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:41:44.206947 kubelet[2211]: I1101 00:41:44.206897 2211 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 00:41:44.208152 kubelet[2211]: I1101 00:41:44.208120 2211 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 1 00:41:44.208152 kubelet[2211]: I1101 00:41:44.208145 2211 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 00:41:44.208358 kubelet[2211]: I1101 00:41:44.208179 2211 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:41:44.208358 kubelet[2211]: I1101 00:41:44.208188 2211 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 00:41:44.208446 kubelet[2211]: E1101 00:41:44.208402 2211 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:41:44.217333 kubelet[2211]: E1101 00:41:44.217293 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:41:44.217793 kubelet[2211]: E1101 00:41:44.217570 2211 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:41:44.227300 kubelet[2211]: I1101 00:41:44.227278 2211 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:41:44.227300 kubelet[2211]: I1101 00:41:44.227299 2211 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:41:44.227513 kubelet[2211]: I1101 00:41:44.227317 2211 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:41:44.235446 kubelet[2211]: I1101 00:41:44.235415 2211 policy_none.go:49] "None policy: Start" Nov 1 00:41:44.235446 kubelet[2211]: I1101 00:41:44.235441 2211 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:41:44.235446 kubelet[2211]: I1101 00:41:44.235454 2211 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:41:44.243980 systemd[1]: Created slice kubepods.slice. Nov 1 00:41:44.250736 systemd[1]: Created slice kubepods-besteffort.slice. Nov 1 00:41:44.260087 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 00:41:44.262388 kubelet[2211]: E1101 00:41:44.262359 2211 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:41:44.262589 kubelet[2211]: I1101 00:41:44.262569 2211 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:41:44.262672 kubelet[2211]: I1101 00:41:44.262587 2211 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:41:44.265569 kubelet[2211]: I1101 00:41:44.263639 2211 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:41:44.266280 kubelet[2211]: E1101 00:41:44.266258 2211 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:41:44.266386 kubelet[2211]: E1101 00:41:44.266310 2211 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-189\" not found" Nov 1 00:41:44.322848 systemd[1]: Created slice kubepods-burstable-pod159bf9ea4870adc4a6e0733ebe9da33e.slice. Nov 1 00:41:44.327997 kubelet[2211]: E1101 00:41:44.327955 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-189\" not found" node="ip-172-31-16-189" Nov 1 00:41:44.334617 systemd[1]: Created slice kubepods-burstable-pod5ce9233cacca3d56a15e5509a8b1033f.slice. Nov 1 00:41:44.341489 kubelet[2211]: E1101 00:41:44.341459 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-189\" not found" node="ip-172-31-16-189" Nov 1 00:41:44.351043 systemd[1]: Created slice kubepods-burstable-podb88057875b4f2fc249a89fb0205c9bcd.slice. 
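The slice names systemd creates here follow a fixed pattern: a QoS parent (kubepods.slice, kubepods-besteffort.slice, kubepods-burstable.slice) and one child per pod named pod<UID>, e.g. kubepods-burstable-pod159bf9ea4870adc4a6e0733ebe9da33e.slice for the kube-apiserver static pod. A tiny Python helper expressing just the pattern visible in these entries (the function is illustrative, not kubelet code):

    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        """Per-pod systemd slice name in the form seen above:
        kubepods[-<qos>]-pod<uid>.slice (UIDs here are plain hex, nothing to escape).
        Guaranteed pods sit directly under kubepods.slice."""
        parent = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
        return f"{parent}-pod{pod_uid}.slice"

    # The three static control-plane pod UIDs from the log:
    for uid in ("159bf9ea4870adc4a6e0733ebe9da33e",   # kube-apiserver
                "5ce9233cacca3d56a15e5509a8b1033f",   # kube-controller-manager
                "b88057875b4f2fc249a89fb0205c9bcd"):  # kube-scheduler
        print(pod_slice_name("burstable", uid))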
Nov 1 00:41:44.353310 kubelet[2211]: E1101 00:41:44.353282 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-189\" not found" node="ip-172-31-16-189" Nov 1 00:41:44.365451 kubelet[2211]: I1101 00:41:44.365426 2211 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-189" Nov 1 00:41:44.365758 kubelet[2211]: E1101 00:41:44.365733 2211 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.189:6443/api/v1/nodes\": dial tcp 172.31.16.189:6443: connect: connection refused" node="ip-172-31-16-189" Nov 1 00:41:44.382140 kubelet[2211]: I1101 00:41:44.382089 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ce9233cacca3d56a15e5509a8b1033f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"5ce9233cacca3d56a15e5509a8b1033f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Nov 1 00:41:44.382140 kubelet[2211]: I1101 00:41:44.382131 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ce9233cacca3d56a15e5509a8b1033f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"5ce9233cacca3d56a15e5509a8b1033f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Nov 1 00:41:44.382351 kubelet[2211]: I1101 00:41:44.382155 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ce9233cacca3d56a15e5509a8b1033f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"5ce9233cacca3d56a15e5509a8b1033f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Nov 1 00:41:44.382351 kubelet[2211]: I1101 00:41:44.382178 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b88057875b4f2fc249a89fb0205c9bcd-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-189\" (UID: \"b88057875b4f2fc249a89fb0205c9bcd\") " pod="kube-system/kube-scheduler-ip-172-31-16-189" Nov 1 00:41:44.382351 kubelet[2211]: I1101 00:41:44.382193 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/159bf9ea4870adc4a6e0733ebe9da33e-ca-certs\") pod \"kube-apiserver-ip-172-31-16-189\" (UID: \"159bf9ea4870adc4a6e0733ebe9da33e\") " pod="kube-system/kube-apiserver-ip-172-31-16-189" Nov 1 00:41:44.382351 kubelet[2211]: I1101 00:41:44.382208 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/159bf9ea4870adc4a6e0733ebe9da33e-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-189\" (UID: \"159bf9ea4870adc4a6e0733ebe9da33e\") " pod="kube-system/kube-apiserver-ip-172-31-16-189" Nov 1 00:41:44.382351 kubelet[2211]: I1101 00:41:44.382236 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/159bf9ea4870adc4a6e0733ebe9da33e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-189\" (UID: \"159bf9ea4870adc4a6e0733ebe9da33e\") " pod="kube-system/kube-apiserver-ip-172-31-16-189" Nov 1 00:41:44.382501 
kubelet[2211]: I1101 00:41:44.382252 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ce9233cacca3d56a15e5509a8b1033f-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"5ce9233cacca3d56a15e5509a8b1033f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Nov 1 00:41:44.382501 kubelet[2211]: I1101 00:41:44.382277 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ce9233cacca3d56a15e5509a8b1033f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"5ce9233cacca3d56a15e5509a8b1033f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Nov 1 00:41:44.384552 kubelet[2211]: E1101 00:41:44.384496 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-189?timeout=10s\": dial tcp 172.31.16.189:6443: connect: connection refused" interval="400ms" Nov 1 00:41:44.499900 kubelet[2211]: E1101 00:41:44.499787 2211 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.189:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.189:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-189.1873bb32591ccb41 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-189,UID:ip-172-31-16-189,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-189,},FirstTimestamp:2025-11-01 00:41:44.149814081 +0000 UTC m=+0.400609757,LastTimestamp:2025-11-01 00:41:44.149814081 +0000 UTC m=+0.400609757,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-189,}" Nov 1 00:41:44.567372 kubelet[2211]: I1101 00:41:44.567344 2211 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-189" Nov 1 00:41:44.567702 kubelet[2211]: E1101 00:41:44.567666 2211 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.189:6443/api/v1/nodes\": dial tcp 172.31.16.189:6443: connect: connection refused" node="ip-172-31-16-189" Nov 1 00:41:44.629787 env[1734]: time="2025-11-01T00:41:44.629374752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-189,Uid:159bf9ea4870adc4a6e0733ebe9da33e,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:44.643184 env[1734]: time="2025-11-01T00:41:44.643138501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-189,Uid:5ce9233cacca3d56a15e5509a8b1033f,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:44.655121 env[1734]: time="2025-11-01T00:41:44.655079380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-189,Uid:b88057875b4f2fc249a89fb0205c9bcd,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:44.785961 kubelet[2211]: E1101 00:41:44.785917 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-189?timeout=10s\": dial tcp 172.31.16.189:6443: connect: connection refused" interval="800ms" Nov 1 00:41:44.971106 
kubelet[2211]: I1101 00:41:44.969957 2211 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-189" Nov 1 00:41:44.971106 kubelet[2211]: E1101 00:41:44.970448 2211 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.189:6443/api/v1/nodes\": dial tcp 172.31.16.189:6443: connect: connection refused" node="ip-172-31-16-189" Nov 1 00:41:45.028790 kubelet[2211]: E1101 00:41:45.028743 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.189:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-189&limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:41:45.079023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount951908205.mount: Deactivated successfully. Nov 1 00:41:45.097077 env[1734]: time="2025-11-01T00:41:45.097011808Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:45.099074 env[1734]: time="2025-11-01T00:41:45.099022827Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:45.105131 env[1734]: time="2025-11-01T00:41:45.105092697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:45.107929 env[1734]: time="2025-11-01T00:41:45.107875535Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:45.109552 env[1734]: time="2025-11-01T00:41:45.109517737Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:45.111909 env[1734]: time="2025-11-01T00:41:45.111867776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:45.115849 env[1734]: time="2025-11-01T00:41:45.115812512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:45.117982 env[1734]: time="2025-11-01T00:41:45.117946880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:45.121817 env[1734]: time="2025-11-01T00:41:45.121781689Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:45.123880 env[1734]: time="2025-11-01T00:41:45.123842481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:45.125792 env[1734]: time="2025-11-01T00:41:45.125759170Z" level=info msg="ImageUpdate 
event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:45.127939 env[1734]: time="2025-11-01T00:41:45.127906362Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:45.163788 env[1734]: time="2025-11-01T00:41:45.163713020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:45.163975 env[1734]: time="2025-11-01T00:41:45.163951264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:45.164058 env[1734]: time="2025-11-01T00:41:45.164040916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:45.164293 env[1734]: time="2025-11-01T00:41:45.164265316Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a06f4723c8f0437a4937c5a19bb0e4c01944142c2c032495fb9f29df414c734c pid=2254 runtime=io.containerd.runc.v2 Nov 1 00:41:45.198010 systemd[1]: Started cri-containerd-a06f4723c8f0437a4937c5a19bb0e4c01944142c2c032495fb9f29df414c734c.scope. Nov 1 00:41:45.215053 env[1734]: time="2025-11-01T00:41:45.214977062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:45.215352 env[1734]: time="2025-11-01T00:41:45.215309061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:45.215450 env[1734]: time="2025-11-01T00:41:45.215333193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:45.215597 env[1734]: time="2025-11-01T00:41:45.215567571Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c076f773eae25fd6f4d7db8704eb59ad84f224292b6a712b1e376458bccd517 pid=2282 runtime=io.containerd.runc.v2 Nov 1 00:41:45.230032 env[1734]: time="2025-11-01T00:41:45.228854502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:45.230312 env[1734]: time="2025-11-01T00:41:45.230273693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:45.230464 env[1734]: time="2025-11-01T00:41:45.230431021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:45.230720 env[1734]: time="2025-11-01T00:41:45.230692728Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68a8d9b3aee43a29f72192599278310c4012a9ae3be38d054a58ad7e8bceb20c pid=2298 runtime=io.containerd.runc.v2 Nov 1 00:41:45.250392 systemd[1]: Started cri-containerd-68a8d9b3aee43a29f72192599278310c4012a9ae3be38d054a58ad7e8bceb20c.scope. 
Nov 1 00:41:45.264080 systemd[1]: Started cri-containerd-4c076f773eae25fd6f4d7db8704eb59ad84f224292b6a712b1e376458bccd517.scope. Nov 1 00:41:45.310830 env[1734]: time="2025-11-01T00:41:45.310776928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-189,Uid:159bf9ea4870adc4a6e0733ebe9da33e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a06f4723c8f0437a4937c5a19bb0e4c01944142c2c032495fb9f29df414c734c\"" Nov 1 00:41:45.321011 env[1734]: time="2025-11-01T00:41:45.320964239Z" level=info msg="CreateContainer within sandbox \"a06f4723c8f0437a4937c5a19bb0e4c01944142c2c032495fb9f29df414c734c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:41:45.356034 env[1734]: time="2025-11-01T00:41:45.355992659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-189,Uid:5ce9233cacca3d56a15e5509a8b1033f,Namespace:kube-system,Attempt:0,} returns sandbox id \"68a8d9b3aee43a29f72192599278310c4012a9ae3be38d054a58ad7e8bceb20c\"" Nov 1 00:41:45.356505 env[1734]: time="2025-11-01T00:41:45.356471056Z" level=info msg="CreateContainer within sandbox \"a06f4723c8f0437a4937c5a19bb0e4c01944142c2c032495fb9f29df414c734c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7cc8490434d92e21c61b2f551f3fa6c8958d008e1fe4b78a70d9ff0656f8cbb3\"" Nov 1 00:41:45.359655 env[1734]: time="2025-11-01T00:41:45.359620452Z" level=info msg="StartContainer for \"7cc8490434d92e21c61b2f551f3fa6c8958d008e1fe4b78a70d9ff0656f8cbb3\"" Nov 1 00:41:45.363897 env[1734]: time="2025-11-01T00:41:45.363860937Z" level=info msg="CreateContainer within sandbox \"68a8d9b3aee43a29f72192599278310c4012a9ae3be38d054a58ad7e8bceb20c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:41:45.379632 env[1734]: time="2025-11-01T00:41:45.379584552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-189,Uid:b88057875b4f2fc249a89fb0205c9bcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c076f773eae25fd6f4d7db8704eb59ad84f224292b6a712b1e376458bccd517\"" Nov 1 00:41:45.390005 env[1734]: time="2025-11-01T00:41:45.389957856Z" level=info msg="CreateContainer within sandbox \"4c076f773eae25fd6f4d7db8704eb59ad84f224292b6a712b1e376458bccd517\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:41:45.398420 env[1734]: time="2025-11-01T00:41:45.398368130Z" level=info msg="CreateContainer within sandbox \"68a8d9b3aee43a29f72192599278310c4012a9ae3be38d054a58ad7e8bceb20c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"78fd9f705e564b13e134a58cedd23c13f140dfd5ae0c934cdba1206fcc4e8bbc\"" Nov 1 00:41:45.399566 env[1734]: time="2025-11-01T00:41:45.399528786Z" level=info msg="StartContainer for \"78fd9f705e564b13e134a58cedd23c13f140dfd5ae0c934cdba1206fcc4e8bbc\"" Nov 1 00:41:45.403910 systemd[1]: Started cri-containerd-7cc8490434d92e21c61b2f551f3fa6c8958d008e1fe4b78a70d9ff0656f8cbb3.scope. 
Nov 1 00:41:45.424465 env[1734]: time="2025-11-01T00:41:45.424406714Z" level=info msg="CreateContainer within sandbox \"4c076f773eae25fd6f4d7db8704eb59ad84f224292b6a712b1e376458bccd517\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3638bf90bcf0ceab0a672ebc58249fced35417385e705237472ed807e4f13fad\"" Nov 1 00:41:45.425206 env[1734]: time="2025-11-01T00:41:45.425171976Z" level=info msg="StartContainer for \"3638bf90bcf0ceab0a672ebc58249fced35417385e705237472ed807e4f13fad\"" Nov 1 00:41:45.444966 systemd[1]: Started cri-containerd-78fd9f705e564b13e134a58cedd23c13f140dfd5ae0c934cdba1206fcc4e8bbc.scope. Nov 1 00:41:45.491330 systemd[1]: Started cri-containerd-3638bf90bcf0ceab0a672ebc58249fced35417385e705237472ed807e4f13fad.scope. Nov 1 00:41:45.498252 env[1734]: time="2025-11-01T00:41:45.498179788Z" level=info msg="StartContainer for \"7cc8490434d92e21c61b2f551f3fa6c8958d008e1fe4b78a70d9ff0656f8cbb3\" returns successfully" Nov 1 00:41:45.541312 env[1734]: time="2025-11-01T00:41:45.541265224Z" level=info msg="StartContainer for \"78fd9f705e564b13e134a58cedd23c13f140dfd5ae0c934cdba1206fcc4e8bbc\" returns successfully" Nov 1 00:41:45.565450 env[1734]: time="2025-11-01T00:41:45.565407546Z" level=info msg="StartContainer for \"3638bf90bcf0ceab0a672ebc58249fced35417385e705237472ed807e4f13fad\" returns successfully" Nov 1 00:41:45.587083 kubelet[2211]: E1101 00:41:45.586993 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-189?timeout=10s\": dial tcp 172.31.16.189:6443: connect: connection refused" interval="1.6s" Nov 1 00:41:45.628001 kubelet[2211]: E1101 00:41:45.627945 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:41:45.644794 kubelet[2211]: E1101 00:41:45.644747 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:41:45.690277 kubelet[2211]: E1101 00:41:45.690216 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.189:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:41:45.772504 kubelet[2211]: I1101 00:41:45.772409 2211 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-189" Nov 1 00:41:45.773439 kubelet[2211]: E1101 00:41:45.773035 2211 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.189:6443/api/v1/nodes\": dial tcp 172.31.16.189:6443: connect: connection refused" node="ip-172-31-16-189" Nov 1 00:41:46.096850 systemd[1]: run-containerd-runc-k8s.io-a06f4723c8f0437a4937c5a19bb0e4c01944142c2c032495fb9f29df414c734c-runc.UpVgXQ.mount: Deactivated successfully. 
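Worth noting across these entries: the "Failed to ensure lease exists, will retry" interval doubles on every failure while the API server at 172.31.16.189:6443 still refuses connections: 200ms, then 400ms, 800ms, and now 1.6s. A minimal sketch of that doubling, with the starting value taken from the log and the step count chosen just to reproduce what is shown:

    def backoff_intervals(initial_ms: float = 200.0, factor: float = 2.0, steps: int = 4):
        """Yield the retry intervals observed in the lease-controller errors above."""
        interval = initial_ms
        for _ in range(steps):
            yield interval
            interval *= factor

    print([f"{ms / 1000:g}s" if ms >= 1000 else f"{ms:g}ms" for ms in backoff_intervals()])
    # -> ['200ms', '400ms', '800ms', '1.6s']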
Nov 1 00:41:46.120545 amazon-ssm-agent[1812]: 2025-11-01 00:41:46 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Nov 1 00:41:46.215320 kubelet[2211]: E1101 00:41:46.215284 2211 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.189:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.189:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:41:46.228386 kubelet[2211]: E1101 00:41:46.228357 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-189\" not found" node="ip-172-31-16-189" Nov 1 00:41:46.250872 kubelet[2211]: E1101 00:41:46.250842 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-189\" not found" node="ip-172-31-16-189" Nov 1 00:41:46.251567 kubelet[2211]: E1101 00:41:46.251263 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-189\" not found" node="ip-172-31-16-189" Nov 1 00:41:46.978207 amazon-ssm-agent[1812]: 2025-11-01 00:41:46 INFO [HealthCheck] HealthCheck reporting agent health. Nov 1 00:41:47.235758 kubelet[2211]: E1101 00:41:47.235544 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-189\" not found" node="ip-172-31-16-189" Nov 1 00:41:47.236632 kubelet[2211]: E1101 00:41:47.236517 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-189\" not found" node="ip-172-31-16-189" Nov 1 00:41:47.375900 kubelet[2211]: I1101 00:41:47.375870 2211 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-189" Nov 1 00:41:48.470555 kubelet[2211]: E1101 00:41:48.470520 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-189\" not found" node="ip-172-31-16-189" Nov 1 00:41:49.313963 kubelet[2211]: E1101 00:41:49.313925 2211 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-189\" not found" node="ip-172-31-16-189" Nov 1 00:41:49.358616 kubelet[2211]: I1101 00:41:49.358570 2211 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-189" Nov 1 00:41:49.358616 kubelet[2211]: E1101 00:41:49.358619 2211 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-189\": node \"ip-172-31-16-189\" not found" Nov 1 00:41:49.381293 kubelet[2211]: I1101 00:41:49.381258 2211 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-189" Nov 1 00:41:49.454463 kubelet[2211]: E1101 00:41:49.454427 2211 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-189\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-189" Nov 1 00:41:49.454463 kubelet[2211]: I1101 00:41:49.454458 2211 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-189" Nov 1 00:41:49.457304 kubelet[2211]: E1101 00:41:49.457272 2211 kubelet.go:3311] "Failed creating a mirror 
pod" err="pods \"kube-apiserver-ip-172-31-16-189\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-189" Nov 1 00:41:49.457304 kubelet[2211]: I1101 00:41:49.457301 2211 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-189" Nov 1 00:41:49.459716 kubelet[2211]: E1101 00:41:49.459684 2211 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-189\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-189" Nov 1 00:41:50.156729 kubelet[2211]: I1101 00:41:50.156688 2211 apiserver.go:52] "Watching apiserver" Nov 1 00:41:50.181065 kubelet[2211]: I1101 00:41:50.181026 2211 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:41:51.216243 systemd[1]: Reloading. Nov 1 00:41:51.316164 /usr/lib/systemd/system-generators/torcx-generator[2517]: time="2025-11-01T00:41:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:41:51.316206 /usr/lib/systemd/system-generators/torcx-generator[2517]: time="2025-11-01T00:41:51Z" level=info msg="torcx already run" Nov 1 00:41:51.421194 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:41:51.421218 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:41:51.441636 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:41:51.526312 kubelet[2211]: I1101 00:41:51.526178 2211 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-189" Nov 1 00:41:51.562190 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 1 00:41:51.573364 kubelet[2211]: I1101 00:41:51.573180 2211 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:41:51.573658 systemd[1]: Stopping kubelet.service... Nov 1 00:41:51.599011 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:41:51.599199 systemd[1]: Stopped kubelet.service. Nov 1 00:41:51.601015 systemd[1]: Starting kubelet.service... Nov 1 00:41:52.737867 systemd[1]: Started kubelet.service. Nov 1 00:41:52.811411 kubelet[2580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:41:52.811411 kubelet[2580]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:41:52.811411 kubelet[2580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
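The "no PriorityClass with name system-node-critical was found" rejections a few entries back are transient: the API server populates its default priority classes shortly after it comes up, and the kubelet keeps retrying the mirror pods (the later "already exists" error for the kube-scheduler mirror pod shows a retry eventually went through). A hedged check using the kubernetes Python client, assuming a reachable cluster and kubeconfig; the class names and values quoted in the comment are the upstream defaults, not taken from this log:

    # Assumes the `kubernetes` Python client is installed and a kubeconfig is available.
    from kubernetes import client, config

    config.load_kube_config()
    for pc in client.SchedulingV1Api().list_priority_class().items:
        print(pc.metadata.name, pc.value)
    # Once bootstrapping finishes this should list system-node-critical (2000001000)
    # and system-cluster-critical (2000000000); after that, mirror-pod creation for
    # the static control-plane pods no longer fails.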
Nov 1 00:41:52.811411 kubelet[2580]: I1101 00:41:52.811094 2580 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:41:52.827107 kubelet[2580]: I1101 00:41:52.825957 2580 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 00:41:52.827107 kubelet[2580]: I1101 00:41:52.825980 2580 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:41:52.827107 kubelet[2580]: I1101 00:41:52.826217 2580 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:41:52.829452 kubelet[2580]: I1101 00:41:52.829431 2580 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:41:52.834635 sudo[2593]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 00:41:52.834906 sudo[2593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 00:41:52.837099 kubelet[2580]: I1101 00:41:52.837075 2580 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:41:52.848245 kubelet[2580]: E1101 00:41:52.848202 2580 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:41:52.848403 kubelet[2580]: I1101 00:41:52.848271 2580 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:41:52.859099 kubelet[2580]: I1101 00:41:52.859068 2580 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:41:52.859458 kubelet[2580]: I1101 00:41:52.859433 2580 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:41:52.859678 kubelet[2580]: I1101 00:41:52.859535 2580 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-189","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:41:52.859804 kubelet[2580]: I1101 00:41:52.859794 2580 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:41:52.859852 kubelet[2580]: I1101 00:41:52.859847 2580 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 00:41:52.859929 kubelet[2580]: I1101 00:41:52.859923 2580 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:41:52.860102 kubelet[2580]: I1101 00:41:52.860093 2580 kubelet.go:480] "Attempting to sync node with API server" Nov 1 00:41:52.860165 kubelet[2580]: I1101 00:41:52.860158 2580 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:41:52.860241 kubelet[2580]: I1101 00:41:52.860219 2580 kubelet.go:386] "Adding apiserver pod source" Nov 1 00:41:52.860310 kubelet[2580]: I1101 00:41:52.860303 2580 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:41:52.871941 kubelet[2580]: I1101 00:41:52.871896 2580 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:41:52.873691 kubelet[2580]: I1101 00:41:52.873670 2580 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:41:52.876955 kubelet[2580]: I1101 00:41:52.876940 2580 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:41:52.877124 kubelet[2580]: I1101 00:41:52.877116 2580 server.go:1289] "Started kubelet" Nov 1 00:41:52.883669 kubelet[2580]: I1101 00:41:52.883645 2580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:41:52.892982 kubelet[2580]: I1101 00:41:52.892915 2580 
server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:41:52.893979 kubelet[2580]: I1101 00:41:52.893961 2580 server.go:317] "Adding debug handlers to kubelet server" Nov 1 00:41:52.911315 kubelet[2580]: I1101 00:41:52.911265 2580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:41:52.911663 kubelet[2580]: I1101 00:41:52.911652 2580 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:41:52.913977 kubelet[2580]: I1101 00:41:52.913955 2580 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:41:52.917433 kubelet[2580]: I1101 00:41:52.915451 2580 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:41:52.921312 kubelet[2580]: I1101 00:41:52.921275 2580 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:41:52.922770 kubelet[2580]: I1101 00:41:52.922757 2580 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:41:52.923782 kubelet[2580]: I1101 00:41:52.923409 2580 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 00:41:52.926339 kubelet[2580]: I1101 00:41:52.924424 2580 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 1 00:41:52.926339 kubelet[2580]: I1101 00:41:52.924446 2580 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 00:41:52.926339 kubelet[2580]: I1101 00:41:52.924462 2580 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:41:52.926339 kubelet[2580]: I1101 00:41:52.924471 2580 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 00:41:52.926339 kubelet[2580]: E1101 00:41:52.924509 2580 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:41:52.944497 kubelet[2580]: I1101 00:41:52.943772 2580 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:41:52.944783 kubelet[2580]: I1101 00:41:52.944769 2580 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:41:52.947082 kubelet[2580]: I1101 00:41:52.944928 2580 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:41:52.948400 kubelet[2580]: E1101 00:41:52.945189 2580 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:41:53.028325 kubelet[2580]: E1101 00:41:53.025746 2580 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:41:53.044959 kubelet[2580]: I1101 00:41:53.044882 2580 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:41:53.044959 kubelet[2580]: I1101 00:41:53.044902 2580 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:41:53.044959 kubelet[2580]: I1101 00:41:53.044924 2580 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:41:53.045303 kubelet[2580]: I1101 00:41:53.045087 2580 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:41:53.045303 kubelet[2580]: I1101 00:41:53.045100 2580 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:41:53.045303 kubelet[2580]: I1101 00:41:53.045121 2580 policy_none.go:49] "None policy: Start" Nov 1 00:41:53.045303 kubelet[2580]: I1101 00:41:53.045134 2580 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:41:53.045303 kubelet[2580]: I1101 00:41:53.045147 2580 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:41:53.045303 kubelet[2580]: I1101 00:41:53.045286 2580 state_mem.go:75] "Updated machine memory state" Nov 1 00:41:53.049960 kubelet[2580]: E1101 00:41:53.049934 2580 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:41:53.050194 kubelet[2580]: I1101 00:41:53.050112 2580 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:41:53.050194 kubelet[2580]: I1101 00:41:53.050124 2580 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:41:53.051775 kubelet[2580]: I1101 00:41:53.051752 2580 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:41:53.054588 kubelet[2580]: E1101 00:41:53.054564 2580 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:41:53.166411 kubelet[2580]: I1101 00:41:53.166380 2580 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-189" Nov 1 00:41:53.182361 kubelet[2580]: I1101 00:41:53.182333 2580 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-189" Nov 1 00:41:53.182736 kubelet[2580]: I1101 00:41:53.182723 2580 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-189" Nov 1 00:41:53.226542 kubelet[2580]: I1101 00:41:53.226463 2580 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-189" Nov 1 00:41:53.227367 kubelet[2580]: I1101 00:41:53.227350 2580 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-189" Nov 1 00:41:53.227542 kubelet[2580]: I1101 00:41:53.227360 2580 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-189" Nov 1 00:41:53.234188 kubelet[2580]: E1101 00:41:53.234152 2580 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-189\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-189" Nov 1 00:41:53.337492 kubelet[2580]: I1101 00:41:53.337393 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/159bf9ea4870adc4a6e0733ebe9da33e-ca-certs\") pod \"kube-apiserver-ip-172-31-16-189\" (UID: \"159bf9ea4870adc4a6e0733ebe9da33e\") " pod="kube-system/kube-apiserver-ip-172-31-16-189" Nov 1 00:41:53.337492 kubelet[2580]: I1101 00:41:53.337432 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/159bf9ea4870adc4a6e0733ebe9da33e-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-189\" (UID: \"159bf9ea4870adc4a6e0733ebe9da33e\") " pod="kube-system/kube-apiserver-ip-172-31-16-189" Nov 1 00:41:53.337492 kubelet[2580]: I1101 00:41:53.337450 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ce9233cacca3d56a15e5509a8b1033f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"5ce9233cacca3d56a15e5509a8b1033f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Nov 1 00:41:53.337492 kubelet[2580]: I1101 00:41:53.337469 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ce9233cacca3d56a15e5509a8b1033f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"5ce9233cacca3d56a15e5509a8b1033f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Nov 1 00:41:53.337492 kubelet[2580]: I1101 00:41:53.337486 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b88057875b4f2fc249a89fb0205c9bcd-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-189\" (UID: \"b88057875b4f2fc249a89fb0205c9bcd\") " pod="kube-system/kube-scheduler-ip-172-31-16-189" Nov 1 00:41:53.337801 kubelet[2580]: I1101 00:41:53.337504 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/159bf9ea4870adc4a6e0733ebe9da33e-usr-share-ca-certificates\") pod 
\"kube-apiserver-ip-172-31-16-189\" (UID: \"159bf9ea4870adc4a6e0733ebe9da33e\") " pod="kube-system/kube-apiserver-ip-172-31-16-189" Nov 1 00:41:53.337801 kubelet[2580]: I1101 00:41:53.337521 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ce9233cacca3d56a15e5509a8b1033f-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"5ce9233cacca3d56a15e5509a8b1033f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Nov 1 00:41:53.337801 kubelet[2580]: I1101 00:41:53.337536 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ce9233cacca3d56a15e5509a8b1033f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"5ce9233cacca3d56a15e5509a8b1033f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Nov 1 00:41:53.337801 kubelet[2580]: I1101 00:41:53.337553 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ce9233cacca3d56a15e5509a8b1033f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-189\" (UID: \"5ce9233cacca3d56a15e5509a8b1033f\") " pod="kube-system/kube-controller-manager-ip-172-31-16-189" Nov 1 00:41:53.620341 sudo[2593]: pam_unix(sudo:session): session closed for user root Nov 1 00:41:53.862892 kubelet[2580]: I1101 00:41:53.862805 2580 apiserver.go:52] "Watching apiserver" Nov 1 00:41:53.922821 kubelet[2580]: I1101 00:41:53.922732 2580 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:41:53.933781 kubelet[2580]: I1101 00:41:53.933037 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-189" podStartSLOduration=0.93302014 podStartE2EDuration="933.02014ms" podCreationTimestamp="2025-11-01 00:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:41:53.932569265 +0000 UTC m=+1.181672838" watchObservedRunningTime="2025-11-01 00:41:53.93302014 +0000 UTC m=+1.182123709" Nov 1 00:41:53.955279 kubelet[2580]: I1101 00:41:53.955210 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-189" podStartSLOduration=2.9551785820000003 podStartE2EDuration="2.955178582s" podCreationTimestamp="2025-11-01 00:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:41:53.946278688 +0000 UTC m=+1.195382256" watchObservedRunningTime="2025-11-01 00:41:53.955178582 +0000 UTC m=+1.204282153" Nov 1 00:41:53.968671 kubelet[2580]: I1101 00:41:53.968611 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-189" podStartSLOduration=0.968596892 podStartE2EDuration="968.596892ms" podCreationTimestamp="2025-11-01 00:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:41:53.955620723 +0000 UTC m=+1.204724294" watchObservedRunningTime="2025-11-01 00:41:53.968596892 +0000 UTC m=+1.217700441" Nov 1 00:41:55.347676 sudo[1961]: pam_unix(sudo:session): session closed for user root Nov 1 
00:41:55.371352 sshd[1958]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:55.374210 systemd[1]: sshd@4-172.31.16.189:22-147.75.109.163:57408.service: Deactivated successfully. Nov 1 00:41:55.374991 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:41:55.375131 systemd[1]: session-5.scope: Consumed 4.930s CPU time. Nov 1 00:41:55.375528 systemd-logind[1720]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:41:55.376511 systemd-logind[1720]: Removed session 5. Nov 1 00:41:57.710321 kubelet[2580]: I1101 00:41:57.710288 2580 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:41:57.711130 env[1734]: time="2025-11-01T00:41:57.711067030Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:41:57.711477 kubelet[2580]: I1101 00:41:57.711292 2580 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:41:58.440728 systemd[1]: Created slice kubepods-burstable-pod65dce9c9_f13d_4597_9b53_0ca436e98fe5.slice. Nov 1 00:41:58.446235 systemd[1]: Created slice kubepods-besteffort-pod870f3cdf_9375_4baf_af8d_2f597c352216.slice. Nov 1 00:41:58.471031 kubelet[2580]: I1101 00:41:58.470999 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc8ck\" (UniqueName: \"kubernetes.io/projected/870f3cdf-9375-4baf-af8d-2f597c352216-kube-api-access-zc8ck\") pod \"kube-proxy-qc4ct\" (UID: \"870f3cdf-9375-4baf-af8d-2f597c352216\") " pod="kube-system/kube-proxy-qc4ct" Nov 1 00:41:58.471315 kubelet[2580]: I1101 00:41:58.471273 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-bpf-maps\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.471418 kubelet[2580]: I1101 00:41:58.471408 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cilium-cgroup\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.471491 kubelet[2580]: I1101 00:41:58.471482 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-host-proc-sys-net\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.471556 kubelet[2580]: I1101 00:41:58.471548 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65dce9c9-f13d-4597-9b53-0ca436e98fe5-hubble-tls\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.471623 kubelet[2580]: I1101 00:41:58.471615 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/870f3cdf-9375-4baf-af8d-2f597c352216-kube-proxy\") pod \"kube-proxy-qc4ct\" (UID: \"870f3cdf-9375-4baf-af8d-2f597c352216\") " pod="kube-system/kube-proxy-qc4ct" Nov 1 00:41:58.471701 kubelet[2580]: I1101 00:41:58.471693 2580 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cni-path\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.471768 kubelet[2580]: I1101 00:41:58.471759 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-xtables-lock\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.471834 kubelet[2580]: I1101 00:41:58.471824 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbbcv\" (UniqueName: \"kubernetes.io/projected/65dce9c9-f13d-4597-9b53-0ca436e98fe5-kube-api-access-qbbcv\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.471896 kubelet[2580]: I1101 00:41:58.471887 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/870f3cdf-9375-4baf-af8d-2f597c352216-lib-modules\") pod \"kube-proxy-qc4ct\" (UID: \"870f3cdf-9375-4baf-af8d-2f597c352216\") " pod="kube-system/kube-proxy-qc4ct" Nov 1 00:41:58.471959 kubelet[2580]: I1101 00:41:58.471950 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/65dce9c9-f13d-4597-9b53-0ca436e98fe5-clustermesh-secrets\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.472027 kubelet[2580]: I1101 00:41:58.472019 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-host-proc-sys-kernel\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.472119 kubelet[2580]: I1101 00:41:58.472107 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/870f3cdf-9375-4baf-af8d-2f597c352216-xtables-lock\") pod \"kube-proxy-qc4ct\" (UID: \"870f3cdf-9375-4baf-af8d-2f597c352216\") " pod="kube-system/kube-proxy-qc4ct" Nov 1 00:41:58.472495 kubelet[2580]: I1101 00:41:58.472466 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cilium-run\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.472594 kubelet[2580]: I1101 00:41:58.472584 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-hostproc\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.472657 kubelet[2580]: I1101 00:41:58.472647 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-etc-cni-netd\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.472735 kubelet[2580]: I1101 00:41:58.472726 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-lib-modules\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.472799 kubelet[2580]: I1101 00:41:58.472790 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cilium-config-path\") pod \"cilium-czmgx\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " pod="kube-system/cilium-czmgx" Nov 1 00:41:58.574203 kubelet[2580]: I1101 00:41:58.574173 2580 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:41:58.745558 env[1734]: time="2025-11-01T00:41:58.745439079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-czmgx,Uid:65dce9c9-f13d-4597-9b53-0ca436e98fe5,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:58.755493 env[1734]: time="2025-11-01T00:41:58.755449011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qc4ct,Uid:870f3cdf-9375-4baf-af8d-2f597c352216,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:58.782393 env[1734]: time="2025-11-01T00:41:58.782337576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:58.782632 env[1734]: time="2025-11-01T00:41:58.782557510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:58.782632 env[1734]: time="2025-11-01T00:41:58.782570909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:58.782986 env[1734]: time="2025-11-01T00:41:58.782948854Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792 pid=2666 runtime=io.containerd.runc.v2 Nov 1 00:41:58.809597 env[1734]: time="2025-11-01T00:41:58.809510239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:58.809850 env[1734]: time="2025-11-01T00:41:58.809812381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:58.810326 env[1734]: time="2025-11-01T00:41:58.810285418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:58.811531 env[1734]: time="2025-11-01T00:41:58.810516042Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/576700e56cc008f3bc71a04647b19818a32fd038a221862a18766b323d4759d2 pid=2688 runtime=io.containerd.runc.v2 Nov 1 00:41:58.812960 systemd[1]: Started cri-containerd-97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792.scope. 
Nov 1 00:41:58.819706 systemd[1]: Created slice kubepods-besteffort-pode5b31f27_fe81_4387_b1c2_ed4e47cac69d.slice. Nov 1 00:41:58.838297 systemd[1]: Started cri-containerd-576700e56cc008f3bc71a04647b19818a32fd038a221862a18766b323d4759d2.scope. Nov 1 00:41:58.864423 env[1734]: time="2025-11-01T00:41:58.864381452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-czmgx,Uid:65dce9c9-f13d-4597-9b53-0ca436e98fe5,Namespace:kube-system,Attempt:0,} returns sandbox id \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\"" Nov 1 00:41:58.866290 env[1734]: time="2025-11-01T00:41:58.866262626Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 00:41:58.875001 kubelet[2580]: I1101 00:41:58.874892 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8r7t\" (UniqueName: \"kubernetes.io/projected/e5b31f27-fe81-4387-b1c2-ed4e47cac69d-kube-api-access-k8r7t\") pod \"cilium-operator-6c4d7847fc-vz6qr\" (UID: \"e5b31f27-fe81-4387-b1c2-ed4e47cac69d\") " pod="kube-system/cilium-operator-6c4d7847fc-vz6qr" Nov 1 00:41:58.875001 kubelet[2580]: I1101 00:41:58.874927 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5b31f27-fe81-4387-b1c2-ed4e47cac69d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vz6qr\" (UID: \"e5b31f27-fe81-4387-b1c2-ed4e47cac69d\") " pod="kube-system/cilium-operator-6c4d7847fc-vz6qr" Nov 1 00:41:58.922684 env[1734]: time="2025-11-01T00:41:58.922648980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qc4ct,Uid:870f3cdf-9375-4baf-af8d-2f597c352216,Namespace:kube-system,Attempt:0,} returns sandbox id \"576700e56cc008f3bc71a04647b19818a32fd038a221862a18766b323d4759d2\"" Nov 1 00:41:58.933263 env[1734]: time="2025-11-01T00:41:58.933214590Z" level=info msg="CreateContainer within sandbox \"576700e56cc008f3bc71a04647b19818a32fd038a221862a18766b323d4759d2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:41:58.960163 env[1734]: time="2025-11-01T00:41:58.960107507Z" level=info msg="CreateContainer within sandbox \"576700e56cc008f3bc71a04647b19818a32fd038a221862a18766b323d4759d2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d474a399ec28b6135b84437fac9e72c2de3c1c382605808d171b6aa04c395f6\"" Nov 1 00:41:58.960980 env[1734]: time="2025-11-01T00:41:58.960948748Z" level=info msg="StartContainer for \"6d474a399ec28b6135b84437fac9e72c2de3c1c382605808d171b6aa04c395f6\"" Nov 1 00:41:58.978939 systemd[1]: Started cri-containerd-6d474a399ec28b6135b84437fac9e72c2de3c1c382605808d171b6aa04c395f6.scope. Nov 1 00:41:59.023885 env[1734]: time="2025-11-01T00:41:59.022038847Z" level=info msg="StartContainer for \"6d474a399ec28b6135b84437fac9e72c2de3c1c382605808d171b6aa04c395f6\" returns successfully" Nov 1 00:41:59.126376 env[1734]: time="2025-11-01T00:41:59.126328164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vz6qr,Uid:e5b31f27-fe81-4387-b1c2-ed4e47cac69d,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:59.150680 env[1734]: time="2025-11-01T00:41:59.150491576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:59.150680 env[1734]: time="2025-11-01T00:41:59.150526383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:59.150680 env[1734]: time="2025-11-01T00:41:59.150537616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:59.157652 env[1734]: time="2025-11-01T00:41:59.156812440Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6 pid=2781 runtime=io.containerd.runc.v2 Nov 1 00:41:59.168621 systemd[1]: Started cri-containerd-349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6.scope. Nov 1 00:41:59.217786 env[1734]: time="2025-11-01T00:41:59.217738912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vz6qr,Uid:e5b31f27-fe81-4387-b1c2-ed4e47cac69d,Namespace:kube-system,Attempt:0,} returns sandbox id \"349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6\"" Nov 1 00:42:00.149573 kubelet[2580]: I1101 00:42:00.149510 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qc4ct" podStartSLOduration=2.149488058 podStartE2EDuration="2.149488058s" podCreationTimestamp="2025-11-01 00:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:00.126707028 +0000 UTC m=+7.375810597" watchObservedRunningTime="2025-11-01 00:42:00.149488058 +0000 UTC m=+7.398591628" Nov 1 00:42:05.724256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount127081040.mount: Deactivated successfully. Nov 1 00:42:06.160195 update_engine[1721]: I1101 00:42:06.159456 1721 update_attempter.cc:509] Updating boot flags... 
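Several kubelet entries in this journal record pod_startup_latency_tracker observations with podStartSLOduration (a bare float in seconds) and podStartE2EDuration fields. A sketch of mining such lines for the SLO durations using only the Go standard library; the regular expression is an assumption taken from the exact field layout shown here, not a stable kubelet interface:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches the fields shown in the pod_startup_latency_tracker entries, e.g.
//   pod="kube-system/kube-proxy-qc4ct" ... podStartSLOduration=2.149488058
var podStart = regexp.MustCompile(`pod="([^"]+)".*podStartSLOduration=([0-9.]+)`)

func main() {
	// Feed journal lines on stdin, e.g.: journalctl -o cat | <this program>
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		m := podStart.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		if d, err := time.ParseDuration(m[2] + "s"); err == nil {
			fmt.Printf("%-55s %v\n", m[1], d)
		}
	}
}
```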
Nov 1 00:42:08.863761 env[1734]: time="2025-11-01T00:42:08.863698152Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:08.867177 env[1734]: time="2025-11-01T00:42:08.867143394Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:08.870171 env[1734]: time="2025-11-01T00:42:08.870124441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:08.871530 env[1734]: time="2025-11-01T00:42:08.871491678Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 1 00:42:08.873283 env[1734]: time="2025-11-01T00:42:08.873252887Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 00:42:08.880521 env[1734]: time="2025-11-01T00:42:08.880461010Z" level=info msg="CreateContainer within sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:42:08.903874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308221077.mount: Deactivated successfully. Nov 1 00:42:08.914216 env[1734]: time="2025-11-01T00:42:08.914166102Z" level=info msg="CreateContainer within sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34\"" Nov 1 00:42:08.914842 env[1734]: time="2025-11-01T00:42:08.914775148Z" level=info msg="StartContainer for \"13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34\"" Nov 1 00:42:08.942297 systemd[1]: Started cri-containerd-13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34.scope. Nov 1 00:42:08.984249 env[1734]: time="2025-11-01T00:42:08.981895199Z" level=info msg="StartContainer for \"13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34\" returns successfully" Nov 1 00:42:08.991351 systemd[1]: cri-containerd-13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34.scope: Deactivated successfully. 
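The tmpmount units above (var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount) use systemd's unit-name encoding: "/" in the mount path becomes "-", and a literal "-" is escaped as "\x2d". A small sketch that reverses that encoding for these entries; it is only a reading aid, not a full reimplementation of systemd-escape:

```go
package main

import (
	"fmt"
	"strings"
)

// unescapeMountUnit recovers the mount point from a systemd mount unit name,
// assuming only the "/"->"-" and "-"->"\x2d" substitutions seen in this log.
func unescapeMountUnit(unit string) string {
	s := strings.TrimSuffix(unit, ".mount")
	s = strings.ReplaceAll(s, "-", "/")
	s = strings.ReplaceAll(s, `\x2d`, "-")
	return "/" + s
}

func main() {
	fmt.Println(unescapeMountUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount1308221077.mount`))
	// /var/lib/containerd/tmpmounts/containerd-mount1308221077
}
```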
Nov 1 00:42:09.174451 env[1734]: time="2025-11-01T00:42:09.174324516Z" level=info msg="shim disconnected" id=13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34 Nov 1 00:42:09.174451 env[1734]: time="2025-11-01T00:42:09.174381975Z" level=warning msg="cleaning up after shim disconnected" id=13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34 namespace=k8s.io Nov 1 00:42:09.174451 env[1734]: time="2025-11-01T00:42:09.174392838Z" level=info msg="cleaning up dead shim" Nov 1 00:42:09.185265 env[1734]: time="2025-11-01T00:42:09.185050473Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3099 runtime=io.containerd.runc.v2\n" Nov 1 00:42:09.896251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34-rootfs.mount: Deactivated successfully. Nov 1 00:42:10.195442 env[1734]: time="2025-11-01T00:42:10.195143281Z" level=info msg="CreateContainer within sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:42:10.212437 env[1734]: time="2025-11-01T00:42:10.212357908Z" level=info msg="CreateContainer within sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e\"" Nov 1 00:42:10.213254 env[1734]: time="2025-11-01T00:42:10.213210364Z" level=info msg="StartContainer for \"c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e\"" Nov 1 00:42:10.241214 systemd[1]: Started cri-containerd-c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e.scope. Nov 1 00:42:10.273054 env[1734]: time="2025-11-01T00:42:10.273006465Z" level=info msg="StartContainer for \"c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e\" returns successfully" Nov 1 00:42:10.286252 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:42:10.286935 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:42:10.287129 systemd[1]: Stopping systemd-sysctl.service... Nov 1 00:42:10.290024 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:42:10.295167 systemd[1]: cri-containerd-c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e.scope: Deactivated successfully. Nov 1 00:42:10.307921 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:42:10.330490 env[1734]: time="2025-11-01T00:42:10.330452096Z" level=info msg="shim disconnected" id=c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e Nov 1 00:42:10.330920 env[1734]: time="2025-11-01T00:42:10.330900200Z" level=warning msg="cleaning up after shim disconnected" id=c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e namespace=k8s.io Nov 1 00:42:10.331001 env[1734]: time="2025-11-01T00:42:10.330991097Z" level=info msg="cleaning up dead shim" Nov 1 00:42:10.340509 env[1734]: time="2025-11-01T00:42:10.340471309Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3166 runtime=io.containerd.runc.v2\n" Nov 1 00:42:10.896713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e-rootfs.mount: Deactivated successfully. 
Nov 1 00:42:11.207682 env[1734]: time="2025-11-01T00:42:11.207322278Z" level=info msg="CreateContainer within sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:42:11.239592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543298896.mount: Deactivated successfully. Nov 1 00:42:11.248511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1083896365.mount: Deactivated successfully. Nov 1 00:42:11.262406 env[1734]: time="2025-11-01T00:42:11.262344343Z" level=info msg="CreateContainer within sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f\"" Nov 1 00:42:11.264536 env[1734]: time="2025-11-01T00:42:11.264506898Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:11.264810 env[1734]: time="2025-11-01T00:42:11.264637320Z" level=info msg="StartContainer for \"da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f\"" Nov 1 00:42:11.270258 env[1734]: time="2025-11-01T00:42:11.270197713Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:11.273170 env[1734]: time="2025-11-01T00:42:11.273138787Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:11.273538 env[1734]: time="2025-11-01T00:42:11.273508265Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 1 00:42:11.280820 env[1734]: time="2025-11-01T00:42:11.280778923Z" level=info msg="CreateContainer within sandbox \"349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 00:42:11.287959 systemd[1]: Started cri-containerd-da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f.scope. Nov 1 00:42:11.312880 env[1734]: time="2025-11-01T00:42:11.312839279Z" level=info msg="CreateContainer within sandbox \"349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\"" Nov 1 00:42:11.315149 env[1734]: time="2025-11-01T00:42:11.315107042Z" level=info msg="StartContainer for \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\"" Nov 1 00:42:11.329985 env[1734]: time="2025-11-01T00:42:11.329946314Z" level=info msg="StartContainer for \"da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f\" returns successfully" Nov 1 00:42:11.336961 systemd[1]: Started cri-containerd-2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d.scope. 
Nov 1 00:42:11.341781 systemd[1]: cri-containerd-da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f.scope: Deactivated successfully. Nov 1 00:42:11.375970 env[1734]: time="2025-11-01T00:42:11.375920680Z" level=info msg="StartContainer for \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\" returns successfully" Nov 1 00:42:11.516794 env[1734]: time="2025-11-01T00:42:11.516653390Z" level=info msg="shim disconnected" id=da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f Nov 1 00:42:11.517068 env[1734]: time="2025-11-01T00:42:11.517042948Z" level=warning msg="cleaning up after shim disconnected" id=da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f namespace=k8s.io Nov 1 00:42:11.517177 env[1734]: time="2025-11-01T00:42:11.517161915Z" level=info msg="cleaning up dead shim" Nov 1 00:42:11.531346 env[1734]: time="2025-11-01T00:42:11.531286818Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3263 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T00:42:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Nov 1 00:42:12.195886 env[1734]: time="2025-11-01T00:42:12.195844651Z" level=info msg="CreateContainer within sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:42:12.221159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1454933082.mount: Deactivated successfully. Nov 1 00:42:12.230501 env[1734]: time="2025-11-01T00:42:12.230454785Z" level=info msg="CreateContainer within sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e\"" Nov 1 00:42:12.231221 env[1734]: time="2025-11-01T00:42:12.231196629Z" level=info msg="StartContainer for \"4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e\"" Nov 1 00:42:12.258161 systemd[1]: Started cri-containerd-4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e.scope. Nov 1 00:42:12.342316 systemd[1]: cri-containerd-4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e.scope: Deactivated successfully. Nov 1 00:42:12.344137 env[1734]: time="2025-11-01T00:42:12.344093601Z" level=info msg="StartContainer for \"4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e\" returns successfully" Nov 1 00:42:12.374556 env[1734]: time="2025-11-01T00:42:12.374515912Z" level=info msg="shim disconnected" id=4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e Nov 1 00:42:12.374891 env[1734]: time="2025-11-01T00:42:12.374572101Z" level=warning msg="cleaning up after shim disconnected" id=4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e namespace=k8s.io Nov 1 00:42:12.374891 env[1734]: time="2025-11-01T00:42:12.374589914Z" level=info msg="cleaning up dead shim" Nov 1 00:42:12.382620 env[1734]: time="2025-11-01T00:42:12.382574125Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3319 runtime=io.containerd.runc.v2\n" Nov 1 00:42:12.898775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e-rootfs.mount: Deactivated successfully. 
Nov 1 00:42:13.208304 env[1734]: time="2025-11-01T00:42:13.208025770Z" level=info msg="CreateContainer within sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:42:13.220738 kubelet[2580]: I1101 00:42:13.220642 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vz6qr" podStartSLOduration=3.165243811 podStartE2EDuration="15.220621886s" podCreationTimestamp="2025-11-01 00:41:58 +0000 UTC" firstStartedPulling="2025-11-01 00:41:59.219279724 +0000 UTC m=+6.468383273" lastFinishedPulling="2025-11-01 00:42:11.274657802 +0000 UTC m=+18.523761348" observedRunningTime="2025-11-01 00:42:12.295976567 +0000 UTC m=+19.545080135" watchObservedRunningTime="2025-11-01 00:42:13.220621886 +0000 UTC m=+20.469725454" Nov 1 00:42:13.231654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493620188.mount: Deactivated successfully. Nov 1 00:42:13.233993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3836434620.mount: Deactivated successfully. Nov 1 00:42:13.240890 env[1734]: time="2025-11-01T00:42:13.240682742Z" level=info msg="CreateContainer within sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\"" Nov 1 00:42:13.242627 env[1734]: time="2025-11-01T00:42:13.242069924Z" level=info msg="StartContainer for \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\"" Nov 1 00:42:13.265657 systemd[1]: Started cri-containerd-345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea.scope. Nov 1 00:42:13.316505 env[1734]: time="2025-11-01T00:42:13.316449891Z" level=info msg="StartContainer for \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\" returns successfully" Nov 1 00:42:13.484652 kubelet[2580]: I1101 00:42:13.484558 2580 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:42:13.532403 systemd[1]: Created slice kubepods-burstable-pod91e05978_c9f2_40d8_ac10_f5f9385cb6db.slice. Nov 1 00:42:13.538733 systemd[1]: Created slice kubepods-burstable-podc88f1a62_b143_450f_ba21_5534221110b9.slice. 
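The "Created slice kubepods-..." entries scattered through this journal follow one pattern: the pod's QoS class plus its UID with dashes mapped to underscores. A hypothetical helper that reproduces the names seen here, useful for matching slices back to pods when reading the log; it is not the kubelet's cgroup code:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName rebuilds the slice names from the journal entries above,
// covering the two QoS classes that actually appear in this log.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "65dce9c9-f13d-4597-9b53-0ca436e98fe5"))  // cilium-czmgx
	fmt.Println(podSliceName("besteffort", "870f3cdf-9375-4baf-af8d-2f597c352216")) // kube-proxy-qc4ct
}
```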
Nov 1 00:42:13.539391 kubelet[2580]: I1101 00:42:13.539262 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91e05978-c9f2-40d8-ac10-f5f9385cb6db-config-volume\") pod \"coredns-674b8bbfcf-gsrr6\" (UID: \"91e05978-c9f2-40d8-ac10-f5f9385cb6db\") " pod="kube-system/coredns-674b8bbfcf-gsrr6" Nov 1 00:42:13.539391 kubelet[2580]: I1101 00:42:13.539324 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95mrb\" (UniqueName: \"kubernetes.io/projected/91e05978-c9f2-40d8-ac10-f5f9385cb6db-kube-api-access-95mrb\") pod \"coredns-674b8bbfcf-gsrr6\" (UID: \"91e05978-c9f2-40d8-ac10-f5f9385cb6db\") " pod="kube-system/coredns-674b8bbfcf-gsrr6" Nov 1 00:42:13.640251 kubelet[2580]: I1101 00:42:13.640200 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp6f7\" (UniqueName: \"kubernetes.io/projected/c88f1a62-b143-450f-ba21-5534221110b9-kube-api-access-sp6f7\") pod \"coredns-674b8bbfcf-jg4sg\" (UID: \"c88f1a62-b143-450f-ba21-5534221110b9\") " pod="kube-system/coredns-674b8bbfcf-jg4sg" Nov 1 00:42:13.640561 kubelet[2580]: I1101 00:42:13.640529 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c88f1a62-b143-450f-ba21-5534221110b9-config-volume\") pod \"coredns-674b8bbfcf-jg4sg\" (UID: \"c88f1a62-b143-450f-ba21-5534221110b9\") " pod="kube-system/coredns-674b8bbfcf-jg4sg" Nov 1 00:42:13.837801 env[1734]: time="2025-11-01T00:42:13.837750731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gsrr6,Uid:91e05978-c9f2-40d8-ac10-f5f9385cb6db,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:13.843889 env[1734]: time="2025-11-01T00:42:13.843847357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jg4sg,Uid:c88f1a62-b143-450f-ba21-5534221110b9,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:15.811610 (udev-worker)[3483]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:42:15.813525 systemd-networkd[1453]: cilium_host: Link UP Nov 1 00:42:15.813725 systemd-networkd[1453]: cilium_net: Link UP Nov 1 00:42:15.814620 systemd-networkd[1453]: cilium_net: Gained carrier Nov 1 00:42:15.816018 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Nov 1 00:42:15.817955 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Nov 1 00:42:15.817313 systemd-networkd[1453]: cilium_host: Gained carrier Nov 1 00:42:15.818431 (udev-worker)[3484]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:42:15.949730 systemd-networkd[1453]: cilium_vxlan: Link UP Nov 1 00:42:15.949738 systemd-networkd[1453]: cilium_vxlan: Gained carrier Nov 1 00:42:15.956486 systemd-networkd[1453]: cilium_net: Gained IPv6LL Nov 1 00:42:16.151450 amazon-ssm-agent[1812]: 2025-11-01 00:42:16 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Nov 1 00:42:16.508405 systemd-networkd[1453]: cilium_host: Gained IPv6LL Nov 1 00:42:16.557263 kernel: NET: Registered PF_ALG protocol family Nov 1 00:42:17.303077 systemd-networkd[1453]: lxc_health: Link UP Nov 1 00:42:17.308736 (udev-worker)[3449]: Network interface NamePolicy= disabled on kernel command line. 
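The systemd-networkd and kernel entries here show the Cilium datapath devices coming up: the cilium_host/cilium_net pair, the cilium_vxlan overlay device, lxc_health, and the per-endpoint lxc* interfaces for the coredns pods. A small illustrative sketch, using only the Go standard library, that lists such interfaces when run on the node; the name prefixes are taken from these entries:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Match the devices reported above: cilium_* and lxc* interfaces.
		if !strings.HasPrefix(ifc.Name, "cilium_") && !strings.HasPrefix(ifc.Name, "lxc") {
			continue
		}
		addrs, _ := ifc.Addrs()
		fmt.Printf("%-20s up=%t addrs=%v\n", ifc.Name, ifc.Flags&net.FlagUp != 0, addrs)
	}
}
```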
Nov 1 00:42:17.311257 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:42:17.313370 systemd-networkd[1453]: lxc_health: Gained carrier Nov 1 00:42:17.481368 systemd-networkd[1453]: lxcbe6085fe7654: Link UP Nov 1 00:42:17.489506 kernel: eth0: renamed from tmpcdef7 Nov 1 00:42:17.493097 systemd-networkd[1453]: lxcca9026982409: Link UP Nov 1 00:42:17.500946 systemd-networkd[1453]: lxcbe6085fe7654: Gained carrier Nov 1 00:42:17.501407 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbe6085fe7654: link becomes ready Nov 1 00:42:17.503292 kernel: eth0: renamed from tmpdeddb Nov 1 00:42:17.509256 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcca9026982409: link becomes ready Nov 1 00:42:17.508766 systemd-networkd[1453]: lxcca9026982409: Gained carrier Nov 1 00:42:17.660409 systemd-networkd[1453]: cilium_vxlan: Gained IPv6LL Nov 1 00:42:18.749607 systemd-networkd[1453]: lxcbe6085fe7654: Gained IPv6LL Nov 1 00:42:18.787438 kubelet[2580]: I1101 00:42:18.787374 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-czmgx" podStartSLOduration=10.780265474 podStartE2EDuration="20.787351833s" podCreationTimestamp="2025-11-01 00:41:58 +0000 UTC" firstStartedPulling="2025-11-01 00:41:58.865835272 +0000 UTC m=+6.114938817" lastFinishedPulling="2025-11-01 00:42:08.872921628 +0000 UTC m=+16.122025176" observedRunningTime="2025-11-01 00:42:14.230894445 +0000 UTC m=+21.479998014" watchObservedRunningTime="2025-11-01 00:42:18.787351833 +0000 UTC m=+26.036455402" Nov 1 00:42:18.876375 systemd-networkd[1453]: lxc_health: Gained IPv6LL Nov 1 00:42:19.198307 systemd-networkd[1453]: lxcca9026982409: Gained IPv6LL Nov 1 00:42:21.951282 env[1734]: time="2025-11-01T00:42:21.951149740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:21.951772 env[1734]: time="2025-11-01T00:42:21.951304454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:21.951772 env[1734]: time="2025-11-01T00:42:21.951339213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:21.952447 env[1734]: time="2025-11-01T00:42:21.952115769Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cdef7426e416bc6d02c7a85204e5365c8f3109b8da863fdf0385db0413cdcc2d pid=3860 runtime=io.containerd.runc.v2 Nov 1 00:42:21.968276 env[1734]: time="2025-11-01T00:42:21.968178633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:21.968530 env[1734]: time="2025-11-01T00:42:21.968480676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:21.968691 env[1734]: time="2025-11-01T00:42:21.968662507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:21.969414 env[1734]: time="2025-11-01T00:42:21.969362584Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/deddb159732d3b791931c4054608f7bf0178465078c815a7fc0a205cb6c22b98 pid=3873 runtime=io.containerd.runc.v2 Nov 1 00:42:21.995724 systemd[1]: Started cri-containerd-cdef7426e416bc6d02c7a85204e5365c8f3109b8da863fdf0385db0413cdcc2d.scope. Nov 1 00:42:22.015413 systemd[1]: run-containerd-runc-k8s.io-cdef7426e416bc6d02c7a85204e5365c8f3109b8da863fdf0385db0413cdcc2d-runc.5pRs4G.mount: Deactivated successfully. Nov 1 00:42:22.040717 systemd[1]: Started cri-containerd-deddb159732d3b791931c4054608f7bf0178465078c815a7fc0a205cb6c22b98.scope. Nov 1 00:42:22.105856 env[1734]: time="2025-11-01T00:42:22.105805691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gsrr6,Uid:91e05978-c9f2-40d8-ac10-f5f9385cb6db,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdef7426e416bc6d02c7a85204e5365c8f3109b8da863fdf0385db0413cdcc2d\"" Nov 1 00:42:22.126742 env[1734]: time="2025-11-01T00:42:22.126692216Z" level=info msg="CreateContainer within sandbox \"cdef7426e416bc6d02c7a85204e5365c8f3109b8da863fdf0385db0413cdcc2d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:42:22.155440 env[1734]: time="2025-11-01T00:42:22.155382405Z" level=info msg="CreateContainer within sandbox \"cdef7426e416bc6d02c7a85204e5365c8f3109b8da863fdf0385db0413cdcc2d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8987e04b88e0595b393b7d9a8fa6faf5de063515f7f298ab6dfb5a49de22bdb\"" Nov 1 00:42:22.156450 env[1734]: time="2025-11-01T00:42:22.156415657Z" level=info msg="StartContainer for \"b8987e04b88e0595b393b7d9a8fa6faf5de063515f7f298ab6dfb5a49de22bdb\"" Nov 1 00:42:22.182877 env[1734]: time="2025-11-01T00:42:22.182828822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jg4sg,Uid:c88f1a62-b143-450f-ba21-5534221110b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"deddb159732d3b791931c4054608f7bf0178465078c815a7fc0a205cb6c22b98\"" Nov 1 00:42:22.192734 systemd[1]: Started cri-containerd-b8987e04b88e0595b393b7d9a8fa6faf5de063515f7f298ab6dfb5a49de22bdb.scope. Nov 1 00:42:22.195405 env[1734]: time="2025-11-01T00:42:22.195362357Z" level=info msg="CreateContainer within sandbox \"deddb159732d3b791931c4054608f7bf0178465078c815a7fc0a205cb6c22b98\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:42:22.223747 env[1734]: time="2025-11-01T00:42:22.223640931Z" level=info msg="CreateContainer within sandbox \"deddb159732d3b791931c4054608f7bf0178465078c815a7fc0a205cb6c22b98\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8df20d4fde1789791e88c9afc6774380e7158cff683a485f62e99711e52ca9c1\"" Nov 1 00:42:22.224579 env[1734]: time="2025-11-01T00:42:22.224552977Z" level=info msg="StartContainer for \"8df20d4fde1789791e88c9afc6774380e7158cff683a485f62e99711e52ca9c1\"" Nov 1 00:42:22.249866 systemd[1]: Started cri-containerd-8df20d4fde1789791e88c9afc6774380e7158cff683a485f62e99711e52ca9c1.scope. 
Nov 1 00:42:22.260800 env[1734]: time="2025-11-01T00:42:22.260755172Z" level=info msg="StartContainer for \"b8987e04b88e0595b393b7d9a8fa6faf5de063515f7f298ab6dfb5a49de22bdb\" returns successfully" Nov 1 00:42:22.294939 env[1734]: time="2025-11-01T00:42:22.294896890Z" level=info msg="StartContainer for \"8df20d4fde1789791e88c9afc6774380e7158cff683a485f62e99711e52ca9c1\" returns successfully" Nov 1 00:42:23.254148 kubelet[2580]: I1101 00:42:23.254039 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jg4sg" podStartSLOduration=25.253958268 podStartE2EDuration="25.253958268s" podCreationTimestamp="2025-11-01 00:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:23.253446794 +0000 UTC m=+30.502550364" watchObservedRunningTime="2025-11-01 00:42:23.253958268 +0000 UTC m=+30.503061836" Nov 1 00:42:23.270072 kubelet[2580]: I1101 00:42:23.270012 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gsrr6" podStartSLOduration=25.269996212 podStartE2EDuration="25.269996212s" podCreationTimestamp="2025-11-01 00:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:23.26867758 +0000 UTC m=+30.517781150" watchObservedRunningTime="2025-11-01 00:42:23.269996212 +0000 UTC m=+30.519099782" Nov 1 00:42:26.871008 systemd[1]: Started sshd@5-172.31.16.189:22-147.75.109.163:41378.service. Nov 1 00:42:27.075634 sshd[4014]: Accepted publickey for core from 147.75.109.163 port 41378 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:27.077497 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:27.082294 systemd-logind[1720]: New session 6 of user core. Nov 1 00:42:27.082953 systemd[1]: Started session-6.scope. Nov 1 00:42:27.383527 sshd[4014]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:27.386434 systemd-logind[1720]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:42:27.386803 systemd[1]: sshd@5-172.31.16.189:22-147.75.109.163:41378.service: Deactivated successfully. Nov 1 00:42:27.387500 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:42:27.388602 systemd-logind[1720]: Removed session 6. Nov 1 00:42:32.409177 systemd[1]: Started sshd@6-172.31.16.189:22-147.75.109.163:34296.service. Nov 1 00:42:32.570091 sshd[4028]: Accepted publickey for core from 147.75.109.163 port 34296 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:32.572054 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:32.577298 systemd-logind[1720]: New session 7 of user core. Nov 1 00:42:32.577535 systemd[1]: Started session-7.scope. Nov 1 00:42:32.787196 sshd[4028]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:32.790913 systemd[1]: sshd@6-172.31.16.189:22-147.75.109.163:34296.service: Deactivated successfully. Nov 1 00:42:32.791797 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:42:32.792619 systemd-logind[1720]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:42:32.793756 systemd-logind[1720]: Removed session 7. Nov 1 00:42:37.813913 systemd[1]: Started sshd@7-172.31.16.189:22-147.75.109.163:34302.service. 
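Each SSH login in the remainder of the journal follows the same pattern: systemd starts a per-connection unit whose name embeds the local and peer endpoints, pam_unix opens a session for core, systemd-logind assigns a numbered session, and the entries reverse on logout. A sketch that pulls the peer address and port out of such unit names; the pattern is inferred from the entries in this log, not from sshd or systemd documentation:

```go
package main

import (
	"fmt"
	"regexp"
)

// Per-connection sshd units in this journal look like
// "sshd@<instance>-<local-addr>:22-<peer-addr>:<peer-port>.service".
var sshdUnit = regexp.MustCompile(`^sshd@(\d+)-([0-9.]+):(\d+)-([0-9.]+):(\d+)\.service$`)

func main() {
	for _, u := range []string{
		"sshd@5-172.31.16.189:22-147.75.109.163:41378.service",
		"sshd@6-172.31.16.189:22-147.75.109.163:34296.service",
	} {
		if m := sshdUnit.FindStringSubmatch(u); m != nil {
			fmt.Printf("instance=%s local=%s:%s peer=%s:%s\n", m[1], m[2], m[3], m[4], m[5])
		}
	}
}
```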
Nov 1 00:42:37.982119 sshd[4041]: Accepted publickey for core from 147.75.109.163 port 34302 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:37.983846 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:37.989095 systemd[1]: Started session-8.scope. Nov 1 00:42:37.989448 systemd-logind[1720]: New session 8 of user core. Nov 1 00:42:38.187090 sshd[4041]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:38.189850 systemd[1]: sshd@7-172.31.16.189:22-147.75.109.163:34302.service: Deactivated successfully. Nov 1 00:42:38.190717 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:42:38.191465 systemd-logind[1720]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:42:38.192297 systemd-logind[1720]: Removed session 8. Nov 1 00:42:43.214926 systemd[1]: Started sshd@8-172.31.16.189:22-147.75.109.163:57634.service. Nov 1 00:42:43.380455 sshd[4054]: Accepted publickey for core from 147.75.109.163 port 57634 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:43.381946 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:43.387314 systemd-logind[1720]: New session 9 of user core. Nov 1 00:42:43.387532 systemd[1]: Started session-9.scope. Nov 1 00:42:43.589736 sshd[4054]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:43.592553 systemd[1]: sshd@8-172.31.16.189:22-147.75.109.163:57634.service: Deactivated successfully. Nov 1 00:42:43.593297 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:42:43.594221 systemd-logind[1720]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:42:43.595145 systemd-logind[1720]: Removed session 9. Nov 1 00:42:43.614127 systemd[1]: Started sshd@9-172.31.16.189:22-147.75.109.163:57650.service. Nov 1 00:42:43.769516 sshd[4068]: Accepted publickey for core from 147.75.109.163 port 57650 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:43.770956 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:43.776389 systemd[1]: Started session-10.scope. Nov 1 00:42:43.777779 systemd-logind[1720]: New session 10 of user core. Nov 1 00:42:44.064351 sshd[4068]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:44.067907 systemd-logind[1720]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:42:44.069655 systemd[1]: sshd@9-172.31.16.189:22-147.75.109.163:57650.service: Deactivated successfully. Nov 1 00:42:44.070447 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:42:44.071803 systemd-logind[1720]: Removed session 10. Nov 1 00:42:44.091904 systemd[1]: Started sshd@10-172.31.16.189:22-147.75.109.163:57654.service. Nov 1 00:42:44.267987 sshd[4077]: Accepted publickey for core from 147.75.109.163 port 57654 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:44.269564 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:44.274726 systemd-logind[1720]: New session 11 of user core. Nov 1 00:42:44.276290 systemd[1]: Started session-11.scope. Nov 1 00:42:44.480970 sshd[4077]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:44.483957 systemd[1]: sshd@10-172.31.16.189:22-147.75.109.163:57654.service: Deactivated successfully. Nov 1 00:42:44.484690 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:42:44.485311 systemd-logind[1720]: Session 11 logged out. 
Waiting for processes to exit. Nov 1 00:42:44.486078 systemd-logind[1720]: Removed session 11. Nov 1 00:42:49.506090 systemd[1]: Started sshd@11-172.31.16.189:22-147.75.109.163:57666.service. Nov 1 00:42:49.659792 sshd[4089]: Accepted publickey for core from 147.75.109.163 port 57666 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:49.662150 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:49.667948 systemd-logind[1720]: New session 12 of user core. Nov 1 00:42:49.668553 systemd[1]: Started session-12.scope. Nov 1 00:42:49.868729 sshd[4089]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:49.871764 systemd-logind[1720]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:42:49.871951 systemd[1]: sshd@11-172.31.16.189:22-147.75.109.163:57666.service: Deactivated successfully. Nov 1 00:42:49.872634 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:42:49.873418 systemd-logind[1720]: Removed session 12. Nov 1 00:42:54.895570 systemd[1]: Started sshd@12-172.31.16.189:22-147.75.109.163:44954.service. Nov 1 00:42:55.054265 sshd[4103]: Accepted publickey for core from 147.75.109.163 port 44954 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:55.055828 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:55.061722 systemd[1]: Started session-13.scope. Nov 1 00:42:55.062083 systemd-logind[1720]: New session 13 of user core. Nov 1 00:42:55.256011 sshd[4103]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:55.259067 systemd[1]: sshd@12-172.31.16.189:22-147.75.109.163:44954.service: Deactivated successfully. Nov 1 00:42:55.259808 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:42:55.260301 systemd-logind[1720]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:42:55.261087 systemd-logind[1720]: Removed session 13. Nov 1 00:42:55.280990 systemd[1]: Started sshd@13-172.31.16.189:22-147.75.109.163:44956.service. Nov 1 00:42:55.438539 sshd[4115]: Accepted publickey for core from 147.75.109.163 port 44956 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:55.439916 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:55.445252 systemd[1]: Started session-14.scope. Nov 1 00:42:55.445791 systemd-logind[1720]: New session 14 of user core. Nov 1 00:42:56.106774 sshd[4115]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:56.109908 systemd[1]: sshd@13-172.31.16.189:22-147.75.109.163:44956.service: Deactivated successfully. Nov 1 00:42:56.110739 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:42:56.112064 systemd-logind[1720]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:42:56.113313 systemd-logind[1720]: Removed session 14. Nov 1 00:42:56.132779 systemd[1]: Started sshd@14-172.31.16.189:22-147.75.109.163:44960.service. Nov 1 00:42:56.308761 sshd[4125]: Accepted publickey for core from 147.75.109.163 port 44960 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:56.310046 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:56.314976 systemd[1]: Started session-15.scope. Nov 1 00:42:56.315629 systemd-logind[1720]: New session 15 of user core. 
Nov 1 00:42:57.102392 sshd[4125]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:57.111579 systemd[1]: sshd@14-172.31.16.189:22-147.75.109.163:44960.service: Deactivated successfully. Nov 1 00:42:57.112341 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:42:57.113588 systemd-logind[1720]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:42:57.114549 systemd-logind[1720]: Removed session 15. Nov 1 00:42:57.128492 systemd[1]: Started sshd@15-172.31.16.189:22-147.75.109.163:44968.service. Nov 1 00:42:57.298790 sshd[4142]: Accepted publickey for core from 147.75.109.163 port 44968 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:57.300207 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:57.305865 systemd[1]: Started session-16.scope. Nov 1 00:42:57.307055 systemd-logind[1720]: New session 16 of user core. Nov 1 00:42:57.663204 sshd[4142]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:57.666085 systemd-logind[1720]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:42:57.666295 systemd[1]: sshd@15-172.31.16.189:22-147.75.109.163:44968.service: Deactivated successfully. Nov 1 00:42:57.667154 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:42:57.667987 systemd-logind[1720]: Removed session 16. Nov 1 00:42:57.688772 systemd[1]: Started sshd@16-172.31.16.189:22-147.75.109.163:44978.service. Nov 1 00:42:57.847682 sshd[4152]: Accepted publickey for core from 147.75.109.163 port 44978 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:57.849083 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:57.854215 systemd-logind[1720]: New session 17 of user core. Nov 1 00:42:57.854584 systemd[1]: Started session-17.scope. Nov 1 00:42:58.055211 sshd[4152]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:58.058570 systemd[1]: sshd@16-172.31.16.189:22-147.75.109.163:44978.service: Deactivated successfully. Nov 1 00:42:58.059371 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:42:58.059393 systemd-logind[1720]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:42:58.060627 systemd-logind[1720]: Removed session 17. Nov 1 00:43:03.084772 systemd[1]: Started sshd@17-172.31.16.189:22-147.75.109.163:46740.service. Nov 1 00:43:03.243800 sshd[4167]: Accepted publickey for core from 147.75.109.163 port 46740 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:43:03.245461 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:03.250247 systemd-logind[1720]: New session 18 of user core. Nov 1 00:43:03.250807 systemd[1]: Started session-18.scope. Nov 1 00:43:03.458557 sshd[4167]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:03.462489 systemd[1]: sshd@17-172.31.16.189:22-147.75.109.163:46740.service: Deactivated successfully. Nov 1 00:43:03.463516 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:43:03.464459 systemd-logind[1720]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:43:03.465439 systemd-logind[1720]: Removed session 18. Nov 1 00:43:08.486903 systemd[1]: Started sshd@18-172.31.16.189:22-147.75.109.163:46746.service. 
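Every connection from 147.75.109.163 in this stretch follows the same shape: sshd accepts the public key, pam_unix opens a session for core, systemd-logind registers session N together with a session-N.scope unit, and a few hundred milliseconds later the session closes and the scope is deactivated. A minimal sketch (assuming one journal entry per line in the short-precise timestamp format shown here; the file name pair_ssh_sessions.py is just illustrative) that pairs the open/close events by sshd PID to measure how long each session lasted:

# pair_ssh_sessions.py -- rough sketch; assumes one entry per line, formatted like:
#   "Nov 1 00:42:43.380455 sshd[4054]: Accepted publickey for core from ..."
#   "Nov 1 00:42:43.589736 sshd[4054]: pam_unix(sshd:session): session closed for user core"
import re
import sys
from datetime import datetime

LINE = re.compile(r"^(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+) sshd\[(\d+)\]: (.*)$")

def parse_ts(ts: str) -> datetime:
    # The journal excerpt carries no year; the 1900 default is fine for durations.
    return datetime.strptime(ts, "%b %d %H:%M:%S.%f")

opened = {}  # sshd pid -> timestamp of the "Accepted publickey" entry

for line in sys.stdin:
    m = LINE.match(line.strip())
    if not m:
        continue
    ts, pid, msg = parse_ts(m.group(1)), m.group(2), m.group(3)
    if msg.startswith("Accepted publickey"):
        opened[pid] = ts
    elif "session closed for user" in msg and pid in opened:
        print(f"sshd[{pid}] session lasted {(ts - opened.pop(pid)).total_seconds():.3f}s")

Feeding it a journal dump with one entry per line (for example the output of journalctl -o short-precise) prints one duration per completed session.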
Nov 1 00:43:08.646659 sshd[4181]: Accepted publickey for core from 147.75.109.163 port 46746 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:43:08.648077 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:08.652872 systemd-logind[1720]: New session 19 of user core. Nov 1 00:43:08.653277 systemd[1]: Started session-19.scope. Nov 1 00:43:08.843558 sshd[4181]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:08.846410 systemd[1]: sshd@18-172.31.16.189:22-147.75.109.163:46746.service: Deactivated successfully. Nov 1 00:43:08.847116 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:43:08.847822 systemd-logind[1720]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:43:08.848698 systemd-logind[1720]: Removed session 19. Nov 1 00:43:13.871648 systemd[1]: Started sshd@19-172.31.16.189:22-147.75.109.163:34820.service. Nov 1 00:43:14.034817 sshd[4193]: Accepted publickey for core from 147.75.109.163 port 34820 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:43:14.036299 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:14.041658 systemd[1]: Started session-20.scope. Nov 1 00:43:14.042086 systemd-logind[1720]: New session 20 of user core. Nov 1 00:43:14.231292 sshd[4193]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:14.235196 systemd-logind[1720]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:43:14.235458 systemd[1]: sshd@19-172.31.16.189:22-147.75.109.163:34820.service: Deactivated successfully. Nov 1 00:43:14.236444 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:43:14.237377 systemd-logind[1720]: Removed session 20. Nov 1 00:43:14.257310 systemd[1]: Started sshd@20-172.31.16.189:22-147.75.109.163:34824.service. Nov 1 00:43:14.418882 sshd[4204]: Accepted publickey for core from 147.75.109.163 port 34824 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:43:14.420486 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:14.426108 systemd[1]: Started session-21.scope. Nov 1 00:43:14.427076 systemd-logind[1720]: New session 21 of user core. Nov 1 00:43:16.511756 systemd[1]: run-containerd-runc-k8s.io-345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea-runc.VaHRO9.mount: Deactivated successfully. Nov 1 00:43:16.515897 env[1734]: time="2025-11-01T00:43:16.515698079Z" level=info msg="StopContainer for \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\" with timeout 30 (s)" Nov 1 00:43:16.518551 env[1734]: time="2025-11-01T00:43:16.518510167Z" level=info msg="Stop container \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\" with signal terminated" Nov 1 00:43:16.548558 systemd[1]: cri-containerd-2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d.scope: Deactivated successfully. 
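The StopContainer entries above show the CRI side of a graceful stop: containerd asks the task to stop "with signal terminated" and only escalates once the 30-second timeout expires, after which the cri-containerd scope is deactivated. The generic pattern behind that log line, sketched here for illustration only (this is not containerd's implementation), is "SIGTERM, wait for the grace period, then SIGKILL":

# stop_with_grace.py -- illustrative only: the generic semantics implied by a log line
# like 'StopContainer ... with timeout 30 (s)' / 'with signal terminated'.
import os
import signal
import time

def stop_process(pid: int, grace_seconds: float = 30.0, poll: float = 0.1) -> None:
    os.kill(pid, signal.SIGTERM)          # polite request, matches "signal terminated"
    deadline = time.monotonic() + grace_seconds
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)               # signal 0 = existence check, sends nothing
        except ProcessLookupError:
            return                        # exited within the grace period
        time.sleep(poll)
    os.kill(pid, signal.SIGKILL)          # grace period expired, force kill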
Nov 1 00:43:16.564490 env[1734]: time="2025-11-01T00:43:16.564417364Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:43:16.573308 env[1734]: time="2025-11-01T00:43:16.573270927Z" level=info msg="StopContainer for \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\" with timeout 2 (s)" Nov 1 00:43:16.573801 env[1734]: time="2025-11-01T00:43:16.573771901Z" level=info msg="Stop container \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\" with signal terminated" Nov 1 00:43:16.580034 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d-rootfs.mount: Deactivated successfully. Nov 1 00:43:16.588378 systemd-networkd[1453]: lxc_health: Link DOWN Nov 1 00:43:16.588389 systemd-networkd[1453]: lxc_health: Lost carrier Nov 1 00:43:16.606812 env[1734]: time="2025-11-01T00:43:16.606763539Z" level=info msg="shim disconnected" id=2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d Nov 1 00:43:16.608353 env[1734]: time="2025-11-01T00:43:16.608310459Z" level=warning msg="cleaning up after shim disconnected" id=2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d namespace=k8s.io Nov 1 00:43:16.608504 env[1734]: time="2025-11-01T00:43:16.608486658Z" level=info msg="cleaning up dead shim" Nov 1 00:43:16.619684 systemd[1]: cri-containerd-345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea.scope: Deactivated successfully. Nov 1 00:43:16.620025 systemd[1]: cri-containerd-345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea.scope: Consumed 7.878s CPU time. Nov 1 00:43:16.629701 env[1734]: time="2025-11-01T00:43:16.629654954Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4259 runtime=io.containerd.runc.v2\n" Nov 1 00:43:16.633822 env[1734]: time="2025-11-01T00:43:16.633774650Z" level=info msg="StopContainer for \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\" returns successfully" Nov 1 00:43:16.634558 env[1734]: time="2025-11-01T00:43:16.634521013Z" level=info msg="StopPodSandbox for \"349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6\"" Nov 1 00:43:16.634752 env[1734]: time="2025-11-01T00:43:16.634595259Z" level=info msg="Container to stop \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:43:16.638892 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6-shm.mount: Deactivated successfully. Nov 1 00:43:16.650026 systemd[1]: cri-containerd-349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6.scope: Deactivated successfully. 
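The cni reload error above is a direct consequence of the REMOVE event on /etc/cni/net.d/05-cilium.conf: once that file is gone the directory holds no loadable network config, so the CRI plugin reports the runtime network as not ready (the kubelet repeats that later in this log). A quick check that mirrors what the message complains about; the .conf/.conflist/.json extension list is a conventional assumption, not something read out of containerd's source:

# cni_conf_check.py -- sanity check mirroring the "no network config found in
# /etc/cni/net.d" error above.
from pathlib import Path

CNI_DIR = Path("/etc/cni/net.d")

def cni_configs(directory: Path = CNI_DIR) -> list[Path]:
    if not directory.is_dir():
        return []
    return sorted(p for p in directory.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

if __name__ == "__main__":
    configs = cni_configs()
    if configs:
        print("CNI configs:", ", ".join(p.name for p in configs))
    else:
        print("no network config found in", CNI_DIR)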
Nov 1 00:43:16.682041 env[1734]: time="2025-11-01T00:43:16.681978207Z" level=info msg="shim disconnected" id=349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6 Nov 1 00:43:16.682041 env[1734]: time="2025-11-01T00:43:16.682029112Z" level=warning msg="cleaning up after shim disconnected" id=349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6 namespace=k8s.io Nov 1 00:43:16.682041 env[1734]: time="2025-11-01T00:43:16.682042426Z" level=info msg="cleaning up dead shim" Nov 1 00:43:16.682431 env[1734]: time="2025-11-01T00:43:16.682310191Z" level=info msg="shim disconnected" id=345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea Nov 1 00:43:16.682431 env[1734]: time="2025-11-01T00:43:16.682347464Z" level=warning msg="cleaning up after shim disconnected" id=345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea namespace=k8s.io Nov 1 00:43:16.682431 env[1734]: time="2025-11-01T00:43:16.682360552Z" level=info msg="cleaning up dead shim" Nov 1 00:43:16.692980 env[1734]: time="2025-11-01T00:43:16.692908996Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4303 runtime=io.containerd.runc.v2\n" Nov 1 00:43:16.693806 env[1734]: time="2025-11-01T00:43:16.693763869Z" level=info msg="TearDown network for sandbox \"349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6\" successfully" Nov 1 00:43:16.693806 env[1734]: time="2025-11-01T00:43:16.693802015Z" level=info msg="StopPodSandbox for \"349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6\" returns successfully" Nov 1 00:43:16.707266 env[1734]: time="2025-11-01T00:43:16.705424610Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4304 runtime=io.containerd.runc.v2\n" Nov 1 00:43:16.709379 env[1734]: time="2025-11-01T00:43:16.709343360Z" level=info msg="StopContainer for \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\" returns successfully" Nov 1 00:43:16.709805 env[1734]: time="2025-11-01T00:43:16.709780277Z" level=info msg="StopPodSandbox for \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\"" Nov 1 00:43:16.709862 env[1734]: time="2025-11-01T00:43:16.709834765Z" level=info msg="Container to stop \"da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:43:16.709862 env[1734]: time="2025-11-01T00:43:16.709847624Z" level=info msg="Container to stop \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:43:16.709925 env[1734]: time="2025-11-01T00:43:16.709857989Z" level=info msg="Container to stop \"13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:43:16.709925 env[1734]: time="2025-11-01T00:43:16.709870334Z" level=info msg="Container to stop \"c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:43:16.709925 env[1734]: time="2025-11-01T00:43:16.709880403Z" level=info msg="Container to stop \"4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:43:16.718414 systemd[1]: 
cri-containerd-97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792.scope: Deactivated successfully. Nov 1 00:43:16.755527 env[1734]: time="2025-11-01T00:43:16.755483977Z" level=info msg="shim disconnected" id=97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792 Nov 1 00:43:16.755527 env[1734]: time="2025-11-01T00:43:16.755527678Z" level=warning msg="cleaning up after shim disconnected" id=97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792 namespace=k8s.io Nov 1 00:43:16.755758 env[1734]: time="2025-11-01T00:43:16.755537246Z" level=info msg="cleaning up dead shim" Nov 1 00:43:16.764849 env[1734]: time="2025-11-01T00:43:16.764709984Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4349 runtime=io.containerd.runc.v2\n" Nov 1 00:43:16.765659 env[1734]: time="2025-11-01T00:43:16.765618504Z" level=info msg="TearDown network for sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" successfully" Nov 1 00:43:16.766270 env[1734]: time="2025-11-01T00:43:16.766208290Z" level=info msg="StopPodSandbox for \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" returns successfully" Nov 1 00:43:16.796195 kubelet[2580]: I1101 00:43:16.796164 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8r7t\" (UniqueName: \"kubernetes.io/projected/e5b31f27-fe81-4387-b1c2-ed4e47cac69d-kube-api-access-k8r7t\") pod \"e5b31f27-fe81-4387-b1c2-ed4e47cac69d\" (UID: \"e5b31f27-fe81-4387-b1c2-ed4e47cac69d\") " Nov 1 00:43:16.796677 kubelet[2580]: I1101 00:43:16.796625 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5b31f27-fe81-4387-b1c2-ed4e47cac69d-cilium-config-path\") pod \"e5b31f27-fe81-4387-b1c2-ed4e47cac69d\" (UID: \"e5b31f27-fe81-4387-b1c2-ed4e47cac69d\") " Nov 1 00:43:16.808768 kubelet[2580]: I1101 00:43:16.808725 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5b31f27-fe81-4387-b1c2-ed4e47cac69d-kube-api-access-k8r7t" (OuterVolumeSpecName: "kube-api-access-k8r7t") pod "e5b31f27-fe81-4387-b1c2-ed4e47cac69d" (UID: "e5b31f27-fe81-4387-b1c2-ed4e47cac69d"). InnerVolumeSpecName "kube-api-access-k8r7t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:43:16.809094 kubelet[2580]: I1101 00:43:16.805208 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5b31f27-fe81-4387-b1c2-ed4e47cac69d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e5b31f27-fe81-4387-b1c2-ed4e47cac69d" (UID: "e5b31f27-fe81-4387-b1c2-ed4e47cac69d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:43:16.897844 kubelet[2580]: I1101 00:43:16.897809 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65dce9c9-f13d-4597-9b53-0ca436e98fe5-hubble-tls\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 00:43:16.898607 kubelet[2580]: I1101 00:43:16.898043 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-lib-modules\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 00:43:16.898607 kubelet[2580]: I1101 00:43:16.898091 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cilium-cgroup\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 00:43:16.898607 kubelet[2580]: I1101 00:43:16.898106 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-xtables-lock\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 00:43:16.898607 kubelet[2580]: I1101 00:43:16.898122 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-bpf-maps\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 00:43:16.898607 kubelet[2580]: I1101 00:43:16.898158 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/65dce9c9-f13d-4597-9b53-0ca436e98fe5-clustermesh-secrets\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 00:43:16.898607 kubelet[2580]: I1101 00:43:16.898176 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-host-proc-sys-net\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 00:43:16.898846 kubelet[2580]: I1101 00:43:16.898195 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cilium-config-path\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 00:43:16.898846 kubelet[2580]: I1101 00:43:16.898210 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cni-path\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 00:43:16.898846 kubelet[2580]: I1101 00:43:16.898224 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-host-proc-sys-kernel\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 
00:43:16.898846 kubelet[2580]: I1101 00:43:16.898270 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-etc-cni-netd\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 00:43:16.898846 kubelet[2580]: I1101 00:43:16.898288 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbbcv\" (UniqueName: \"kubernetes.io/projected/65dce9c9-f13d-4597-9b53-0ca436e98fe5-kube-api-access-qbbcv\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 00:43:16.898846 kubelet[2580]: I1101 00:43:16.898303 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cilium-run\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 00:43:16.899557 kubelet[2580]: I1101 00:43:16.898316 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-hostproc\") pod \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\" (UID: \"65dce9c9-f13d-4597-9b53-0ca436e98fe5\") " Nov 1 00:43:16.899557 kubelet[2580]: I1101 00:43:16.898357 2580 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k8r7t\" (UniqueName: \"kubernetes.io/projected/e5b31f27-fe81-4387-b1c2-ed4e47cac69d-kube-api-access-k8r7t\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:16.899557 kubelet[2580]: I1101 00:43:16.898385 2580 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5b31f27-fe81-4387-b1c2-ed4e47cac69d-cilium-config-path\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:16.899557 kubelet[2580]: I1101 00:43:16.898444 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-hostproc" (OuterVolumeSpecName: "hostproc") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:16.899557 kubelet[2580]: I1101 00:43:16.899086 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:16.899776 kubelet[2580]: I1101 00:43:16.899759 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:16.899859 kubelet[2580]: I1101 00:43:16.899849 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:16.899938 kubelet[2580]: I1101 00:43:16.899929 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:16.900014 kubelet[2580]: I1101 00:43:16.900004 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:16.903498 kubelet[2580]: I1101 00:43:16.903444 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65dce9c9-f13d-4597-9b53-0ca436e98fe5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:43:16.904627 kubelet[2580]: I1101 00:43:16.904062 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:16.904627 kubelet[2580]: I1101 00:43:16.904278 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65dce9c9-f13d-4597-9b53-0ca436e98fe5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:43:16.904627 kubelet[2580]: I1101 00:43:16.904334 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:16.907405 kubelet[2580]: I1101 00:43:16.907378 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65dce9c9-f13d-4597-9b53-0ca436e98fe5-kube-api-access-qbbcv" (OuterVolumeSpecName: "kube-api-access-qbbcv") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "kube-api-access-qbbcv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:43:16.907527 kubelet[2580]: I1101 00:43:16.907516 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:16.908391 kubelet[2580]: I1101 00:43:16.908350 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:43:16.908391 kubelet[2580]: I1101 00:43:16.908387 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cni-path" (OuterVolumeSpecName: "cni-path") pod "65dce9c9-f13d-4597-9b53-0ca436e98fe5" (UID: "65dce9c9-f13d-4597-9b53-0ca436e98fe5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:16.932817 systemd[1]: Removed slice kubepods-burstable-pod65dce9c9_f13d_4597_9b53_0ca436e98fe5.slice. Nov 1 00:43:16.932911 systemd[1]: kubepods-burstable-pod65dce9c9_f13d_4597_9b53_0ca436e98fe5.slice: Consumed 7.990s CPU time. Nov 1 00:43:16.934584 systemd[1]: Removed slice kubepods-besteffort-pode5b31f27_fe81_4387_b1c2_ed4e47cac69d.slice. Nov 1 00:43:16.998896 kubelet[2580]: I1101 00:43:16.998858 2580 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65dce9c9-f13d-4597-9b53-0ca436e98fe5-hubble-tls\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:16.998896 kubelet[2580]: I1101 00:43:16.998892 2580 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-lib-modules\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:16.998896 kubelet[2580]: I1101 00:43:16.998901 2580 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cilium-cgroup\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:16.999142 kubelet[2580]: I1101 00:43:16.998911 2580 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-xtables-lock\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:16.999142 kubelet[2580]: I1101 00:43:16.998923 2580 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-bpf-maps\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:16.999142 kubelet[2580]: I1101 00:43:16.998932 2580 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/65dce9c9-f13d-4597-9b53-0ca436e98fe5-clustermesh-secrets\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:16.999142 kubelet[2580]: I1101 00:43:16.998941 2580 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-host-proc-sys-net\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:16.999142 kubelet[2580]: I1101 00:43:16.998948 2580 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cilium-config-path\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:16.999142 kubelet[2580]: I1101 00:43:16.998956 2580 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cni-path\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:17.001481 kubelet[2580]: I1101 00:43:17.001415 2580 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-host-proc-sys-kernel\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:17.001481 kubelet[2580]: I1101 00:43:17.001454 2580 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-etc-cni-netd\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:17.001481 kubelet[2580]: I1101 00:43:17.001464 2580 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qbbcv\" (UniqueName: \"kubernetes.io/projected/65dce9c9-f13d-4597-9b53-0ca436e98fe5-kube-api-access-qbbcv\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:17.001481 kubelet[2580]: I1101 00:43:17.001476 2580 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-cilium-run\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:17.001481 kubelet[2580]: I1101 00:43:17.001486 2580 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65dce9c9-f13d-4597-9b53-0ca436e98fe5-hostproc\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:17.379717 kubelet[2580]: I1101 00:43:17.378611 2580 scope.go:117] "RemoveContainer" containerID="345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea" Nov 1 00:43:17.383854 env[1734]: time="2025-11-01T00:43:17.383452141Z" level=info msg="RemoveContainer for \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\"" Nov 1 00:43:17.391161 env[1734]: time="2025-11-01T00:43:17.391015535Z" level=info msg="RemoveContainer for \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\" returns successfully" Nov 1 00:43:17.391767 kubelet[2580]: I1101 00:43:17.391724 2580 scope.go:117] "RemoveContainer" containerID="4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e" Nov 1 00:43:17.393054 env[1734]: time="2025-11-01T00:43:17.393018626Z" level=info msg="RemoveContainer for \"4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e\"" Nov 1 00:43:17.399194 env[1734]: time="2025-11-01T00:43:17.399062759Z" level=info msg="RemoveContainer for \"4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e\" returns successfully" Nov 1 00:43:17.400844 kubelet[2580]: I1101 00:43:17.400813 2580 scope.go:117] "RemoveContainer" containerID="da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f" Nov 1 00:43:17.402094 env[1734]: time="2025-11-01T00:43:17.402056098Z" level=info msg="RemoveContainer for \"da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f\"" Nov 1 00:43:17.407538 env[1734]: 
time="2025-11-01T00:43:17.407496457Z" level=info msg="RemoveContainer for \"da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f\" returns successfully" Nov 1 00:43:17.407880 kubelet[2580]: I1101 00:43:17.407861 2580 scope.go:117] "RemoveContainer" containerID="c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e" Nov 1 00:43:17.409913 env[1734]: time="2025-11-01T00:43:17.409874594Z" level=info msg="RemoveContainer for \"c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e\"" Nov 1 00:43:17.416831 env[1734]: time="2025-11-01T00:43:17.415741765Z" level=info msg="RemoveContainer for \"c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e\" returns successfully" Nov 1 00:43:17.416990 kubelet[2580]: I1101 00:43:17.416078 2580 scope.go:117] "RemoveContainer" containerID="13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34" Nov 1 00:43:17.418808 env[1734]: time="2025-11-01T00:43:17.418755671Z" level=info msg="RemoveContainer for \"13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34\"" Nov 1 00:43:17.428888 env[1734]: time="2025-11-01T00:43:17.428475595Z" level=info msg="RemoveContainer for \"13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34\" returns successfully" Nov 1 00:43:17.432858 kubelet[2580]: I1101 00:43:17.432815 2580 scope.go:117] "RemoveContainer" containerID="345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea" Nov 1 00:43:17.433409 env[1734]: time="2025-11-01T00:43:17.433306990Z" level=error msg="ContainerStatus for \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\": not found" Nov 1 00:43:17.441073 kubelet[2580]: E1101 00:43:17.440995 2580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\": not found" containerID="345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea" Nov 1 00:43:17.443584 kubelet[2580]: I1101 00:43:17.441079 2580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea"} err="failed to get container status \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\": rpc error: code = NotFound desc = an error occurred when try to find container \"345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea\": not found" Nov 1 00:43:17.443803 kubelet[2580]: I1101 00:43:17.443592 2580 scope.go:117] "RemoveContainer" containerID="4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e" Nov 1 00:43:17.444057 env[1734]: time="2025-11-01T00:43:17.443957346Z" level=error msg="ContainerStatus for \"4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e\": not found" Nov 1 00:43:17.444202 kubelet[2580]: E1101 00:43:17.444181 2580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e\": not found" containerID="4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e" Nov 1 
00:43:17.444269 kubelet[2580]: I1101 00:43:17.444212 2580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e"} err="failed to get container status \"4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e\": rpc error: code = NotFound desc = an error occurred when try to find container \"4621ea47a16e2a93b4ab8609a74f34c26c9aff40db924e2e4e5c75d524dfd28e\": not found" Nov 1 00:43:17.444269 kubelet[2580]: I1101 00:43:17.444262 2580 scope.go:117] "RemoveContainer" containerID="da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f" Nov 1 00:43:17.444865 kubelet[2580]: E1101 00:43:17.444603 2580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f\": not found" containerID="da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f" Nov 1 00:43:17.444865 kubelet[2580]: I1101 00:43:17.444620 2580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f"} err="failed to get container status \"da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f\": rpc error: code = NotFound desc = an error occurred when try to find container \"da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f\": not found" Nov 1 00:43:17.444865 kubelet[2580]: I1101 00:43:17.444634 2580 scope.go:117] "RemoveContainer" containerID="c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e" Nov 1 00:43:17.444958 env[1734]: time="2025-11-01T00:43:17.444452259Z" level=error msg="ContainerStatus for \"da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da112742fcb28a4d2af5fc55f3d22b2e08acd15b7dcc355be8bb97230046436f\": not found" Nov 1 00:43:17.444958 env[1734]: time="2025-11-01T00:43:17.444785102Z" level=error msg="ContainerStatus for \"c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e\": not found" Nov 1 00:43:17.445020 kubelet[2580]: E1101 00:43:17.444875 2580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e\": not found" containerID="c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e" Nov 1 00:43:17.445020 kubelet[2580]: I1101 00:43:17.444890 2580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e"} err="failed to get container status \"c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6773d5b36ac3e7e3812af60ab3857f91242d39ae06e1e833511ff154277019e\": not found" Nov 1 00:43:17.445020 kubelet[2580]: I1101 00:43:17.444904 2580 scope.go:117] "RemoveContainer" containerID="13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34" Nov 1 00:43:17.445100 env[1734]: time="2025-11-01T00:43:17.445011932Z" level=error msg="ContainerStatus for 
\"13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34\": not found" Nov 1 00:43:17.445127 kubelet[2580]: E1101 00:43:17.445096 2580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34\": not found" containerID="13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34" Nov 1 00:43:17.445127 kubelet[2580]: I1101 00:43:17.445110 2580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34"} err="failed to get container status \"13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34\": rpc error: code = NotFound desc = an error occurred when try to find container \"13e75bd75acfc6be2647b66a8b7dc9387bd6b7ff1dbeaf58fb80dfd0d50c9c34\": not found" Nov 1 00:43:17.445179 kubelet[2580]: I1101 00:43:17.445129 2580 scope.go:117] "RemoveContainer" containerID="2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d" Nov 1 00:43:17.446109 env[1734]: time="2025-11-01T00:43:17.446083458Z" level=info msg="RemoveContainer for \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\"" Nov 1 00:43:17.451548 env[1734]: time="2025-11-01T00:43:17.451449346Z" level=info msg="RemoveContainer for \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\" returns successfully" Nov 1 00:43:17.451706 kubelet[2580]: I1101 00:43:17.451684 2580 scope.go:117] "RemoveContainer" containerID="2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d" Nov 1 00:43:17.452037 env[1734]: time="2025-11-01T00:43:17.451973417Z" level=error msg="ContainerStatus for \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\": not found" Nov 1 00:43:17.452259 kubelet[2580]: E1101 00:43:17.452221 2580 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\": not found" containerID="2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d" Nov 1 00:43:17.452360 kubelet[2580]: I1101 00:43:17.452330 2580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d"} err="failed to get container status \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b407807f8384dc33e94b46f1d3600b1394f30bb83f6c5059239cbe86e46c96d\": not found" Nov 1 00:43:17.498623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-345e8b76a931aa50d539522402bc3f0c6b08a9b3ca3905199bc5e033d1245cea-rootfs.mount: Deactivated successfully. Nov 1 00:43:17.498725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6-rootfs.mount: Deactivated successfully. 
Nov 1 00:43:17.498787 systemd[1]: var-lib-kubelet-pods-e5b31f27\x2dfe81\x2d4387\x2db1c2\x2ded4e47cac69d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk8r7t.mount: Deactivated successfully. Nov 1 00:43:17.498848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792-rootfs.mount: Deactivated successfully. Nov 1 00:43:17.498913 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792-shm.mount: Deactivated successfully. Nov 1 00:43:17.498977 systemd[1]: var-lib-kubelet-pods-65dce9c9\x2df13d\x2d4597\x2d9b53\x2d0ca436e98fe5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:43:17.499036 systemd[1]: var-lib-kubelet-pods-65dce9c9\x2df13d\x2d4597\x2d9b53\x2d0ca436e98fe5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqbbcv.mount: Deactivated successfully. Nov 1 00:43:17.499090 systemd[1]: var-lib-kubelet-pods-65dce9c9\x2df13d\x2d4597\x2d9b53\x2d0ca436e98fe5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:43:18.092164 kubelet[2580]: E1101 00:43:18.092081 2580 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:43:18.444849 sshd[4204]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:18.448560 systemd-logind[1720]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:43:18.448753 systemd[1]: sshd@20-172.31.16.189:22-147.75.109.163:34824.service: Deactivated successfully. Nov 1 00:43:18.449742 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:43:18.449926 systemd[1]: session-21.scope: Consumed 1.023s CPU time. Nov 1 00:43:18.450938 systemd-logind[1720]: Removed session 21. Nov 1 00:43:18.469907 systemd[1]: Started sshd@21-172.31.16.189:22-147.75.109.163:34836.service. Nov 1 00:43:18.665356 sshd[4368]: Accepted publickey for core from 147.75.109.163 port 34836 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:43:18.666777 sshd[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:18.672970 systemd[1]: Started session-22.scope. Nov 1 00:43:18.673475 systemd-logind[1720]: New session 22 of user core. Nov 1 00:43:18.927501 kubelet[2580]: I1101 00:43:18.927471 2580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65dce9c9-f13d-4597-9b53-0ca436e98fe5" path="/var/lib/kubelet/pods/65dce9c9-f13d-4597-9b53-0ca436e98fe5/volumes" Nov 1 00:43:18.929269 kubelet[2580]: I1101 00:43:18.929247 2580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5b31f27-fe81-4387-b1c2-ed4e47cac69d" path="/var/lib/kubelet/pods/e5b31f27-fe81-4387-b1c2-ed4e47cac69d/volumes" Nov 1 00:43:19.771377 sshd[4368]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:19.774717 systemd-logind[1720]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:43:19.774883 systemd[1]: sshd@21-172.31.16.189:22-147.75.109.163:34836.service: Deactivated successfully. Nov 1 00:43:19.775564 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:43:19.776924 systemd-logind[1720]: Removed session 22. Nov 1 00:43:19.795583 systemd[1]: Started sshd@22-172.31.16.189:22-147.75.109.163:34842.service. 
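The mount units being deactivated above (var-lib-kubelet-pods-…\x2d….mount) are the kubelet volume paths run through systemd's unit-name escaping: "/" becomes "-", and bytes outside the allowed character set are written as \xNN, which is why "-" shows up as \x2d and "~" as \x7e. A simplified re-implementation, enough to reproduce the names in this log (the authoritative behaviour is `systemd-escape --path`; this sketch skips some corner cases such as empty components):

# unit_escape.py -- simplified version of systemd's path-to-unit-name escaping.
def escape_component(component: str) -> str:
    out = []
    for i, ch in enumerate(component):
        if ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
    return "".join(out)

def escape_path(path: str) -> str:
    # "/" separates components and becomes "-" in the unit name.
    return "-".join(escape_component(c) for c in path.strip("/").split("/"))

if __name__ == "__main__":
    p = "/var/lib/kubelet/pods/65dce9c9-f13d-4597-9b53-0ca436e98fe5/volumes/kubernetes.io~projected/hubble-tls"
    print(escape_path(p) + ".mount")

Run on the hubble-tls volume path it prints exactly the hubble\x2dtls.mount unit name deactivated above.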
Nov 1 00:43:19.823800 systemd[1]: Created slice kubepods-burstable-pod5b9bc37f_5555_44a3_ab3a_b0987f3b8c16.slice. Nov 1 00:43:19.917125 kubelet[2580]: I1101 00:43:19.917084 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-bpf-maps\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917125 kubelet[2580]: I1101 00:43:19.917125 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-clustermesh-secrets\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917536 kubelet[2580]: I1101 00:43:19.917156 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-run\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917536 kubelet[2580]: I1101 00:43:19.917170 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-xtables-lock\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917536 kubelet[2580]: I1101 00:43:19.917185 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-host-proc-sys-kernel\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917536 kubelet[2580]: I1101 00:43:19.917200 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-hostproc\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917536 kubelet[2580]: I1101 00:43:19.917239 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-cgroup\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917536 kubelet[2580]: I1101 00:43:19.917281 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-etc-cni-netd\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917705 kubelet[2580]: I1101 00:43:19.917307 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-host-proc-sys-net\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917705 kubelet[2580]: I1101 00:43:19.917327 2580 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-lib-modules\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917705 kubelet[2580]: I1101 00:43:19.917343 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-config-path\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917705 kubelet[2580]: I1101 00:43:19.917360 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqhmc\" (UniqueName: \"kubernetes.io/projected/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-kube-api-access-wqhmc\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917705 kubelet[2580]: I1101 00:43:19.917410 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-ipsec-secrets\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917830 kubelet[2580]: I1101 00:43:19.917425 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cni-path\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.917830 kubelet[2580]: I1101 00:43:19.917440 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-hubble-tls\") pod \"cilium-vt79t\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " pod="kube-system/cilium-vt79t" Nov 1 00:43:19.958272 sshd[4378]: Accepted publickey for core from 147.75.109.163 port 34842 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:43:19.959112 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:19.965375 systemd[1]: Started session-23.scope. Nov 1 00:43:19.965882 systemd-logind[1720]: New session 23 of user core. Nov 1 00:43:20.131189 env[1734]: time="2025-11-01T00:43:20.130644665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vt79t,Uid:5b9bc37f-5555-44a3-ab3a-b0987f3b8c16,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:20.153583 env[1734]: time="2025-11-01T00:43:20.153511878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:20.153866 env[1734]: time="2025-11-01T00:43:20.153745211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:20.153866 env[1734]: time="2025-11-01T00:43:20.153762180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:20.154077 env[1734]: time="2025-11-01T00:43:20.154015896Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e pid=4397 runtime=io.containerd.runc.v2 Nov 1 00:43:20.179676 systemd[1]: Started cri-containerd-6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e.scope. Nov 1 00:43:20.228549 env[1734]: time="2025-11-01T00:43:20.225599186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vt79t,Uid:5b9bc37f-5555-44a3-ab3a-b0987f3b8c16,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e\"" Nov 1 00:43:20.236111 env[1734]: time="2025-11-01T00:43:20.236003024Z" level=info msg="CreateContainer within sandbox \"6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:43:20.260340 env[1734]: time="2025-11-01T00:43:20.260282815Z" level=info msg="CreateContainer within sandbox \"6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58\"" Nov 1 00:43:20.261278 env[1734]: time="2025-11-01T00:43:20.261247515Z" level=info msg="StartContainer for \"985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58\"" Nov 1 00:43:20.277244 systemd[1]: Started cri-containerd-985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58.scope. Nov 1 00:43:20.305216 systemd[1]: cri-containerd-985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58.scope: Deactivated successfully. Nov 1 00:43:20.310118 sshd[4378]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:20.315294 systemd[1]: sshd@22-172.31.16.189:22-147.75.109.163:34842.service: Deactivated successfully. Nov 1 00:43:20.316311 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:43:20.321263 systemd-logind[1720]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:43:20.324160 systemd-logind[1720]: Removed session 23. Nov 1 00:43:20.337494 systemd[1]: Started sshd@23-172.31.16.189:22-147.75.109.163:43710.service. 
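The new sandbox 6dc06a37… gets its own runtime v2 shim, whose working directory and init.pid path appear verbatim in the entries above and below (/run/containerd/io.containerd.runtime.v2.task/k8s.io/<id>). A small helper for looking at that directory; the base path and the init.pid file name are taken from this log, but the rest of each directory's contents vary across containerd versions, so treat it as a debugging aid only:

# list_shim_tasks.py -- list runtime v2 shim task directories and their init pids.
from pathlib import Path

TASK_ROOT = Path("/run/containerd/io.containerd.runtime.v2.task")

def list_tasks(namespace: str = "k8s.io") -> None:
    ns_dir = TASK_ROOT / namespace
    if not ns_dir.is_dir():
        print(f"no task directory for namespace {namespace!r}")
        return
    for task_dir in sorted(ns_dir.iterdir()):
        pid_file = task_dir / "init.pid"
        pid = pid_file.read_text().strip() if pid_file.exists() else "?"
        print(f"{task_dir.name[:12]}  init pid: {pid}")

if __name__ == "__main__":
    list_tasks()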
Nov 1 00:43:20.358565 env[1734]: time="2025-11-01T00:43:20.358432653Z" level=info msg="shim disconnected" id=985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58 Nov 1 00:43:20.358886 env[1734]: time="2025-11-01T00:43:20.358859693Z" level=warning msg="cleaning up after shim disconnected" id=985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58 namespace=k8s.io Nov 1 00:43:20.359016 env[1734]: time="2025-11-01T00:43:20.358998235Z" level=info msg="cleaning up dead shim" Nov 1 00:43:20.371592 env[1734]: time="2025-11-01T00:43:20.371534507Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4460 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T00:43:20Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Nov 1 00:43:20.372258 env[1734]: time="2025-11-01T00:43:20.372148740Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Nov 1 00:43:20.372865 env[1734]: time="2025-11-01T00:43:20.372625056Z" level=error msg="Failed to pipe stderr of container \"985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58\"" error="reading from a closed fifo" Nov 1 00:43:20.373076 env[1734]: time="2025-11-01T00:43:20.372630203Z" level=error msg="Failed to pipe stdout of container \"985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58\"" error="reading from a closed fifo" Nov 1 00:43:20.375950 env[1734]: time="2025-11-01T00:43:20.375866084Z" level=error msg="StartContainer for \"985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Nov 1 00:43:20.376832 kubelet[2580]: E1101 00:43:20.376423 2580 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58" Nov 1 00:43:20.380766 kubelet[2580]: E1101 00:43:20.380696 2580 kuberuntime_manager.go:1358] "Unhandled Error" err=< Nov 1 00:43:20.380766 kubelet[2580]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Nov 1 00:43:20.380766 kubelet[2580]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Nov 1 00:43:20.380766 kubelet[2580]: rm /hostbin/cilium-mount Nov 1 00:43:20.380980 kubelet[2580]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqhmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-vt79t_kube-system(5b9bc37f-5555-44a3-ab3a-b0987f3b8c16): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Nov 1 00:43:20.380980 kubelet[2580]: > logger="UnhandledError" Nov 1 00:43:20.383133 kubelet[2580]: E1101 00:43:20.382257 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vt79t" podUID="5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" Nov 1 00:43:20.392394 env[1734]: time="2025-11-01T00:43:20.392340834Z" level=info msg="StopPodSandbox for \"6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e\"" Nov 1 00:43:20.392797 env[1734]: time="2025-11-01T00:43:20.392753947Z" level=info msg="Container to stop \"985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:43:20.408154 systemd[1]: cri-containerd-6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e.scope: Deactivated successfully. 
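The StartContainer failure above bottoms out in runc's container init: the write to /proc/self/attr/keycreate (the SELinux keyring label implied by the pod's SELinuxOptions{Type:spc_t,Level:s0}) is rejected with "invalid argument". The following is a minimal, hypothetical Python probe — not part of the logged system — that performs the same procfs write on the host, which can help distinguish "SELinux disabled / label unknown to the loaded policy" from other failure modes; the user and role fields of the label are assumptions for illustration only.

import errno

# Hypothetical label assembled from the pod spec's SELinuxOptions (Type:spc_t, Level:s0);
# the "system_u:system_r" prefix is an assumption, not taken from the log.
LABEL = "system_u:system_r:spc_t:s0"

def probe_keycreate(label: str = LABEL) -> str:
    """Attempt the same write runc performs during container init."""
    try:
        with open("/proc/self/attr/keycreate", "w") as f:
            f.write(label)
        return "kernel accepted the keyring label"
    except OSError as exc:
        if exc.errno == errno.EINVAL:
            return "EINVAL: SELinux is disabled on this host or the label is not defined in the loaded policy"
        return f"write failed: {exc}"

if __name__ == "__main__":
    print(probe_keycreate())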
Nov 1 00:43:20.453855 env[1734]: time="2025-11-01T00:43:20.453814197Z" level=info msg="shim disconnected" id=6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e Nov 1 00:43:20.454107 env[1734]: time="2025-11-01T00:43:20.454092266Z" level=warning msg="cleaning up after shim disconnected" id=6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e namespace=k8s.io Nov 1 00:43:20.454210 env[1734]: time="2025-11-01T00:43:20.454197915Z" level=info msg="cleaning up dead shim" Nov 1 00:43:20.463133 env[1734]: time="2025-11-01T00:43:20.463094661Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4492 runtime=io.containerd.runc.v2\n" Nov 1 00:43:20.463636 env[1734]: time="2025-11-01T00:43:20.463612282Z" level=info msg="TearDown network for sandbox \"6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e\" successfully" Nov 1 00:43:20.463738 env[1734]: time="2025-11-01T00:43:20.463723270Z" level=info msg="StopPodSandbox for \"6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e\" returns successfully" Nov 1 00:43:20.511650 sshd[4458]: Accepted publickey for core from 147.75.109.163 port 43710 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:43:20.513243 sshd[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:20.518769 systemd[1]: Started session-24.scope. Nov 1 00:43:20.519096 systemd-logind[1720]: New session 24 of user core. Nov 1 00:43:20.625285 kubelet[2580]: I1101 00:43:20.625213 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-config-path\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.625285 kubelet[2580]: I1101 00:43:20.625287 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-bpf-maps\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.625503 kubelet[2580]: I1101 00:43:20.625324 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-hostproc\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.625503 kubelet[2580]: I1101 00:43:20.625342 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-etc-cni-netd\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.625503 kubelet[2580]: I1101 00:43:20.625359 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-hubble-tls\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.625503 kubelet[2580]: I1101 00:43:20.625373 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-xtables-lock\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: 
\"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.625503 kubelet[2580]: I1101 00:43:20.625407 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-cgroup\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.625503 kubelet[2580]: I1101 00:43:20.625429 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqhmc\" (UniqueName: \"kubernetes.io/projected/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-kube-api-access-wqhmc\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.625503 kubelet[2580]: I1101 00:43:20.625444 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cni-path\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.625503 kubelet[2580]: I1101 00:43:20.625461 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-clustermesh-secrets\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.625503 kubelet[2580]: I1101 00:43:20.625487 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-run\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.625503 kubelet[2580]: I1101 00:43:20.625500 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-host-proc-sys-kernel\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.626992 kubelet[2580]: I1101 00:43:20.625516 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-ipsec-secrets\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.626992 kubelet[2580]: I1101 00:43:20.625558 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-host-proc-sys-net\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.626992 kubelet[2580]: I1101 00:43:20.625572 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-lib-modules\") pod \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\" (UID: \"5b9bc37f-5555-44a3-ab3a-b0987f3b8c16\") " Nov 1 00:43:20.626992 kubelet[2580]: I1101 00:43:20.625644 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:20.626992 kubelet[2580]: I1101 00:43:20.625880 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cni-path" (OuterVolumeSpecName: "cni-path") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:20.626992 kubelet[2580]: I1101 00:43:20.626490 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:20.626992 kubelet[2580]: I1101 00:43:20.626519 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:20.626992 kubelet[2580]: I1101 00:43:20.626950 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:20.626992 kubelet[2580]: I1101 00:43:20.626983 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-hostproc" (OuterVolumeSpecName: "hostproc") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:20.627263 kubelet[2580]: I1101 00:43:20.627004 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:20.630327 kubelet[2580]: I1101 00:43:20.629873 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:20.631344 kubelet[2580]: I1101 00:43:20.631319 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:20.631462 kubelet[2580]: I1101 00:43:20.631451 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:43:20.631596 kubelet[2580]: I1101 00:43:20.631583 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-kube-api-access-wqhmc" (OuterVolumeSpecName: "kube-api-access-wqhmc") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "kube-api-access-wqhmc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:43:20.631717 kubelet[2580]: I1101 00:43:20.631696 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:43:20.631771 kubelet[2580]: I1101 00:43:20.631708 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:43:20.634024 kubelet[2580]: I1101 00:43:20.633947 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:43:20.635364 kubelet[2580]: I1101 00:43:20.635340 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" (UID: "5b9bc37f-5555-44a3-ab3a-b0987f3b8c16"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:43:20.726282 kubelet[2580]: I1101 00:43:20.726245 2580 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-bpf-maps\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726485 kubelet[2580]: I1101 00:43:20.726461 2580 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-hostproc\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726485 kubelet[2580]: I1101 00:43:20.726480 2580 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-etc-cni-netd\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726485 kubelet[2580]: I1101 00:43:20.726490 2580 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-hubble-tls\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726485 kubelet[2580]: I1101 00:43:20.726498 2580 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-xtables-lock\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726705 kubelet[2580]: I1101 00:43:20.726507 2580 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-cgroup\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726705 kubelet[2580]: I1101 00:43:20.726520 2580 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wqhmc\" (UniqueName: \"kubernetes.io/projected/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-kube-api-access-wqhmc\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726705 kubelet[2580]: I1101 00:43:20.726532 2580 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cni-path\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726705 kubelet[2580]: I1101 00:43:20.726541 2580 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-clustermesh-secrets\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726705 kubelet[2580]: I1101 00:43:20.726552 2580 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-run\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726705 kubelet[2580]: I1101 00:43:20.726562 2580 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-host-proc-sys-kernel\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726705 kubelet[2580]: I1101 00:43:20.726575 2580 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-ipsec-secrets\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726705 kubelet[2580]: I1101 00:43:20.726583 2580 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-host-proc-sys-net\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726705 kubelet[2580]: I1101 00:43:20.726591 2580 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-lib-modules\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.726705 kubelet[2580]: I1101 00:43:20.726599 2580 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16-cilium-config-path\") on node \"ip-172-31-16-189\" DevicePath \"\"" Nov 1 00:43:20.930446 systemd[1]: Removed slice kubepods-burstable-pod5b9bc37f_5555_44a3_ab3a_b0987f3b8c16.slice. Nov 1 00:43:21.028963 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e-shm.mount: Deactivated successfully. Nov 1 00:43:21.029071 systemd[1]: var-lib-kubelet-pods-5b9bc37f\x2d5555\x2d44a3\x2dab3a\x2db0987f3b8c16-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwqhmc.mount: Deactivated successfully. Nov 1 00:43:21.029131 systemd[1]: var-lib-kubelet-pods-5b9bc37f\x2d5555\x2d44a3\x2dab3a\x2db0987f3b8c16-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:43:21.029193 systemd[1]: var-lib-kubelet-pods-5b9bc37f\x2d5555\x2d44a3\x2dab3a\x2db0987f3b8c16-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Nov 1 00:43:21.029265 systemd[1]: var-lib-kubelet-pods-5b9bc37f\x2d5555\x2d44a3\x2dab3a\x2db0987f3b8c16-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:43:21.394603 kubelet[2580]: I1101 00:43:21.394580 2580 scope.go:117] "RemoveContainer" containerID="985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58" Nov 1 00:43:21.396662 env[1734]: time="2025-11-01T00:43:21.396623671Z" level=info msg="RemoveContainer for \"985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58\"" Nov 1 00:43:21.402143 env[1734]: time="2025-11-01T00:43:21.401913097Z" level=info msg="RemoveContainer for \"985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58\" returns successfully" Nov 1 00:43:21.450090 systemd[1]: Created slice kubepods-burstable-pod089f30f7_71d7_49d9_a0ab_9a63caceda63.slice. 
Nov 1 00:43:21.533006 kubelet[2580]: I1101 00:43:21.532946 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/089f30f7-71d7-49d9-a0ab-9a63caceda63-bpf-maps\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533006 kubelet[2580]: I1101 00:43:21.532993 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/089f30f7-71d7-49d9-a0ab-9a63caceda63-cilium-ipsec-secrets\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533006 kubelet[2580]: I1101 00:43:21.533013 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/089f30f7-71d7-49d9-a0ab-9a63caceda63-cilium-cgroup\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533265 kubelet[2580]: I1101 00:43:21.533030 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/089f30f7-71d7-49d9-a0ab-9a63caceda63-etc-cni-netd\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533265 kubelet[2580]: I1101 00:43:21.533045 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/089f30f7-71d7-49d9-a0ab-9a63caceda63-host-proc-sys-net\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533265 kubelet[2580]: I1101 00:43:21.533059 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/089f30f7-71d7-49d9-a0ab-9a63caceda63-hostproc\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533265 kubelet[2580]: I1101 00:43:21.533073 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/089f30f7-71d7-49d9-a0ab-9a63caceda63-cilium-config-path\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533265 kubelet[2580]: I1101 00:43:21.533099 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nztn\" (UniqueName: \"kubernetes.io/projected/089f30f7-71d7-49d9-a0ab-9a63caceda63-kube-api-access-4nztn\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533265 kubelet[2580]: I1101 00:43:21.533114 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/089f30f7-71d7-49d9-a0ab-9a63caceda63-clustermesh-secrets\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533265 kubelet[2580]: I1101 00:43:21.533129 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/089f30f7-71d7-49d9-a0ab-9a63caceda63-hubble-tls\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533265 kubelet[2580]: I1101 00:43:21.533146 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/089f30f7-71d7-49d9-a0ab-9a63caceda63-cni-path\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533265 kubelet[2580]: I1101 00:43:21.533160 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/089f30f7-71d7-49d9-a0ab-9a63caceda63-xtables-lock\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533265 kubelet[2580]: I1101 00:43:21.533174 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/089f30f7-71d7-49d9-a0ab-9a63caceda63-host-proc-sys-kernel\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533265 kubelet[2580]: I1101 00:43:21.533193 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/089f30f7-71d7-49d9-a0ab-9a63caceda63-cilium-run\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.533265 kubelet[2580]: I1101 00:43:21.533208 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/089f30f7-71d7-49d9-a0ab-9a63caceda63-lib-modules\") pod \"cilium-k9hl9\" (UID: \"089f30f7-71d7-49d9-a0ab-9a63caceda63\") " pod="kube-system/cilium-k9hl9" Nov 1 00:43:21.754463 env[1734]: time="2025-11-01T00:43:21.753882448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k9hl9,Uid:089f30f7-71d7-49d9-a0ab-9a63caceda63,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:21.775263 env[1734]: time="2025-11-01T00:43:21.775130745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:21.775263 env[1734]: time="2025-11-01T00:43:21.775201501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:21.775521 env[1734]: time="2025-11-01T00:43:21.775468367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:21.775792 env[1734]: time="2025-11-01T00:43:21.775731601Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/078dc0909787965d8740a77c17c73433f6bcdcc3037135f6906d9f80789c4caa pid=4527 runtime=io.containerd.runc.v2 Nov 1 00:43:21.791460 systemd[1]: Started cri-containerd-078dc0909787965d8740a77c17c73433f6bcdcc3037135f6906d9f80789c4caa.scope. 
Nov 1 00:43:21.819430 env[1734]: time="2025-11-01T00:43:21.819386615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k9hl9,Uid:089f30f7-71d7-49d9-a0ab-9a63caceda63,Namespace:kube-system,Attempt:0,} returns sandbox id \"078dc0909787965d8740a77c17c73433f6bcdcc3037135f6906d9f80789c4caa\"" Nov 1 00:43:21.831473 env[1734]: time="2025-11-01T00:43:21.831435664Z" level=info msg="CreateContainer within sandbox \"078dc0909787965d8740a77c17c73433f6bcdcc3037135f6906d9f80789c4caa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:43:21.862709 env[1734]: time="2025-11-01T00:43:21.862654962Z" level=info msg="CreateContainer within sandbox \"078dc0909787965d8740a77c17c73433f6bcdcc3037135f6906d9f80789c4caa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"98966b9be8df33b277cbd8b155dced695637adc0d1e42756570da93596990a28\"" Nov 1 00:43:21.865262 env[1734]: time="2025-11-01T00:43:21.863589370Z" level=info msg="StartContainer for \"98966b9be8df33b277cbd8b155dced695637adc0d1e42756570da93596990a28\"" Nov 1 00:43:21.896154 systemd[1]: Started cri-containerd-98966b9be8df33b277cbd8b155dced695637adc0d1e42756570da93596990a28.scope. Nov 1 00:43:21.946084 env[1734]: time="2025-11-01T00:43:21.946033307Z" level=info msg="StartContainer for \"98966b9be8df33b277cbd8b155dced695637adc0d1e42756570da93596990a28\" returns successfully" Nov 1 00:43:21.974554 systemd[1]: cri-containerd-98966b9be8df33b277cbd8b155dced695637adc0d1e42756570da93596990a28.scope: Deactivated successfully. Nov 1 00:43:22.032507 env[1734]: time="2025-11-01T00:43:22.031727806Z" level=info msg="shim disconnected" id=98966b9be8df33b277cbd8b155dced695637adc0d1e42756570da93596990a28 Nov 1 00:43:22.032507 env[1734]: time="2025-11-01T00:43:22.031805208Z" level=warning msg="cleaning up after shim disconnected" id=98966b9be8df33b277cbd8b155dced695637adc0d1e42756570da93596990a28 namespace=k8s.io Nov 1 00:43:22.032507 env[1734]: time="2025-11-01T00:43:22.031815244Z" level=info msg="cleaning up dead shim" Nov 1 00:43:22.042384 env[1734]: time="2025-11-01T00:43:22.042325739Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4611 runtime=io.containerd.runc.v2\n" Nov 1 00:43:22.404501 env[1734]: time="2025-11-01T00:43:22.404463899Z" level=info msg="CreateContainer within sandbox \"078dc0909787965d8740a77c17c73433f6bcdcc3037135f6906d9f80789c4caa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:43:22.426971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3312605224.mount: Deactivated successfully. Nov 1 00:43:22.438436 env[1734]: time="2025-11-01T00:43:22.438367642Z" level=info msg="CreateContainer within sandbox \"078dc0909787965d8740a77c17c73433f6bcdcc3037135f6906d9f80789c4caa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cffa8992b9f1dcf0a17bf1853f8414ecf1516e489b91736e0eaf75cf11286713\"" Nov 1 00:43:22.440106 env[1734]: time="2025-11-01T00:43:22.439365933Z" level=info msg="StartContainer for \"cffa8992b9f1dcf0a17bf1853f8414ecf1516e489b91736e0eaf75cf11286713\"" Nov 1 00:43:22.459788 systemd[1]: Started cri-containerd-cffa8992b9f1dcf0a17bf1853f8414ecf1516e489b91736e0eaf75cf11286713.scope. 
Nov 1 00:43:22.496416 env[1734]: time="2025-11-01T00:43:22.496101813Z" level=info msg="StartContainer for \"cffa8992b9f1dcf0a17bf1853f8414ecf1516e489b91736e0eaf75cf11286713\" returns successfully" Nov 1 00:43:22.511152 systemd[1]: cri-containerd-cffa8992b9f1dcf0a17bf1853f8414ecf1516e489b91736e0eaf75cf11286713.scope: Deactivated successfully. Nov 1 00:43:22.544763 env[1734]: time="2025-11-01T00:43:22.544705504Z" level=info msg="shim disconnected" id=cffa8992b9f1dcf0a17bf1853f8414ecf1516e489b91736e0eaf75cf11286713 Nov 1 00:43:22.544763 env[1734]: time="2025-11-01T00:43:22.544762735Z" level=warning msg="cleaning up after shim disconnected" id=cffa8992b9f1dcf0a17bf1853f8414ecf1516e489b91736e0eaf75cf11286713 namespace=k8s.io Nov 1 00:43:22.545101 env[1734]: time="2025-11-01T00:43:22.544775274Z" level=info msg="cleaning up dead shim" Nov 1 00:43:22.554176 env[1734]: time="2025-11-01T00:43:22.554122654Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4674 runtime=io.containerd.runc.v2\n" Nov 1 00:43:22.927333 kubelet[2580]: I1101 00:43:22.927294 2580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b9bc37f-5555-44a3-ab3a-b0987f3b8c16" path="/var/lib/kubelet/pods/5b9bc37f-5555-44a3-ab3a-b0987f3b8c16/volumes" Nov 1 00:43:23.029159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cffa8992b9f1dcf0a17bf1853f8414ecf1516e489b91736e0eaf75cf11286713-rootfs.mount: Deactivated successfully. Nov 1 00:43:23.093790 kubelet[2580]: E1101 00:43:23.093744 2580 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:43:23.409261 env[1734]: time="2025-11-01T00:43:23.408922173Z" level=info msg="CreateContainer within sandbox \"078dc0909787965d8740a77c17c73433f6bcdcc3037135f6906d9f80789c4caa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:43:23.432736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435475435.mount: Deactivated successfully. Nov 1 00:43:23.446978 env[1734]: time="2025-11-01T00:43:23.446927550Z" level=info msg="CreateContainer within sandbox \"078dc0909787965d8740a77c17c73433f6bcdcc3037135f6906d9f80789c4caa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"121dbe2509b4dcaa5e0aec1ef4b935720a36a1939299489a2486f77f5b4c6c04\"" Nov 1 00:43:23.447684 env[1734]: time="2025-11-01T00:43:23.447660007Z" level=info msg="StartContainer for \"121dbe2509b4dcaa5e0aec1ef4b935720a36a1939299489a2486f77f5b4c6c04\"" Nov 1 00:43:23.463918 kubelet[2580]: W1101 00:43:23.463875 2580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b9bc37f_5555_44a3_ab3a_b0987f3b8c16.slice/cri-containerd-985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58.scope WatchSource:0}: container "985d5870d8dfccef2b79f896d6feade526d2184e818b851434633db467044a58" in namespace "k8s.io": not found Nov 1 00:43:23.472485 systemd[1]: Started cri-containerd-121dbe2509b4dcaa5e0aec1ef4b935720a36a1939299489a2486f77f5b4c6c04.scope. 
Nov 1 00:43:23.521574 env[1734]: time="2025-11-01T00:43:23.521528085Z" level=info msg="StartContainer for \"121dbe2509b4dcaa5e0aec1ef4b935720a36a1939299489a2486f77f5b4c6c04\" returns successfully" Nov 1 00:43:23.533797 systemd[1]: cri-containerd-121dbe2509b4dcaa5e0aec1ef4b935720a36a1939299489a2486f77f5b4c6c04.scope: Deactivated successfully. Nov 1 00:43:23.573598 env[1734]: time="2025-11-01T00:43:23.573543258Z" level=info msg="shim disconnected" id=121dbe2509b4dcaa5e0aec1ef4b935720a36a1939299489a2486f77f5b4c6c04 Nov 1 00:43:23.573598 env[1734]: time="2025-11-01T00:43:23.573587158Z" level=warning msg="cleaning up after shim disconnected" id=121dbe2509b4dcaa5e0aec1ef4b935720a36a1939299489a2486f77f5b4c6c04 namespace=k8s.io Nov 1 00:43:23.573598 env[1734]: time="2025-11-01T00:43:23.573596921Z" level=info msg="cleaning up dead shim" Nov 1 00:43:23.583284 env[1734]: time="2025-11-01T00:43:23.583153429Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4731 runtime=io.containerd.runc.v2\n" Nov 1 00:43:24.029336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-121dbe2509b4dcaa5e0aec1ef4b935720a36a1939299489a2486f77f5b4c6c04-rootfs.mount: Deactivated successfully. Nov 1 00:43:24.426432 env[1734]: time="2025-11-01T00:43:24.426388567Z" level=info msg="CreateContainer within sandbox \"078dc0909787965d8740a77c17c73433f6bcdcc3037135f6906d9f80789c4caa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:43:24.458173 env[1734]: time="2025-11-01T00:43:24.458116379Z" level=info msg="CreateContainer within sandbox \"078dc0909787965d8740a77c17c73433f6bcdcc3037135f6906d9f80789c4caa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b1b3e4cdcdf5530f477afcbbf1faa1160dec4600bef8a0151af1bce597a10ee1\"" Nov 1 00:43:24.458668 env[1734]: time="2025-11-01T00:43:24.458639582Z" level=info msg="StartContainer for \"b1b3e4cdcdf5530f477afcbbf1faa1160dec4600bef8a0151af1bce597a10ee1\"" Nov 1 00:43:24.483194 systemd[1]: Started cri-containerd-b1b3e4cdcdf5530f477afcbbf1faa1160dec4600bef8a0151af1bce597a10ee1.scope. Nov 1 00:43:24.511145 systemd[1]: cri-containerd-b1b3e4cdcdf5530f477afcbbf1faa1160dec4600bef8a0151af1bce597a10ee1.scope: Deactivated successfully. Nov 1 00:43:24.513391 env[1734]: time="2025-11-01T00:43:24.513340212Z" level=info msg="StartContainer for \"b1b3e4cdcdf5530f477afcbbf1faa1160dec4600bef8a0151af1bce597a10ee1\" returns successfully" Nov 1 00:43:24.544161 env[1734]: time="2025-11-01T00:43:24.544120885Z" level=info msg="shim disconnected" id=b1b3e4cdcdf5530f477afcbbf1faa1160dec4600bef8a0151af1bce597a10ee1 Nov 1 00:43:24.544458 env[1734]: time="2025-11-01T00:43:24.544391034Z" level=warning msg="cleaning up after shim disconnected" id=b1b3e4cdcdf5530f477afcbbf1faa1160dec4600bef8a0151af1bce597a10ee1 namespace=k8s.io Nov 1 00:43:24.544458 env[1734]: time="2025-11-01T00:43:24.544409507Z" level=info msg="cleaning up dead shim" Nov 1 00:43:24.552126 env[1734]: time="2025-11-01T00:43:24.552078709Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4790 runtime=io.containerd.runc.v2\n" Nov 1 00:43:25.030237 systemd[1]: run-containerd-runc-k8s.io-b1b3e4cdcdf5530f477afcbbf1faa1160dec4600bef8a0151af1bce597a10ee1-runc.OsMwlP.mount: Deactivated successfully. 
Nov 1 00:43:25.030358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1b3e4cdcdf5530f477afcbbf1faa1160dec4600bef8a0151af1bce597a10ee1-rootfs.mount: Deactivated successfully. Nov 1 00:43:25.281310 kubelet[2580]: I1101 00:43:25.280968 2580 setters.go:618] "Node became not ready" node="ip-172-31-16-189" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T00:43:25Z","lastTransitionTime":"2025-11-01T00:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 1 00:43:25.422301 env[1734]: time="2025-11-01T00:43:25.422251281Z" level=info msg="CreateContainer within sandbox \"078dc0909787965d8740a77c17c73433f6bcdcc3037135f6906d9f80789c4caa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:43:25.447356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3229451485.mount: Deactivated successfully. Nov 1 00:43:25.458105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501272096.mount: Deactivated successfully. Nov 1 00:43:25.465447 env[1734]: time="2025-11-01T00:43:25.465389977Z" level=info msg="CreateContainer within sandbox \"078dc0909787965d8740a77c17c73433f6bcdcc3037135f6906d9f80789c4caa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ce1d6d2d6c15512bdc6b7f002ffc441db7ea02d6d3f85782c4ed569612f1f3fb\"" Nov 1 00:43:25.466566 env[1734]: time="2025-11-01T00:43:25.466524384Z" level=info msg="StartContainer for \"ce1d6d2d6c15512bdc6b7f002ffc441db7ea02d6d3f85782c4ed569612f1f3fb\"" Nov 1 00:43:25.485994 systemd[1]: Started cri-containerd-ce1d6d2d6c15512bdc6b7f002ffc441db7ea02d6d3f85782c4ed569612f1f3fb.scope. Nov 1 00:43:25.527097 env[1734]: time="2025-11-01T00:43:25.527032430Z" level=info msg="StartContainer for \"ce1d6d2d6c15512bdc6b7f002ffc441db7ea02d6d3f85782c4ed569612f1f3fb\" returns successfully" Nov 1 00:43:26.252272 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 1 00:43:26.609005 kubelet[2580]: W1101 00:43:26.608960 2580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod089f30f7_71d7_49d9_a0ab_9a63caceda63.slice/cri-containerd-98966b9be8df33b277cbd8b155dced695637adc0d1e42756570da93596990a28.scope WatchSource:0}: task 98966b9be8df33b277cbd8b155dced695637adc0d1e42756570da93596990a28 not found Nov 1 00:43:26.893398 systemd[1]: run-containerd-runc-k8s.io-ce1d6d2d6c15512bdc6b7f002ffc441db7ea02d6d3f85782c4ed569612f1f3fb-runc.ZvyIbi.mount: Deactivated successfully. Nov 1 00:43:29.088006 systemd[1]: run-containerd-runc-k8s.io-ce1d6d2d6c15512bdc6b7f002ffc441db7ea02d6d3f85782c4ed569612f1f3fb-runc.7HgPKQ.mount: Deactivated successfully. Nov 1 00:43:29.107751 systemd-networkd[1453]: lxc_health: Link UP Nov 1 00:43:29.118822 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:43:29.118115 (udev-worker)[5368]: Network interface NamePolicy= disabled on kernel command line. 
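At this point the replacement pod cilium-k9hl9 has walked the full init-container chain (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) and cilium-agent is running, with lxc_health coming up. When reviewing an excerpt like this offline, a short, assumed helper script (not part of the node) can pull the successful StartContainer events out of the containerd lines shown above:

import re
import sys

# Matches the containerd lines above, where the container ID is a 64-hex digest and
# the quotes may appear escaped as \" inside the journal text.
START_OK = re.compile(r'StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully')

def started_containers(lines):
    """Yield container IDs whose StartContainer call was logged as successful."""
    for line in lines:
        match = START_OK.search(line)
        if match:
            yield match.group(1)

if __name__ == "__main__":
    # Assumed usage: python3 started.py < journal-excerpt.txt
    for cid in started_containers(sys.stdin):
        print(cid)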
Nov 1 00:43:29.119083 systemd-networkd[1453]: lxc_health: Gained carrier Nov 1 00:43:29.719758 kubelet[2580]: W1101 00:43:29.718938 2580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod089f30f7_71d7_49d9_a0ab_9a63caceda63.slice/cri-containerd-cffa8992b9f1dcf0a17bf1853f8414ecf1516e489b91736e0eaf75cf11286713.scope WatchSource:0}: task cffa8992b9f1dcf0a17bf1853f8414ecf1516e489b91736e0eaf75cf11286713 not found Nov 1 00:43:29.789093 kubelet[2580]: I1101 00:43:29.789023 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k9hl9" podStartSLOduration=8.7889881 podStartE2EDuration="8.7889881s" podCreationTimestamp="2025-11-01 00:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:26.438504266 +0000 UTC m=+93.687607833" watchObservedRunningTime="2025-11-01 00:43:29.7889881 +0000 UTC m=+97.038091670" Nov 1 00:43:30.302435 systemd-networkd[1453]: lxc_health: Gained IPv6LL Nov 1 00:43:31.409802 systemd[1]: run-containerd-runc-k8s.io-ce1d6d2d6c15512bdc6b7f002ffc441db7ea02d6d3f85782c4ed569612f1f3fb-runc.KgOBD8.mount: Deactivated successfully. Nov 1 00:43:32.852150 kubelet[2580]: W1101 00:43:32.852107 2580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod089f30f7_71d7_49d9_a0ab_9a63caceda63.slice/cri-containerd-121dbe2509b4dcaa5e0aec1ef4b935720a36a1939299489a2486f77f5b4c6c04.scope WatchSource:0}: task 121dbe2509b4dcaa5e0aec1ef4b935720a36a1939299489a2486f77f5b4c6c04 not found Nov 1 00:43:33.641818 systemd[1]: run-containerd-runc-k8s.io-ce1d6d2d6c15512bdc6b7f002ffc441db7ea02d6d3f85782c4ed569612f1f3fb-runc.00Yu30.mount: Deactivated successfully. Nov 1 00:43:35.963076 kubelet[2580]: W1101 00:43:35.963031 2580 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod089f30f7_71d7_49d9_a0ab_9a63caceda63.slice/cri-containerd-b1b3e4cdcdf5530f477afcbbf1faa1160dec4600bef8a0151af1bce597a10ee1.scope WatchSource:0}: task b1b3e4cdcdf5530f477afcbbf1faa1160dec4600bef8a0151af1bce597a10ee1 not found Nov 1 00:43:35.967452 sshd[4458]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:35.970668 systemd[1]: sshd@23-172.31.16.189:22-147.75.109.163:43710.service: Deactivated successfully. Nov 1 00:43:35.971424 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:43:35.972450 systemd-logind[1720]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:43:35.973216 systemd-logind[1720]: Removed session 24. Nov 1 00:43:51.212652 systemd[1]: cri-containerd-78fd9f705e564b13e134a58cedd23c13f140dfd5ae0c934cdba1206fcc4e8bbc.scope: Deactivated successfully. Nov 1 00:43:51.212919 systemd[1]: cri-containerd-78fd9f705e564b13e134a58cedd23c13f140dfd5ae0c934cdba1206fcc4e8bbc.scope: Consumed 3.905s CPU time. Nov 1 00:43:51.236376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78fd9f705e564b13e134a58cedd23c13f140dfd5ae0c934cdba1206fcc4e8bbc-rootfs.mount: Deactivated successfully. 
Nov 1 00:43:51.265696 env[1734]: time="2025-11-01T00:43:51.265649360Z" level=info msg="shim disconnected" id=78fd9f705e564b13e134a58cedd23c13f140dfd5ae0c934cdba1206fcc4e8bbc Nov 1 00:43:51.265696 env[1734]: time="2025-11-01T00:43:51.265698378Z" level=warning msg="cleaning up after shim disconnected" id=78fd9f705e564b13e134a58cedd23c13f140dfd5ae0c934cdba1206fcc4e8bbc namespace=k8s.io Nov 1 00:43:51.266124 env[1734]: time="2025-11-01T00:43:51.265710487Z" level=info msg="cleaning up dead shim" Nov 1 00:43:51.274023 env[1734]: time="2025-11-01T00:43:51.273974745Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5491 runtime=io.containerd.runc.v2\n" Nov 1 00:43:51.471103 kubelet[2580]: I1101 00:43:51.470541 2580 scope.go:117] "RemoveContainer" containerID="78fd9f705e564b13e134a58cedd23c13f140dfd5ae0c934cdba1206fcc4e8bbc" Nov 1 00:43:51.473244 env[1734]: time="2025-11-01T00:43:51.473192510Z" level=info msg="CreateContainer within sandbox \"68a8d9b3aee43a29f72192599278310c4012a9ae3be38d054a58ad7e8bceb20c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 1 00:43:51.494977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714554541.mount: Deactivated successfully. Nov 1 00:43:51.505490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3210227888.mount: Deactivated successfully. Nov 1 00:43:51.513042 env[1734]: time="2025-11-01T00:43:51.512985278Z" level=info msg="CreateContainer within sandbox \"68a8d9b3aee43a29f72192599278310c4012a9ae3be38d054a58ad7e8bceb20c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e980d0332246d1e2887187ccce5ba3e2279f9465d2c85503b72a05638f436ff8\"" Nov 1 00:43:51.519990 env[1734]: time="2025-11-01T00:43:51.519939880Z" level=info msg="StartContainer for \"e980d0332246d1e2887187ccce5ba3e2279f9465d2c85503b72a05638f436ff8\"" Nov 1 00:43:51.536260 systemd[1]: Started cri-containerd-e980d0332246d1e2887187ccce5ba3e2279f9465d2c85503b72a05638f436ff8.scope. 
Nov 1 00:43:51.594804 env[1734]: time="2025-11-01T00:43:51.594756217Z" level=info msg="StartContainer for \"e980d0332246d1e2887187ccce5ba3e2279f9465d2c85503b72a05638f436ff8\" returns successfully" Nov 1 00:43:52.945604 env[1734]: time="2025-11-01T00:43:52.945570651Z" level=info msg="StopPodSandbox for \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\"" Nov 1 00:43:52.946015 env[1734]: time="2025-11-01T00:43:52.945656949Z" level=info msg="TearDown network for sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" successfully" Nov 1 00:43:52.946015 env[1734]: time="2025-11-01T00:43:52.945687359Z" level=info msg="StopPodSandbox for \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" returns successfully" Nov 1 00:43:52.946015 env[1734]: time="2025-11-01T00:43:52.945984570Z" level=info msg="RemovePodSandbox for \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\"" Nov 1 00:43:52.946103 env[1734]: time="2025-11-01T00:43:52.946004412Z" level=info msg="Forcibly stopping sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\"" Nov 1 00:43:52.946103 env[1734]: time="2025-11-01T00:43:52.946069096Z" level=info msg="TearDown network for sandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" successfully" Nov 1 00:43:52.954127 env[1734]: time="2025-11-01T00:43:52.953959582Z" level=info msg="RemovePodSandbox \"97f147fe0e9e02ba0aa5faa7ef7c614f20181e2197a5875b4cc38f9e33941792\" returns successfully" Nov 1 00:43:52.954524 env[1734]: time="2025-11-01T00:43:52.954400731Z" level=info msg="StopPodSandbox for \"349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6\"" Nov 1 00:43:52.954524 env[1734]: time="2025-11-01T00:43:52.954490296Z" level=info msg="TearDown network for sandbox \"349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6\" successfully" Nov 1 00:43:52.954524 env[1734]: time="2025-11-01T00:43:52.954522115Z" level=info msg="StopPodSandbox for \"349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6\" returns successfully" Nov 1 00:43:52.954797 env[1734]: time="2025-11-01T00:43:52.954767573Z" level=info msg="RemovePodSandbox for \"349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6\"" Nov 1 00:43:52.954892 env[1734]: time="2025-11-01T00:43:52.954793052Z" level=info msg="Forcibly stopping sandbox \"349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6\"" Nov 1 00:43:52.954892 env[1734]: time="2025-11-01T00:43:52.954853295Z" level=info msg="TearDown network for sandbox \"349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6\" successfully" Nov 1 00:43:52.960173 env[1734]: time="2025-11-01T00:43:52.960119656Z" level=info msg="RemovePodSandbox \"349074aa011d644d9d9fd64c3576a2dcfd7833cc5c73afe53618e1922184ebd6\" returns successfully" Nov 1 00:43:52.960595 env[1734]: time="2025-11-01T00:43:52.960569229Z" level=info msg="StopPodSandbox for \"6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e\"" Nov 1 00:43:52.960795 env[1734]: time="2025-11-01T00:43:52.960746199Z" level=info msg="TearDown network for sandbox \"6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e\" successfully" Nov 1 00:43:52.960795 env[1734]: time="2025-11-01T00:43:52.960786109Z" level=info msg="StopPodSandbox for \"6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e\" returns successfully" Nov 1 00:43:52.961063 env[1734]: time="2025-11-01T00:43:52.961042599Z" level=info msg="RemovePodSandbox for 
\"6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e\"" Nov 1 00:43:52.961117 env[1734]: time="2025-11-01T00:43:52.961067655Z" level=info msg="Forcibly stopping sandbox \"6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e\"" Nov 1 00:43:52.961157 env[1734]: time="2025-11-01T00:43:52.961140755Z" level=info msg="TearDown network for sandbox \"6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e\" successfully" Nov 1 00:43:52.966299 env[1734]: time="2025-11-01T00:43:52.966253655Z" level=info msg="RemovePodSandbox \"6dc06a37db09189eeec8c823d56d4f6f0807023d3d579eed976d3b753920034e\" returns successfully" Nov 1 00:43:54.309398 kubelet[2580]: E1101 00:43:54.309348 2580 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-189?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 1 00:43:54.825491 systemd[1]: cri-containerd-3638bf90bcf0ceab0a672ebc58249fced35417385e705237472ed807e4f13fad.scope: Deactivated successfully. Nov 1 00:43:54.825816 systemd[1]: cri-containerd-3638bf90bcf0ceab0a672ebc58249fced35417385e705237472ed807e4f13fad.scope: Consumed 3.424s CPU time. Nov 1 00:43:54.849286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3638bf90bcf0ceab0a672ebc58249fced35417385e705237472ed807e4f13fad-rootfs.mount: Deactivated successfully. Nov 1 00:43:54.885895 env[1734]: time="2025-11-01T00:43:54.885838886Z" level=info msg="shim disconnected" id=3638bf90bcf0ceab0a672ebc58249fced35417385e705237472ed807e4f13fad Nov 1 00:43:54.886465 env[1734]: time="2025-11-01T00:43:54.885900517Z" level=warning msg="cleaning up after shim disconnected" id=3638bf90bcf0ceab0a672ebc58249fced35417385e705237472ed807e4f13fad namespace=k8s.io Nov 1 00:43:54.886465 env[1734]: time="2025-11-01T00:43:54.885913365Z" level=info msg="cleaning up dead shim" Nov 1 00:43:54.894523 env[1734]: time="2025-11-01T00:43:54.894479211Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5558 runtime=io.containerd.runc.v2\n" Nov 1 00:43:55.481300 kubelet[2580]: I1101 00:43:55.481271 2580 scope.go:117] "RemoveContainer" containerID="3638bf90bcf0ceab0a672ebc58249fced35417385e705237472ed807e4f13fad" Nov 1 00:43:55.483312 env[1734]: time="2025-11-01T00:43:55.483270175Z" level=info msg="CreateContainer within sandbox \"4c076f773eae25fd6f4d7db8704eb59ad84f224292b6a712b1e376458bccd517\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 1 00:43:55.508555 env[1734]: time="2025-11-01T00:43:55.508501792Z" level=info msg="CreateContainer within sandbox \"4c076f773eae25fd6f4d7db8704eb59ad84f224292b6a712b1e376458bccd517\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d3bfd36d6bcbf895ab766dd7a80daf47306b9bd5ce3a46ad5a41db73f7ea62f5\"" Nov 1 00:43:55.509022 env[1734]: time="2025-11-01T00:43:55.508996351Z" level=info msg="StartContainer for \"d3bfd36d6bcbf895ab766dd7a80daf47306b9bd5ce3a46ad5a41db73f7ea62f5\"" Nov 1 00:43:55.534684 systemd[1]: Started cri-containerd-d3bfd36d6bcbf895ab766dd7a80daf47306b9bd5ce3a46ad5a41db73f7ea62f5.scope. 
Nov 1 00:43:55.586011 env[1734]: time="2025-11-01T00:43:55.585964581Z" level=info msg="StartContainer for \"d3bfd36d6bcbf895ab766dd7a80daf47306b9bd5ce3a46ad5a41db73f7ea62f5\" returns successfully" Nov 1 00:44:04.309906 kubelet[2580]: E1101 00:44:04.309857 2580 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-189?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
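The section closes with the kube-controller-manager and kube-scheduler static-pod containers each being recreated at Attempt:1 after their previous instances exited, alongside repeated lease-update timeouts against the API server at 172.31.16.189:6443. As a hedged aid for spotting such restarts in a saved excerpt (an assumed script, not tooling from this host), the Attempt counter in the CreateContainer metadata can be tallied per container name:

import re
import sys
from collections import defaultdict

# Matches "&ContainerMetadata{Name:<name>,Attempt:<n>,}" as it appears in the lines above.
META = re.compile(r"ContainerMetadata\{Name:([\w-]+),Attempt:(\d+),\}")

def highest_attempt(lines):
    """Return the highest logged Attempt value per container name."""
    attempts = defaultdict(int)
    for line in lines:
        for name, attempt in META.findall(line):
            attempts[name] = max(attempts[name], int(attempt))
    return dict(attempts)

if __name__ == "__main__":
    # Assumed usage: python3 attempts.py < journal-excerpt.txt
    # Any container reported with an attempt greater than 0 was restarted within the excerpt.
    for name, attempt in sorted(highest_attempt(sys.stdin).items()):
        print(f"{name}: highest logged attempt {attempt}")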