Sep 13 00:03:51.024283 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 13 00:03:51.024320 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 12 23:05:37 -00 2025
Sep 13 00:03:51.024342 kernel: efi: EFI v2.70 by EDK II
Sep 13 00:03:51.024357 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x716fcf98
Sep 13 00:03:51.024371 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:03:51.024385 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 13 00:03:51.024400 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 13 00:03:51.024415 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 13 00:03:51.024428 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 13 00:03:51.024442 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 13 00:03:51.024460 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 13 00:03:51.024474 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 13 00:03:51.024488 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 13 00:03:51.024502 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 13 00:03:51.024518 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 13 00:03:51.024537 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 13 00:03:51.024552 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 13 00:03:51.024566 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 13 00:03:51.024580 kernel: printk: bootconsole [uart0] enabled
Sep 13 00:03:51.024595 kernel: NUMA: Failed to initialise from firmware
Sep 13 00:03:51.024609 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 13 00:03:51.024624 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Sep 13 00:03:51.024638 kernel: Zone ranges:
Sep 13 00:03:51.024653 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 13 00:03:51.024667 kernel: DMA32 empty
Sep 13 00:03:51.024681 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 13 00:03:51.024699 kernel: Movable zone start for each node
Sep 13 00:03:51.024714 kernel: Early memory node ranges
Sep 13 00:03:51.024728 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 13 00:03:51.024742 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 13 00:03:51.024757 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 13 00:03:51.024771 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 13 00:03:51.024785 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 13 00:03:51.024800 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 13 00:03:51.024830 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 13 00:03:51.024851 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 13 00:03:51.024867 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 13 00:03:51.024882 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 13 00:03:51.024902 kernel: psci: probing for conduit method from ACPI.
Sep 13 00:03:51.024916 kernel: psci: PSCIv1.0 detected in firmware.
Sep 13 00:03:51.024937 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 00:03:51.024953 kernel: psci: Trusted OS migration not required
Sep 13 00:03:51.024968 kernel: psci: SMC Calling Convention v1.1
Sep 13 00:03:51.024988 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 13 00:03:51.025003 kernel: ACPI: SRAT not present
Sep 13 00:03:51.025019 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Sep 13 00:03:51.025034 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Sep 13 00:03:51.025050 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 13 00:03:51.025065 kernel: Detected PIPT I-cache on CPU0
Sep 13 00:03:51.025081 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 00:03:51.025096 kernel: CPU features: detected: Spectre-v2
Sep 13 00:03:51.025111 kernel: CPU features: detected: Spectre-v3a
Sep 13 00:03:51.025126 kernel: CPU features: detected: Spectre-BHB
Sep 13 00:03:51.025141 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 00:03:51.025160 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 00:03:51.025176 kernel: CPU features: detected: ARM erratum 1742098
Sep 13 00:03:51.025191 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 13 00:03:51.025206 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 13 00:03:51.025221 kernel: Policy zone: Normal
Sep 13 00:03:51.025239 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 00:03:51.025255 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:03:51.025270 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:03:51.025285 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:03:51.025300 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:03:51.025319 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 13 00:03:51.025335 kernel: Memory: 3824460K/4030464K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 206004K reserved, 0K cma-reserved)
Sep 13 00:03:51.025351 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:03:51.025366 kernel: trace event string verifier disabled
Sep 13 00:03:51.025381 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:03:51.025397 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:03:51.025413 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:03:51.025428 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:03:51.025443 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:03:51.025458 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:03:51.025473 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:03:51.025488 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 00:03:51.025507 kernel: GICv3: 96 SPIs implemented
Sep 13 00:03:51.025522 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 00:03:51.025537 kernel: GICv3: Distributor has no Range Selector support
Sep 13 00:03:51.025568 kernel: Root IRQ handler: gic_handle_irq
Sep 13 00:03:51.025587 kernel: GICv3: 16 PPIs implemented
Sep 13 00:03:51.025602 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 13 00:03:51.025617 kernel: ACPI: SRAT not present
Sep 13 00:03:51.025631 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 13 00:03:51.025647 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Sep 13 00:03:51.025662 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Sep 13 00:03:51.025677 kernel: GICv3: using LPI property table @0x00000004000b0000
Sep 13 00:03:51.025697 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 13 00:03:51.025713 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Sep 13 00:03:51.025727 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 13 00:03:51.025743 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 13 00:03:51.025758 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 13 00:03:51.025773 kernel: Console: colour dummy device 80x25
Sep 13 00:03:51.025789 kernel: printk: console [tty1] enabled
Sep 13 00:03:51.025804 kernel: ACPI: Core revision 20210730
Sep 13 00:03:51.025839 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 13 00:03:51.025858 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:03:51.025878 kernel: LSM: Security Framework initializing
Sep 13 00:03:51.025894 kernel: SELinux: Initializing.
Sep 13 00:03:51.025910 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:03:51.025925 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:03:51.025941 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:03:51.025956 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 13 00:03:51.025971 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 13 00:03:51.025987 kernel: Remapping and enabling EFI services.
Sep 13 00:03:51.026002 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:03:51.026017 kernel: Detected PIPT I-cache on CPU1
Sep 13 00:03:51.026037 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 13 00:03:51.026053 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Sep 13 00:03:51.026068 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 13 00:03:51.026084 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:03:51.026099 kernel: SMP: Total of 2 processors activated.
Sep 13 00:03:51.026115 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 00:03:51.026130 kernel: CPU features: detected: 32-bit EL1 Support
Sep 13 00:03:51.026146 kernel: CPU features: detected: CRC32 instructions
Sep 13 00:03:51.026161 kernel: CPU: All CPU(s) started at EL1
Sep 13 00:03:51.026180 kernel: alternatives: patching kernel code
Sep 13 00:03:51.026196 kernel: devtmpfs: initialized
Sep 13 00:03:51.026221 kernel: KASLR disabled due to lack of seed
Sep 13 00:03:51.026241 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:03:51.026258 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:03:51.026274 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:03:51.026289 kernel: SMBIOS 3.0.0 present.
Sep 13 00:03:51.026306 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 13 00:03:51.026322 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:03:51.026338 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 00:03:51.026355 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 00:03:51.026375 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 00:03:51.026391 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:03:51.026407 kernel: audit: type=2000 audit(0.306:1): state=initialized audit_enabled=0 res=1
Sep 13 00:03:51.026423 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:03:51.026439 kernel: cpuidle: using governor menu
Sep 13 00:03:51.026458 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 00:03:51.026475 kernel: ASID allocator initialised with 32768 entries
Sep 13 00:03:51.026490 kernel: ACPI: bus type PCI registered
Sep 13 00:03:51.026507 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:03:51.026522 kernel: Serial: AMBA PL011 UART driver
Sep 13 00:03:51.026538 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:03:51.026555 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 00:03:51.026571 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:03:51.026586 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 00:03:51.026606 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:03:51.026622 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 00:03:51.026638 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:03:51.026654 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:03:51.026670 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:03:51.026686 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:03:51.026702 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:03:51.026718 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:03:51.026734 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:03:51.026750 kernel: ACPI: Interpreter enabled
Sep 13 00:03:51.026770 kernel: ACPI: Using GIC for interrupt routing
Sep 13 00:03:51.026786 kernel: ACPI: MCFG table detected, 1 entries
Sep 13 00:03:51.026802 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 13 00:03:51.027096 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:03:51.027293 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 00:03:51.027484 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 00:03:51.027674 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 13 00:03:51.046969 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 13 00:03:51.047017 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 13 00:03:51.047036 kernel: acpiphp: Slot [1] registered
Sep 13 00:03:51.047053 kernel: acpiphp: Slot [2] registered
Sep 13 00:03:51.047070 kernel: acpiphp: Slot [3] registered
Sep 13 00:03:51.047087 kernel: acpiphp: Slot [4] registered
Sep 13 00:03:51.047103 kernel: acpiphp: Slot [5] registered
Sep 13 00:03:51.047120 kernel: acpiphp: Slot [6] registered
Sep 13 00:03:51.047136 kernel: acpiphp: Slot [7] registered
Sep 13 00:03:51.047161 kernel: acpiphp: Slot [8] registered
Sep 13 00:03:51.047178 kernel: acpiphp: Slot [9] registered
Sep 13 00:03:51.047194 kernel: acpiphp: Slot [10] registered
Sep 13 00:03:51.047210 kernel: acpiphp: Slot [11] registered
Sep 13 00:03:51.047226 kernel: acpiphp: Slot [12] registered
Sep 13 00:03:51.047243 kernel: acpiphp: Slot [13] registered
Sep 13 00:03:51.047260 kernel: acpiphp: Slot [14] registered
Sep 13 00:03:51.047276 kernel: acpiphp: Slot [15] registered
Sep 13 00:03:51.047293 kernel: acpiphp: Slot [16] registered
Sep 13 00:03:51.047314 kernel: acpiphp: Slot [17] registered
Sep 13 00:03:51.047331 kernel: acpiphp: Slot [18] registered
Sep 13 00:03:51.047347 kernel: acpiphp: Slot [19] registered
Sep 13 00:03:51.047363 kernel: acpiphp: Slot [20] registered
Sep 13 00:03:51.047379 kernel: acpiphp: Slot [21] registered
Sep 13 00:03:51.047395 kernel: acpiphp: Slot [22] registered
Sep 13 00:03:51.047411 kernel: acpiphp: Slot [23] registered
Sep 13 00:03:51.047427 kernel: acpiphp: Slot [24] registered
Sep 13 00:03:51.047443 kernel: acpiphp: Slot [25] registered
Sep 13 00:03:51.047460 kernel: acpiphp: Slot [26] registered
Sep 13 00:03:51.047481 kernel: acpiphp: Slot [27] registered
Sep 13 00:03:51.047497 kernel: acpiphp: Slot [28] registered
Sep 13 00:03:51.047513 kernel: acpiphp: Slot [29] registered
Sep 13 00:03:51.047529 kernel: acpiphp: Slot [30] registered
Sep 13 00:03:51.047544 kernel: acpiphp: Slot [31] registered
Sep 13 00:03:51.047560 kernel: PCI host bridge to bus 0000:00
Sep 13 00:03:51.047791 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 13 00:03:51.048012 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 13 00:03:51.048201 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 13 00:03:51.048384 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 13 00:03:51.048624 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 13 00:03:51.048880 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 13 00:03:51.049089 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 13 00:03:51.049300 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 13 00:03:51.049506 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 13 00:03:51.049736 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 13 00:03:51.057157 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 13 00:03:51.057404 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 13 00:03:51.057647 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 13 00:03:51.057906 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 13 00:03:51.058117 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 13 00:03:51.058330 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 13 00:03:51.058536 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 13 00:03:51.058748 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 13 00:03:51.058995 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 13 00:03:51.059213 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 13 00:03:51.059409 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 13 00:03:51.059595 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 13 00:03:51.059789 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 13 00:03:51.059838 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 13 00:03:51.059865 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 13 00:03:51.059882 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 13 00:03:51.059899 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 13 00:03:51.059916 kernel: iommu: Default domain type: Translated
Sep 13 00:03:51.059932 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 00:03:51.059948 kernel: vgaarb: loaded
Sep 13 00:03:51.059965 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:03:51.059988 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:03:51.060004 kernel: PTP clock support registered
Sep 13 00:03:51.060020 kernel: Registered efivars operations
Sep 13 00:03:51.060037 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 00:03:51.060053 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:03:51.060069 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:03:51.060086 kernel: pnp: PnP ACPI init
Sep 13 00:03:51.060315 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 13 00:03:51.060349 kernel: pnp: PnP ACPI: found 1 devices
Sep 13 00:03:51.060366 kernel: NET: Registered PF_INET protocol family
Sep 13 00:03:51.060383 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:03:51.060400 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:03:51.060417 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:03:51.060433 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:03:51.060450 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 00:03:51.060467 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:03:51.060483 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:03:51.060503 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:03:51.060520 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:03:51.060536 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:03:51.060553 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 13 00:03:51.060569 kernel: kvm [1]: HYP mode not available
Sep 13 00:03:51.060585 kernel: Initialise system trusted keyrings
Sep 13 00:03:51.060602 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:03:51.060618 kernel: Key type asymmetric registered
Sep 13 00:03:51.060634 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:03:51.060655 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:03:51.060672 kernel: io scheduler mq-deadline registered
Sep 13 00:03:51.060688 kernel: io scheduler kyber registered
Sep 13 00:03:51.060704 kernel: io scheduler bfq registered
Sep 13 00:03:51.060973 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 13 00:03:51.061001 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 13 00:03:51.061018 kernel: ACPI: button: Power Button [PWRB]
Sep 13 00:03:51.061035 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 13 00:03:51.061051 kernel: ACPI: button: Sleep Button [SLPB]
Sep 13 00:03:51.061074 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:03:51.061091 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 13 00:03:51.061296 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 13 00:03:51.061321 kernel: printk: console [ttyS0] disabled
Sep 13 00:03:51.061339 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 13 00:03:51.061356 kernel: printk: console [ttyS0] enabled
Sep 13 00:03:51.061372 kernel: printk: bootconsole [uart0] disabled
Sep 13 00:03:51.061389 kernel: thunder_xcv, ver 1.0
Sep 13 00:03:51.061405 kernel: thunder_bgx, ver 1.0
Sep 13 00:03:51.061426 kernel: nicpf, ver 1.0
Sep 13 00:03:51.061442 kernel: nicvf, ver 1.0
Sep 13 00:03:51.061690 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 00:03:51.076018 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:03:50 UTC (1757721830)
Sep 13 00:03:51.076064 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:03:51.076082 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:03:51.076099 kernel: Segment Routing with IPv6
Sep 13 00:03:51.076116 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:03:51.076142 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:03:51.076159 kernel: Key type dns_resolver registered
Sep 13 00:03:51.076175 kernel: registered taskstats version 1
Sep 13 00:03:51.076192 kernel: Loading compiled-in X.509 certificates
Sep 13 00:03:51.076208 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 47ac98e9306f36eebe4291d409359a5a5d0c2b9c'
Sep 13 00:03:51.076225 kernel: Key type .fscrypt registered
Sep 13 00:03:51.076241 kernel: Key type fscrypt-provisioning registered
Sep 13 00:03:51.076257 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:03:51.076273 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:03:51.076294 kernel: ima: No architecture policies found
Sep 13 00:03:51.076310 kernel: clk: Disabling unused clocks
Sep 13 00:03:51.076326 kernel: Freeing unused kernel memory: 36416K
Sep 13 00:03:51.076342 kernel: Run /init as init process
Sep 13 00:03:51.076358 kernel: with arguments:
Sep 13 00:03:51.076374 kernel: /init
Sep 13 00:03:51.076390 kernel: with environment:
Sep 13 00:03:51.076405 kernel: HOME=/
Sep 13 00:03:51.076422 kernel: TERM=linux
Sep 13 00:03:51.076442 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:03:51.076464 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:03:51.076486 systemd[1]: Detected virtualization amazon.
Sep 13 00:03:51.076504 systemd[1]: Detected architecture arm64.
Sep 13 00:03:51.076522 systemd[1]: Running in initrd.
Sep 13 00:03:51.076539 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:03:51.076556 systemd[1]: Hostname set to <localhost>.
Sep 13 00:03:51.076579 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:03:51.076598 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:03:51.076615 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:03:51.076633 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:03:51.076650 systemd[1]: Reached target paths.target.
Sep 13 00:03:51.076667 systemd[1]: Reached target slices.target.
Sep 13 00:03:51.076684 systemd[1]: Reached target swap.target.
Sep 13 00:03:51.076702 systemd[1]: Reached target timers.target.
Sep 13 00:03:51.076724 systemd[1]: Listening on iscsid.socket.
Sep 13 00:03:51.076742 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:03:51.076760 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:03:51.076778 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:03:51.076795 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:03:51.076828 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:03:51.076854 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:03:51.076873 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:03:51.076896 systemd[1]: Reached target sockets.target.
Sep 13 00:03:51.076915 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:03:51.076933 systemd[1]: Finished network-cleanup.service.
Sep 13 00:03:51.076950 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:03:51.076968 systemd[1]: Starting systemd-journald.service...
Sep 13 00:03:51.076986 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:03:51.077004 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:03:51.077022 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:03:51.077039 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:03:51.077061 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:03:51.077079 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:03:51.077097 kernel: audit: type=1130 audit(1757721831.019:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.077115 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:03:51.077133 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:03:51.077151 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:03:51.077168 kernel: audit: type=1130 audit(1757721831.068:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.077189 systemd-journald[310]: Journal started
Sep 13 00:03:51.077278 systemd-journald[310]: Runtime Journal (/run/log/journal/ec267ce5d985592649c3de8fa479119b) is 8.0M, max 75.4M, 67.4M free.
Sep 13 00:03:51.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.007943 systemd-modules-load[311]: Inserted module 'overlay'
Sep 13 00:03:51.087867 systemd[1]: Started systemd-journald.service.
Sep 13 00:03:51.097876 kernel: audit: type=1130 audit(1757721831.087:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.104192 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:03:51.108581 systemd-resolved[312]: Positive Trust Anchors:
Sep 13 00:03:51.108595 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:03:51.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.108647 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:03:51.139867 kernel: audit: type=1130 audit(1757721831.108:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.145854 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:03:51.146400 systemd[1]: Starting dracut-cmdline.service...
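The bridge notice just above means bridged traffic bypasses arp/ip/ip6tables until br_netfilter is loaded; the entries that follow show systemd-modules-load inserting it from the initrd's module list. For reference, a minimal sketch of doing the same by hand, persistently; the file name under modules-load.d is an arbitrary choice, the mechanism itself is standard systemd:

    # load the module immediately
    modprobe br_netfilter
    # and on every subsequent boot, via systemd-modules-load
    echo br_netfilter > /etc/modules-load.d/br_netfilter.conf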
Sep 13 00:03:51.162862 kernel: Bridge firewalling registered
Sep 13 00:03:51.163247 systemd-modules-load[311]: Inserted module 'br_netfilter'
Sep 13 00:03:51.175909 dracut-cmdline[328]: dracut-dracut-053
Sep 13 00:03:51.193378 dracut-cmdline[328]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 00:03:51.206764 kernel: SCSI subsystem initialized
Sep 13 00:03:51.227120 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:03:51.227191 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:03:51.232876 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 00:03:51.240180 systemd-modules-load[311]: Inserted module 'dm_multipath'
Sep 13 00:03:51.243876 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:03:51.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.249181 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:03:51.265894 kernel: audit: type=1130 audit(1757721831.246:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.287336 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:03:51.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.304859 kernel: audit: type=1130 audit(1757721831.288:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.386849 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:03:51.408859 kernel: iscsi: registered transport (tcp)
Sep 13 00:03:51.437237 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:03:51.437320 kernel: QLogic iSCSI HBA Driver
Sep 13 00:03:51.573947 kernel: random: crng init done
Sep 13 00:03:51.574327 systemd-resolved[312]: Defaulting to hostname 'linux'.
Sep 13 00:03:51.578652 systemd[1]: Started systemd-resolved.service.
Sep 13 00:03:51.594459 kernel: audit: type=1130 audit(1757721831.579:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.580869 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:03:51.613981 systemd[1]: Finished dracut-cmdline.service.
Sep 13 00:03:51.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.623777 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 00:03:51.631765 kernel: audit: type=1130 audit(1757721831.615:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.694875 kernel: raid6: neonx8 gen() 6424 MB/s
Sep 13 00:03:51.712868 kernel: raid6: neonx8 xor() 4726 MB/s
Sep 13 00:03:51.730877 kernel: raid6: neonx4 gen() 6468 MB/s
Sep 13 00:03:51.748869 kernel: raid6: neonx4 xor() 4894 MB/s
Sep 13 00:03:51.766868 kernel: raid6: neonx2 gen() 5811 MB/s
Sep 13 00:03:51.784871 kernel: raid6: neonx2 xor() 4505 MB/s
Sep 13 00:03:51.802877 kernel: raid6: neonx1 gen() 4482 MB/s
Sep 13 00:03:51.820873 kernel: raid6: neonx1 xor() 3673 MB/s
Sep 13 00:03:51.838862 kernel: raid6: int64x8 gen() 3420 MB/s
Sep 13 00:03:51.856877 kernel: raid6: int64x8 xor() 2079 MB/s
Sep 13 00:03:51.874871 kernel: raid6: int64x4 gen() 3817 MB/s
Sep 13 00:03:51.892875 kernel: raid6: int64x4 xor() 2187 MB/s
Sep 13 00:03:51.910875 kernel: raid6: int64x2 gen() 3587 MB/s
Sep 13 00:03:51.928878 kernel: raid6: int64x2 xor() 1932 MB/s
Sep 13 00:03:51.946873 kernel: raid6: int64x1 gen() 2753 MB/s
Sep 13 00:03:51.966358 kernel: raid6: int64x1 xor() 1447 MB/s
Sep 13 00:03:51.966424 kernel: raid6: using algorithm neonx4 gen() 6468 MB/s
Sep 13 00:03:51.966449 kernel: raid6: .... xor() 4894 MB/s, rmw enabled
Sep 13 00:03:51.968145 kernel: raid6: using neon recovery algorithm
Sep 13 00:03:51.989031 kernel: xor: measuring software checksum speed
Sep 13 00:03:51.989100 kernel: 8regs : 9333 MB/sec
Sep 13 00:03:51.990932 kernel: 32regs : 11115 MB/sec
Sep 13 00:03:51.992906 kernel: arm64_neon : 9571 MB/sec
Sep 13 00:03:51.992944 kernel: xor: using function: 32regs (11115 MB/sec)
Sep 13 00:03:52.093864 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 13 00:03:52.111500 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 00:03:52.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:52.115000 audit: BPF prog-id=7 op=LOAD
Sep 13 00:03:52.123000 audit: BPF prog-id=8 op=LOAD
Sep 13 00:03:52.124895 kernel: audit: type=1130 audit(1757721832.111:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:52.125641 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:03:52.155688 systemd-udevd[509]: Using default interface naming scheme 'v252'.
Sep 13 00:03:52.166539 systemd[1]: Started systemd-udevd.service.
Sep 13 00:03:52.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:52.176079 systemd[1]: Starting dracut-pre-trigger.service...
Sep 13 00:03:52.208065 dracut-pre-trigger[520]: rd.md=0: removing MD RAID activation
Sep 13 00:03:52.277148 systemd[1]: Finished dracut-pre-trigger.service.
Sep 13 00:03:52.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:52.280752 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:03:52.381982 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:03:52.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:52.507863 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 13 00:03:52.507923 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 13 00:03:52.526626 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 13 00:03:52.526915 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 13 00:03:52.526941 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 13 00:03:52.527149 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 13 00:03:52.527391 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:38:37:ef:82:23
Sep 13 00:03:52.532847 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 13 00:03:52.541511 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:03:52.541562 kernel: GPT:9289727 != 16777215
Sep 13 00:03:52.543827 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:03:52.545410 kernel: GPT:9289727 != 16777215
Sep 13 00:03:52.547597 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:03:52.549165 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:03:52.553226 (udev-worker)[559]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:03:52.626848 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (564)
Sep 13 00:03:52.701432 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:03:52.722989 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 13 00:03:52.762109 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 13 00:03:52.783703 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 13 00:03:52.788179 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 13 00:03:52.794937 systemd[1]: Starting disk-uuid.service...
Sep 13 00:03:52.805239 disk-uuid[661]: Primary Header is updated.
Sep 13 00:03:52.805239 disk-uuid[661]: Secondary Entries is updated.
Sep 13 00:03:52.805239 disk-uuid[661]: Secondary Header is updated.
Sep 13 00:03:52.815885 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:03:52.824875 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:03:53.836999 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:03:53.837077 disk-uuid[662]: The operation has completed successfully.
Sep 13 00:03:54.035249 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:03:54.035455 systemd[1]: Finished disk-uuid.service.
Sep 13 00:03:54.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:54.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:54.050690 systemd[1]: Starting verity-setup.service...
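The GPT complaints above are the usual signature of a small disk image written to a larger volume: the backup GPT header still sits at the old image end (sector 9289727) rather than at the true end of the 16777215-sector disk. The disk-uuid entries above show the initrd rewriting the headers itself; done by hand, the equivalent repair is a sketch like the following, assuming sgdisk (from the gdisk package) and the /dev/nvme0n1 disk seen in this log:

    # check the partition table first (read-only)
    sgdisk --verify /dev/nvme0n1
    # relocate the backup header and partition entries to the disk's last sectors
    sgdisk --move-second-header /dev/nvme0n1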
Sep 13 00:03:54.087842 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 13 00:03:54.183930 systemd[1]: Found device dev-mapper-usr.device.
Sep 13 00:03:54.188986 systemd[1]: Mounting sysusr-usr.mount...
Sep 13 00:03:54.197803 systemd[1]: Finished verity-setup.service.
Sep 13 00:03:54.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:54.287853 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 13 00:03:54.288588 systemd[1]: Mounted sysusr-usr.mount.
Sep 13 00:03:54.292002 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 13 00:03:54.296205 systemd[1]: Starting ignition-setup.service...
Sep 13 00:03:54.307368 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 13 00:03:54.331734 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:03:54.331801 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:03:54.331845 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Sep 13 00:03:54.367863 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:03:54.386318 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:03:54.415296 systemd[1]: Finished ignition-setup.service.
Sep 13 00:03:54.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:54.419349 systemd[1]: Starting ignition-fetch-offline.service...
Sep 13 00:03:54.460392 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 13 00:03:54.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:54.463000 audit: BPF prog-id=9 op=LOAD
Sep 13 00:03:54.466745 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:03:54.514460 systemd-networkd[1175]: lo: Link UP
Sep 13 00:03:54.514482 systemd-networkd[1175]: lo: Gained carrier
Sep 13 00:03:54.518500 systemd-networkd[1175]: Enumeration completed
Sep 13 00:03:54.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:54.519010 systemd-networkd[1175]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:03:54.519528 systemd[1]: Started systemd-networkd.service.
Sep 13 00:03:54.521594 systemd[1]: Reached target network.target.
Sep 13 00:03:54.525748 systemd[1]: Starting iscsiuio.service...
Sep 13 00:03:54.544784 systemd-networkd[1175]: eth0: Link UP
Sep 13 00:03:54.546604 systemd-networkd[1175]: eth0: Gained carrier
Sep 13 00:03:54.551151 systemd[1]: Started iscsiuio.service.
Sep 13 00:03:54.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:54.555797 systemd[1]: Starting iscsid.service...
Sep 13 00:03:54.566026 iscsid[1180]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 00:03:54.566026 iscsid[1180]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Sep 13 00:03:54.566026 iscsid[1180]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 13 00:03:54.566026 iscsid[1180]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 13 00:03:54.566026 iscsid[1180]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 00:03:54.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:54.590216 iscsid[1180]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 13 00:03:54.580921 systemd[1]: Started iscsid.service.
Sep 13 00:03:54.593010 systemd-networkd[1175]: eth0: DHCPv4 address 172.31.31.19/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 13 00:03:54.595973 systemd[1]: Starting dracut-initqueue.service...
Sep 13 00:03:54.628614 systemd[1]: Finished dracut-initqueue.service.
Sep 13 00:03:54.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:54.631186 systemd[1]: Reached target remote-fs-pre.target.
Sep 13 00:03:54.634444 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:03:54.638114 systemd[1]: Reached target remote-fs.target.
Sep 13 00:03:54.645923 systemd[1]: Starting dracut-pre-mount.service...
Sep 13 00:03:54.666985 systemd[1]: Finished dracut-pre-mount.service.
Sep 13 00:03:54.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:55.219557 ignition[1148]: Ignition 2.14.0
Sep 13 00:03:55.220131 ignition[1148]: Stage: fetch-offline
Sep 13 00:03:55.220737 ignition[1148]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:03:55.220804 ignition[1148]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:03:55.249621 ignition[1148]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:03:55.252757 ignition[1148]: Ignition finished successfully
Sep 13 00:03:55.256273 systemd[1]: Finished ignition-fetch-offline.service.
Sep 13 00:03:55.271775 kernel: kauditd_printk_skb: 16 callbacks suppressed
Sep 13 00:03:55.273351 kernel: audit: type=1130 audit(1757721835.257:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:55.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
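The iscsid warning above is benign on this instance (no iSCSI targets are in use), and the file it asks for is a single line. A minimal sketch of /etc/iscsi/initiatorname.iscsi; the IQN value below is a hypothetical example, not taken from this system:

    # InitiatorName must be a valid IQN: iqn.<yyyy-mm>.<reversed domain name>[:identifier]
    InitiatorName=iqn.2025-09.io.flatcar:ec2-node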
Sep 13 00:03:55.260228 systemd[1]: Starting ignition-fetch.service...
Sep 13 00:03:55.279974 ignition[1199]: Ignition 2.14.0
Sep 13 00:03:55.280028 ignition[1199]: Stage: fetch
Sep 13 00:03:55.280564 ignition[1199]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:03:55.281674 ignition[1199]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:03:55.300229 ignition[1199]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:03:55.303043 ignition[1199]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:03:55.315065 ignition[1199]: INFO : PUT result: OK
Sep 13 00:03:55.319391 ignition[1199]: DEBUG : parsed url from cmdline: ""
Sep 13 00:03:55.322063 ignition[1199]: INFO : no config URL provided
Sep 13 00:03:55.322063 ignition[1199]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:03:55.322063 ignition[1199]: INFO : no config at "/usr/lib/ignition/user.ign"
Sep 13 00:03:55.322063 ignition[1199]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:03:55.332340 ignition[1199]: INFO : PUT result: OK
Sep 13 00:03:55.332340 ignition[1199]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 13 00:03:55.332340 ignition[1199]: INFO : GET result: OK
Sep 13 00:03:55.332340 ignition[1199]: DEBUG : parsing config with SHA512: 023ba25be553132329aaca237c9ab78d1de3eb4dd85bc4df89b3ff04814bd8bf030346fad51ab0288c3f5460c222e307243607a672891489fd92f4df54db0d90
Sep 13 00:03:55.353023 unknown[1199]: fetched base config from "system"
Sep 13 00:03:55.353292 unknown[1199]: fetched base config from "system"
Sep 13 00:03:55.353309 unknown[1199]: fetched user config from "aws"
Sep 13 00:03:55.356318 ignition[1199]: fetch: fetch complete
Sep 13 00:03:55.356333 ignition[1199]: fetch: fetch passed
Sep 13 00:03:55.356441 ignition[1199]: Ignition finished successfully
Sep 13 00:03:55.371042 systemd[1]: Finished ignition-fetch.service.
Sep 13 00:03:55.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:55.376049 systemd[1]: Starting ignition-kargs.service...
Sep 13 00:03:55.391254 kernel: audit: type=1130 audit(1757721835.372:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:55.400246 ignition[1205]: Ignition 2.14.0
Sep 13 00:03:55.400276 ignition[1205]: Stage: kargs
Sep 13 00:03:55.400605 ignition[1205]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:03:55.400668 ignition[1205]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:03:55.416411 ignition[1205]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:03:55.419200 ignition[1205]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:03:55.422465 ignition[1205]: INFO : PUT result: OK
Sep 13 00:03:55.433465 ignition[1205]: kargs: kargs passed
Sep 13 00:03:55.433626 ignition[1205]: Ignition finished successfully
Sep 13 00:03:55.438878 systemd[1]: Finished ignition-kargs.service.
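The PUT/GET pairs in the Ignition fetch stage above are the IMDSv2 flow: a session token is requested with a PUT, then presented on the metadata GET. The same exchange from a shell, as a sketch using the standard IMDSv2 headers and the user-data URL shown in the log:

    TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/2019-10-01/user-data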
Sep 13 00:03:55.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:55.452878 kernel: audit: type=1130 audit(1757721835.439:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:55.442546 systemd[1]: Starting ignition-disks.service...
Sep 13 00:03:55.461632 ignition[1211]: Ignition 2.14.0
Sep 13 00:03:55.461661 ignition[1211]: Stage: disks
Sep 13 00:03:55.462036 ignition[1211]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:03:55.462100 ignition[1211]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:03:55.481668 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:03:55.484484 ignition[1211]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:03:55.487776 ignition[1211]: INFO : PUT result: OK
Sep 13 00:03:55.493264 ignition[1211]: disks: disks passed
Sep 13 00:03:55.493394 ignition[1211]: Ignition finished successfully
Sep 13 00:03:55.498017 systemd[1]: Finished ignition-disks.service.
Sep 13 00:03:55.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:55.501500 systemd[1]: Reached target initrd-root-device.target.
Sep 13 00:03:55.518318 kernel: audit: type=1130 audit(1757721835.499:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:55.510673 systemd[1]: Reached target local-fs-pre.target.
Sep 13 00:03:55.512679 systemd[1]: Reached target local-fs.target.
Sep 13 00:03:55.514555 systemd[1]: Reached target sysinit.target.
Sep 13 00:03:55.516467 systemd[1]: Reached target basic.target.
Sep 13 00:03:55.521602 systemd[1]: Starting systemd-fsck-root.service...
Sep 13 00:03:55.564783 systemd-fsck[1219]: ROOT: clean, 629/553520 files, 56027/553472 blocks
Sep 13 00:03:55.574046 systemd[1]: Finished systemd-fsck-root.service.
Sep 13 00:03:55.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:55.575661 systemd[1]: Mounting sysroot.mount...
Sep 13 00:03:55.589637 kernel: audit: type=1130 audit(1757721835.572:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:55.611850 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 00:03:55.612723 systemd[1]: Mounted sysroot.mount.
Sep 13 00:03:55.616298 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 00:03:55.638599 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 00:03:55.642423 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Sep 13 00:03:55.642507 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:03:55.642559 systemd[1]: Reached target ignition-diskful.target.
Sep 13 00:03:55.660567 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 00:03:55.692046 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 00:03:55.698361 systemd[1]: Starting initrd-setup-root.service...
Sep 13 00:03:55.719838 initrd-setup-root[1241]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:03:55.731895 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1236)
Sep 13 00:03:55.737767 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:03:55.737866 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:03:55.740056 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Sep 13 00:03:55.743541 initrd-setup-root[1265]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:03:55.763853 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:03:55.766891 initrd-setup-root[1275]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:03:55.771361 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:03:55.782525 initrd-setup-root[1283]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:03:56.025187 systemd[1]: Finished initrd-setup-root.service.
Sep 13 00:03:56.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:56.026866 systemd[1]: Starting ignition-mount.service...
Sep 13 00:03:56.047510 kernel: audit: type=1130 audit(1757721836.023:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:56.028635 systemd[1]: Starting sysroot-boot.service...
Sep 13 00:03:56.056199 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Sep 13 00:03:56.056479 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Sep 13 00:03:56.084762 systemd[1]: Finished sysroot-boot.service.
Sep 13 00:03:56.095940 kernel: audit: type=1130 audit(1757721836.085:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:56.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
res=success' Sep 13 00:03:56.128081 systemd-networkd[1175]: eth0: Gained IPv6LL Sep 13 00:03:56.133335 ignition[1304]: INFO : Ignition 2.14.0 Sep 13 00:03:56.133335 ignition[1304]: INFO : Stage: mount Sep 13 00:03:56.136929 ignition[1304]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:03:56.136929 ignition[1304]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:03:56.155182 ignition[1304]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:03:56.157916 ignition[1304]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:03:56.161332 ignition[1304]: INFO : PUT result: OK Sep 13 00:03:56.167474 ignition[1304]: INFO : mount: mount passed Sep 13 00:03:56.169364 ignition[1304]: INFO : Ignition finished successfully Sep 13 00:03:56.172352 systemd[1]: Finished ignition-mount.service. Sep 13 00:03:56.190328 kernel: audit: type=1130 audit(1757721836.172:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:56.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:56.175594 systemd[1]: Starting ignition-files.service... Sep 13 00:03:56.198562 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:03:56.221857 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1311) Sep 13 00:03:56.228670 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:03:56.228729 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 13 00:03:56.230962 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 13 00:03:56.246843 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 13 00:03:56.252463 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
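Every Ignition stage logs the SHA512 of `base.ign` before parsing it, and the digest is identical across the disks, mount, files, and umount stages above. The same value can be recomputed offline for comparison; a minimal sketch:

```python
import hashlib

# Recompute the digest Ignition prints as "parsing config with SHA512: ...";
# the result should match the 6629d8e8... value repeated at each stage.
with open("/usr/lib/ignition/base.d/base.ign", "rb") as f:
    print(hashlib.sha512(f.read()).hexdigest())
```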
Sep 13 00:03:56.274646 ignition[1330]: INFO : Ignition 2.14.0
Sep 13 00:03:56.276734 ignition[1330]: INFO : Stage: files
Sep 13 00:03:56.278569 ignition[1330]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:03:56.281529 ignition[1330]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:03:56.297386 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:03:56.300108 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:03:56.303066 ignition[1330]: INFO : PUT result: OK
Sep 13 00:03:56.309012 ignition[1330]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:03:56.315286 ignition[1330]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:03:56.315286 ignition[1330]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:03:56.352973 ignition[1330]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:03:56.356212 ignition[1330]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:03:56.361476 unknown[1330]: wrote ssh authorized keys file for user: core
Sep 13 00:03:56.364220 ignition[1330]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:03:56.368127 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 13 00:03:56.372639 ignition[1330]: INFO : GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 13 00:03:56.458674 ignition[1330]: INFO : GET result: OK
Sep 13 00:03:56.813763 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 13 00:03:56.818140 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:03:56.822005 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:03:56.822005 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 13 00:03:56.829638 ignition[1330]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:03:56.841149 ignition[1330]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem255172852"
Sep 13 00:03:56.844656 ignition[1330]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem255172852": device or resource busy
Sep 13 00:03:56.844656 ignition[1330]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem255172852", trying btrfs: device or resource busy
Sep 13 00:03:56.844656 ignition[1330]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem255172852"
Sep 13 00:03:56.844656 ignition[1330]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem255172852"
Sep 13 00:03:56.858887 ignition[1330]: INFO : op(3): [started] unmounting "/mnt/oem255172852"
Sep 13 00:03:56.858887 ignition[1330]: INFO : op(3): [finished] unmounting "/mnt/oem255172852"
Sep 13 00:03:56.858887 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 13 00:03:56.867912 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:03:56.867912 ignition[1330]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 13 00:03:56.881790 systemd[1]: mnt-oem255172852.mount: Deactivated successfully.
Sep 13 00:03:57.083782 ignition[1330]: INFO : GET result: OK
Sep 13 00:03:57.272920 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:03:57.276952 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:03:57.276952 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:03:57.276952 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:03:57.276952 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:03:57.276952 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:03:57.276952 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:03:57.276952 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:03:57.276952 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:03:57.276952 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 00:03:57.276952 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 00:03:57.318018 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 00:03:57.318018 ignition[1330]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:03:57.334990 ignition[1330]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3808135122"
Sep 13 00:03:57.334990 ignition[1330]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3808135122": device or resource busy
Sep 13 00:03:57.334990 ignition[1330]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3808135122", trying btrfs: device or resource busy
Sep 13 00:03:57.334990 ignition[1330]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3808135122"
Sep 13 00:03:57.334990 ignition[1330]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3808135122"
Sep 13 00:03:57.334990 ignition[1330]: INFO : op(6): [started] unmounting "/mnt/oem3808135122"
Sep 13 00:03:57.334990 ignition[1330]: INFO : op(6): [finished] unmounting "/mnt/oem3808135122"
Sep 13 00:03:57.334990 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 00:03:57.334990 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 13 00:03:57.334990 ignition[1330]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:03:57.370223 systemd[1]: mnt-oem3808135122.mount: Deactivated successfully.
Sep 13 00:03:57.394986 ignition[1330]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2303970523"
Sep 13 00:03:57.398045 ignition[1330]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2303970523": device or resource busy
Sep 13 00:03:57.398045 ignition[1330]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2303970523", trying btrfs: device or resource busy
Sep 13 00:03:57.398045 ignition[1330]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2303970523"
Sep 13 00:03:57.409461 ignition[1330]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2303970523"
Sep 13 00:03:57.409461 ignition[1330]: INFO : op(9): [started] unmounting "/mnt/oem2303970523"
Sep 13 00:03:57.409461 ignition[1330]: INFO : op(9): [finished] unmounting "/mnt/oem2303970523"
Sep 13 00:03:57.409461 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 13 00:03:57.409461 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 00:03:57.409461 ignition[1330]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 13 00:03:57.692160 ignition[1330]: INFO : GET result: OK
Sep 13 00:03:58.374046 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 00:03:58.379471 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 13 00:03:58.379471 ignition[1330]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:03:58.404083 ignition[1330]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2366888878"
Sep 13 00:03:58.407958 ignition[1330]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2366888878": device or resource busy
Sep 13 00:03:58.407958 ignition[1330]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2366888878", trying btrfs: device or resource busy
Sep 13 00:03:58.407958 ignition[1330]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2366888878"
Sep 13 00:03:58.407958 ignition[1330]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2366888878"
Sep 13 00:03:58.407958 ignition[1330]: INFO : op(c): [started] unmounting "/mnt/oem2366888878"
Sep 13 00:03:58.407958 ignition[1330]: INFO : op(c): [finished] unmounting "/mnt/oem2366888878"
Sep 13 00:03:58.407958 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 13 00:03:58.407958 ignition[1330]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:03:58.407958 ignition[1330]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:03:58.407958 ignition[1330]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service"
Sep 13 00:03:58.442992 ignition[1330]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 13 00:03:58.442992 ignition[1330]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 13 00:03:58.442992 ignition[1330]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service"
Sep 13 00:03:58.442992 ignition[1330]: INFO : files: op(13): [started] processing unit "nvidia.service"
Sep 13 00:03:58.442992 ignition[1330]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Sep 13 00:03:58.442992 ignition[1330]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Sep 13 00:03:58.442992 ignition[1330]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:03:58.478086 ignition[1330]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:03:58.478086 ignition[1330]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Sep 13 00:03:58.478086 ignition[1330]: INFO : files: op(16): [started] setting preset to enabled for "amazon-ssm-agent.service"
Sep 13 00:03:58.478086 ignition[1330]: INFO : files: op(16): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Sep 13 00:03:58.478086 ignition[1330]: INFO : files: op(17): [started] setting preset to enabled for "nvidia.service"
Sep 13 00:03:58.478086 ignition[1330]: INFO : files: op(17): [finished] setting preset to enabled for "nvidia.service"
Sep 13 00:03:58.478086 ignition[1330]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:03:58.478086 ignition[1330]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:03:58.478086 ignition[1330]: INFO : files: op(19): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 00:03:58.478086 ignition[1330]: INFO : files: op(19): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 00:03:58.478086 ignition[1330]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:03:58.478086 ignition[1330]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:03:58.478086 ignition[1330]: INFO : files: files passed
Sep 13 00:03:58.478086 ignition[1330]: INFO : Ignition finished successfully
Sep 13 00:03:58.547175 kernel: audit: type=1130 audit(1757721838.481:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.479502 systemd[1]: Finished ignition-files.service.
Sep 13 00:03:58.490027 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 00:03:58.552181 initrd-setup-root-after-ignition[1354]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:03:58.512974 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 00:03:58.539920 systemd[1]: Starting ignition-quench.service...
Sep 13 00:03:58.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.560456 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 00:03:58.568674 systemd[1]: Reached target ignition-complete.target.
Sep 13 00:03:58.585624 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 00:03:58.591867 kernel: audit: type=1130 audit(1757721838.567:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.591979 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:03:58.592425 systemd[1]: Finished ignition-quench.service.
Sep 13 00:03:58.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.622590 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:03:58.623066 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 00:03:58.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.627570 systemd[1]: Reached target initrd-fs.target.
Sep 13 00:03:58.630588 systemd[1]: Reached target initrd.target.
Sep 13 00:03:58.634018 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 00:03:58.635635 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 00:03:58.664433 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 00:03:58.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.668337 systemd[1]: Starting initrd-cleanup.service...
Sep 13 00:03:58.687786 systemd[1]: Stopped target nss-lookup.target.
Sep 13 00:03:58.691487 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 00:03:58.693921 systemd[1]: Stopped target timers.target.
Sep 13 00:03:58.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.695936 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:03:58.696883 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 00:03:58.700315 systemd[1]: Stopped target initrd.target.
Sep 13 00:03:58.702782 systemd[1]: Stopped target basic.target.
Sep 13 00:03:58.706512 systemd[1]: Stopped target ignition-complete.target.
Sep 13 00:03:58.716536 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 00:03:58.721037 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 00:03:58.729970 systemd[1]: Stopped target remote-fs.target.
Sep 13 00:03:58.734229 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 00:03:58.739049 systemd[1]: Stopped target sysinit.target.
Sep 13 00:03:58.747741 systemd[1]: Stopped target local-fs.target.
Sep 13 00:03:58.751488 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 00:03:58.755450 systemd[1]: Stopped target swap.target.
Sep 13 00:03:58.758939 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:03:58.760141 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 00:03:58.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.764897 systemd[1]: Stopped target cryptsetup.target.
Sep 13 00:03:58.768106 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:03:58.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.768344 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 00:03:58.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.771862 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:03:58.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.772118 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 00:03:58.776621 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:03:58.776888 systemd[1]: Stopped ignition-files.service.
Sep 13 00:03:58.782239 systemd[1]: Stopping ignition-mount.service...
Sep 13 00:03:58.786028 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:03:58.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.786332 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 00:03:58.797537 systemd[1]: Stopping sysroot-boot.service...
Sep 13 00:03:58.805086 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:03:58.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.806864 systemd[1]: Stopped systemd-udev-trigger.service.
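The audit records interleaved above carry their own clock in the form `audit(EPOCH.millis:serial)`, which is why they can appear slightly out of order next to the journal's wall-clock prefixes. Decoding one stamp from a few lines up shows the two clocks agree (UTC assumed for the wall-clock prefixes):

```python
from datetime import datetime, timezone

# "audit(1757721838.481:35)" appeared with the ignition-files entries above.
stamp = "audit(1757721838.481:35)"
epoch, serial = stamp[len("audit("):-1].split(":")
print(datetime.fromtimestamp(float(epoch), tz=timezone.utc), "serial", serial)
# -> 2025-09-13 00:03:58.481000+00:00 serial 35, matching "Sep 13 00:03:58.481000"
```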
Sep 13 00:03:58.816031 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:03:58.817695 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 00:03:58.838551 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:03:58.839151 systemd[1]: Finished initrd-cleanup.service.
Sep 13 00:03:58.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.849953 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:03:58.853967 ignition[1368]: INFO : Ignition 2.14.0
Sep 13 00:03:58.859184 ignition[1368]: INFO : Stage: umount
Sep 13 00:03:58.859184 ignition[1368]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:03:58.859184 ignition[1368]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:03:58.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.864057 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:03:58.864273 systemd[1]: Stopped sysroot-boot.service.
Sep 13 00:03:58.883991 ignition[1368]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:03:58.886735 ignition[1368]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:03:58.890265 ignition[1368]: INFO : PUT result: OK
Sep 13 00:03:58.896260 ignition[1368]: INFO : umount: umount passed
Sep 13 00:03:58.898470 ignition[1368]: INFO : Ignition finished successfully
Sep 13 00:03:58.901917 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:03:58.902116 systemd[1]: Stopped ignition-mount.service.
Sep 13 00:03:58.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.906398 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:03:58.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.906497 systemd[1]: Stopped ignition-disks.service.
Sep 13 00:03:58.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.910183 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:03:58.910285 systemd[1]: Stopped ignition-kargs.service.
Sep 13 00:03:58.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.913977 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:03:58.914073 systemd[1]: Stopped ignition-fetch.service.
Sep 13 00:03:58.916125 systemd[1]: Stopped target network.target.
Sep 13 00:03:58.919750 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:03:58.919897 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 00:03:58.924080 systemd[1]: Stopped target paths.target.
Sep 13 00:03:58.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.927635 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:03:58.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.928011 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 00:03:58.931979 systemd[1]: Stopped target slices.target.
Sep 13 00:03:58.935308 systemd[1]: Stopped target sockets.target.
Sep 13 00:03:58.938574 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:03:58.938639 systemd[1]: Closed iscsid.socket.
Sep 13 00:03:58.942631 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:03:58.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.942705 systemd[1]: Closed iscsiuio.socket.
Sep 13 00:03:58.944609 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:03:58.944900 systemd[1]: Stopped ignition-setup.service.
Sep 13 00:03:58.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:58.948692 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:03:58.948788 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 00:03:58.988000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 00:03:58.952261 systemd[1]: Stopping systemd-networkd.service...
Sep 13 00:03:58.956668 systemd[1]: Stopping systemd-resolved.service...
Sep 13 00:03:58.963392 systemd-networkd[1175]: eth0: DHCPv6 lease lost
Sep 13 00:03:58.989000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 00:03:58.966638 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:03:58.966885 systemd[1]: Stopped systemd-resolved.service.
Sep 13 00:03:58.977342 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:03:58.979201 systemd[1]: Stopped systemd-networkd.service.
Sep 13 00:03:58.988712 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:03:58.988801 systemd[1]: Closed systemd-networkd.socket.
Sep 13 00:03:59.009452 systemd[1]: Stopping network-cleanup.service...
Sep 13 00:03:59.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:59.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:59.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:59.012015 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:03:59.012156 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 00:03:59.014569 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:03:59.014677 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:03:59.017042 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:03:59.017144 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 00:03:59.019609 systemd[1]: Stopping systemd-udevd.service...
Sep 13 00:03:59.035293 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:03:59.051609 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:03:59.052268 systemd[1]: Stopped network-cleanup.service.
Sep 13 00:03:59.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:59.059729 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:03:59.060124 systemd[1]: Stopped systemd-udevd.service.
Sep 13 00:03:59.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:59.064796 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:03:59.064916 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 00:03:59.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:59.067787 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:03:59.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:59.067898 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 00:03:59.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:59.070029 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:03:59.070215 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 00:03:59.074962 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:03:59.075062 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 00:03:59.078721 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:03:59.078908 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 00:03:59.084512 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 00:03:59.111610 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:03:59.114393 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 00:03:59.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:59.119279 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:03:59.121913 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 00:03:59.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:59.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:59.126483 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 00:03:59.132115 systemd[1]: Starting initrd-switch-root.service...
Sep 13 00:03:59.149332 systemd[1]: Switching root.
Sep 13 00:03:59.184417 iscsid[1180]: iscsid shutting down.
Sep 13 00:03:59.189888 systemd-journald[310]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:03:59.189990 systemd-journald[310]: Journal stopped
Sep 13 00:04:05.286175 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 13 00:04:05.286317 kernel: SELinux: Class anon_inode not defined in policy.
Sep 13 00:04:05.286355 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 13 00:04:05.286388 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:04:05.286421 kernel: SELinux: policy capability open_perms=1
Sep 13 00:04:05.286452 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:04:05.286483 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:04:05.286512 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:04:05.286550 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:04:05.286587 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:04:05.286618 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:04:05.286662 systemd[1]: Successfully loaded SELinux policy in 130.294ms.
Sep 13 00:04:05.286717 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.137ms.
Sep 13 00:04:05.286755 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:04:05.286790 systemd[1]: Detected virtualization amazon.
Sep 13 00:04:05.286869 systemd[1]: Detected architecture arm64.
Sep 13 00:04:05.286905 systemd[1]: Detected first boot.
Sep 13 00:04:05.286948 systemd[1]: Initializing machine ID from VM UUID.
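The "SELinux: policy capability ...=1/0" lines above mirror the flags the kernel exposes through selinuxfs once a policy is loaded. A small sketch to re-check them on a running system (assumes selinuxfs is mounted at the usual /sys/fs/selinux location and the process has permission to read it):

```python
import pathlib

# Each file under policy_capabilities holds "1" (enabled) or "0" (disabled),
# corresponding to the boot-time lines such as "open_perms=1".
caps = pathlib.Path("/sys/fs/selinux/policy_capabilities")
for cap in sorted(caps.iterdir()):
    print(cap.name, cap.read_text().strip())
```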
Sep 13 00:04:05.286981 kernel: kauditd_printk_skb: 39 callbacks suppressed
Sep 13 00:04:05.287015 kernel: audit: type=1400 audit(1757721840.305:76): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:04:05.287103 kernel: audit: type=1400 audit(1757721840.307:77): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:04:05.287153 kernel: audit: type=1334 audit(1757721840.315:78): prog-id=10 op=LOAD
Sep 13 00:04:05.287276 kernel: audit: type=1334 audit(1757721840.315:79): prog-id=10 op=UNLOAD
Sep 13 00:04:05.288023 kernel: audit: type=1334 audit(1757721840.322:80): prog-id=11 op=LOAD
Sep 13 00:04:05.288416 kernel: audit: type=1334 audit(1757721840.322:81): prog-id=11 op=UNLOAD
Sep 13 00:04:05.288791 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 00:04:05.288945 kernel: audit: type=1400 audit(1757721840.562:82): avc: denied { associate } for pid=1401 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 00:04:05.288985 kernel: audit: type=1300 audit(1757721840.562:82): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458b4 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1384 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:04:05.289019 kernel: audit: type=1327 audit(1757721840.562:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:04:05.289059 kernel: audit: type=1400 audit(1757721840.566:83): avc: denied { associate } for pid=1401 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 13 00:04:05.289094 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:04:05.289130 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:04:05.289166 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:04:05.289200 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:04:05.289235 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 00:04:05.289266 systemd[1]: Stopped iscsiuio.service.
Sep 13 00:04:05.289297 systemd[1]: iscsid.service: Deactivated successfully.
Sep 13 00:04:05.289332 systemd[1]: Stopped iscsid.service.
Sep 13 00:04:05.289373 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:04:05.289406 systemd[1]: Stopped initrd-switch-root.service.
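The locksmithd.service warnings above point at two deprecated directives. One way to address them without editing the vendor unit is a drop-in override; the sketch below is illustrative, and the concrete weight/limit values are assumptions, not taken from the log:

```python
import pathlib

# Write a drop-in that clears the deprecated settings and supplies the modern
# equivalents named in the warnings (CPUWeight=, MemoryMax=).
dropin = pathlib.Path("/etc/systemd/system/locksmithd.service.d/override.conf")
dropin.parent.mkdir(parents=True, exist_ok=True)
dropin.write_text(
    "[Service]\n"
    "CPUShares=\n"         # empty assignment resets the deprecated setting
    "MemoryLimit=\n"
    "CPUWeight=100\n"      # assumed replacement value
    "MemoryMax=infinity\n" # assumed replacement value
)
```

A `systemctl daemon-reload` would be needed afterwards for the drop-in to take effect.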
Sep 13 00:04:05.289461 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:04:05.289496 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 00:04:05.289528 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 00:04:05.289562 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Sep 13 00:04:05.289599 systemd[1]: Created slice system-getty.slice.
Sep 13 00:04:05.289633 systemd[1]: Created slice system-modprobe.slice.
Sep 13 00:04:05.289667 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 00:04:05.289698 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 00:04:05.289730 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 00:04:05.289763 systemd[1]: Created slice user.slice.
Sep 13 00:04:05.289795 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:04:05.290134 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 00:04:05.290191 systemd[1]: Set up automount boot.automount.
Sep 13 00:04:05.290292 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 00:04:05.290333 systemd[1]: Stopped target initrd-switch-root.target.
Sep 13 00:04:05.290368 systemd[1]: Stopped target initrd-fs.target.
Sep 13 00:04:05.290400 systemd[1]: Stopped target initrd-root-fs.target.
Sep 13 00:04:05.290431 systemd[1]: Reached target integritysetup.target.
Sep 13 00:04:05.290463 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:04:05.290499 systemd[1]: Reached target remote-fs.target.
Sep 13 00:04:05.290530 systemd[1]: Reached target slices.target.
Sep 13 00:04:05.290560 systemd[1]: Reached target swap.target.
Sep 13 00:04:05.290591 systemd[1]: Reached target torcx.target.
Sep 13 00:04:05.290639 systemd[1]: Reached target veritysetup.target.
Sep 13 00:04:05.290670 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 00:04:05.290707 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 00:04:05.290738 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:04:05.290768 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:04:05.290801 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:04:05.293887 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 00:04:05.293941 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 00:04:05.293976 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 00:04:05.294016 systemd[1]: Mounting media.mount...
Sep 13 00:04:05.294052 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 00:04:05.294083 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 00:04:05.294114 systemd[1]: Mounting tmp.mount...
Sep 13 00:04:05.294145 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 00:04:05.294180 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:04:05.294211 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:04:05.294244 systemd[1]: Starting modprobe@configfs.service...
Sep 13 00:04:05.294274 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:04:05.294311 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:04:05.294346 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:04:05.294377 systemd[1]: Starting modprobe@fuse.service...
Sep 13 00:04:05.294407 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:04:05.294440 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:04:05.294473 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:04:05.294506 systemd[1]: Stopped systemd-fsck-root.service.
Sep 13 00:04:05.294539 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:04:05.294569 kernel: fuse: init (API version 7.34)
Sep 13 00:04:05.294606 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:04:05.294638 systemd[1]: Stopped systemd-journald.service.
Sep 13 00:04:05.294671 systemd[1]: Starting systemd-journald.service...
Sep 13 00:04:05.294703 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:04:05.294734 systemd[1]: Starting systemd-network-generator.service...
Sep 13 00:04:05.294765 kernel: loop: module loaded
Sep 13 00:04:05.294799 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 00:04:05.294877 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:04:05.294913 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:04:05.294945 systemd[1]: Stopped verity-setup.service.
Sep 13 00:04:05.294983 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 00:04:05.295014 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 00:04:05.295045 systemd[1]: Mounted media.mount.
Sep 13 00:04:05.295076 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 00:04:05.295108 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 00:04:05.295138 systemd[1]: Mounted tmp.mount.
Sep 13 00:04:05.295169 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:04:05.295205 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:04:05.295236 systemd[1]: Finished modprobe@configfs.service.
Sep 13 00:04:05.295267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:04:05.295298 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:04:05.295329 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:04:05.295361 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:04:05.295394 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:04:05.295429 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:04:05.295460 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:04:05.295490 systemd[1]: Finished modprobe@fuse.service.
Sep 13 00:04:05.295521 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:04:05.295556 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:04:05.295594 systemd-journald[1480]: Journal started
Sep 13 00:04:05.295705 systemd-journald[1480]: Runtime Journal (/run/log/journal/ec267ce5d985592649c3de8fa479119b) is 8.0M, max 75.4M, 67.4M free.
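With journald back up (the Runtime Journal sizes are printed just above), the entries for this boot can be read back programmatically. A sketch using the journalctl JSON output (assumes journalctl is on PATH and the caller may read the journal):

```python
import json
import subprocess

# "journalctl -b -o json" emits one JSON object per journal entry; tally the
# entries per originating systemd unit for a quick overview of the boot.
out = subprocess.run(
    ["journalctl", "-b", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout
counts = {}
for line in out.splitlines():
    unit = json.loads(line).get("_SYSTEMD_UNIT", "<none>")
    counts[unit] = counts.get(unit, 0) + 1
for unit, n in sorted(counts.items(), key=lambda kv: -kv[1])[:10]:
    print(n, unit)
```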
Sep 13 00:04:00.099000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:04:00.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:04:00.307000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:04:00.315000 audit: BPF prog-id=10 op=LOAD
Sep 13 00:04:00.315000 audit: BPF prog-id=10 op=UNLOAD
Sep 13 00:04:00.322000 audit: BPF prog-id=11 op=LOAD
Sep 13 00:04:00.322000 audit: BPF prog-id=11 op=UNLOAD
Sep 13 00:04:00.562000 audit[1401]: AVC avc: denied { associate } for pid=1401 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 00:04:00.562000 audit[1401]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458b4 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1384 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:04:00.562000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:04:00.566000 audit[1401]: AVC avc: denied { associate } for pid=1401 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 13 00:04:00.566000 audit[1401]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145989 a2=1ed a3=0 items=2 ppid=1384 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:04:00.566000 audit: CWD cwd="/"
Sep 13 00:04:00.566000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:04:00.566000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:04:00.566000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:04:04.803000 audit: BPF prog-id=12 op=LOAD
Sep 13 00:04:04.803000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 00:04:04.803000 audit: BPF prog-id=13 op=LOAD
Sep 13 00:04:04.803000 audit: BPF prog-id=14 op=LOAD
Sep 13 00:04:04.804000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 00:04:04.804000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 00:04:04.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.819000 audit: BPF prog-id=12 op=UNLOAD
Sep 13 00:04:04.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.127000 audit: BPF prog-id=15 op=LOAD
Sep 13 00:04:05.127000 audit: BPF prog-id=16 op=LOAD
Sep 13 00:04:05.127000 audit: BPF prog-id=17 op=LOAD
Sep 13 00:04:05.127000 audit: BPF prog-id=13 op=UNLOAD
Sep 13 00:04:05.127000 audit: BPF prog-id=14 op=UNLOAD
Sep 13 00:04:05.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.272000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:04:05.272000 audit[1480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe8195bc0 a2=4000 a3=1 items=0 ppid=1 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:04:05.272000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:04:05.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:00.550562 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:04:04.800685 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:04:00.552237 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 00:04:04.800709 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device.
Sep 13 00:04:00.552285 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 00:04:04.809739 systemd[1]: systemd-journald.service: Deactivated successfully.
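The PROCTITLE records above hex-encode the audited process's argv with NUL separators. Decoding the torcx-generator record recovers the command line; note the audit field is length-limited, so the last argument comes back truncated:

```python
# Hex string copied from the "audit: PROCTITLE proctitle=2F75..." records above.
hexstr = (
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F"
    "746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261"
    "746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72"
    "756E2F73797374656D642F67656E657261746F722E6C61"
)
print([part.decode() for part in bytes.fromhex(hexstr).split(b"\x00")])
# ['/usr/lib/systemd/system-generators/torcx-generator',
#  '/run/systemd/generator', '/run/systemd/generator.early',
#  '/run/systemd/generator.la']   <- truncated by the audit record, not an error here
```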
Sep 13 00:04:00.552350 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 13 00:04:00.552376 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 13 00:04:00.552436 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 13 00:04:00.552467 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 13 00:04:05.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:00.552894 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 13 00:04:00.552972 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 00:04:00.553007 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 00:04:00.562409 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 13 00:04:00.562533 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 13 00:04:00.562608 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 13 00:04:00.562672 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 13 00:04:00.562732 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 13 00:04:00.562802 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 13 00:04:03.915812 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:03Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:04:03.916384 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:03Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:04:05.332661 systemd[1]: Started systemd-journald.service.
Sep 13 00:04:05.332770 kernel: kauditd_printk_skb: 43 callbacks suppressed
Sep 13 00:04:05.332872 kernel: audit: type=1130 audit(1757721845.307:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.332920 kernel: audit: type=1130 audit(1757721845.319:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:03.916631 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:03Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:04:05.310368 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:04:05.348618 kernel: audit: type=1130 audit(1757721845.331:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:03.917108 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:03Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:04:05.321605 systemd[1]: Finished systemd-network-generator.service.
Sep 13 00:04:03.917217 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:03Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 13 00:04:05.334178 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 00:04:03.917361 /usr/lib/systemd/system-generators/torcx-generator[1401]: time="2025-09-13T00:04:03Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 13 00:04:05.337635 systemd[1]: Reached target network-pre.target. Sep 13 00:04:05.362897 kernel: audit: type=1130 audit(1757721845.335:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:05.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:05.365130 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:04:05.372070 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:04:05.379797 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:04:05.383399 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:04:05.388009 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:04:05.390102 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:04:05.392941 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:04:05.398617 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:04:05.401133 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:04:05.408129 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:04:05.424453 kernel: audit: type=1130 audit(1757721845.409:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:05.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:05.410579 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:04:05.421079 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:04:05.432934 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:04:05.444722 systemd-journald[1480]: Time spent on flushing to /var/log/journal/ec267ce5d985592649c3de8fa479119b is 67.846ms for 1137 entries. Sep 13 00:04:05.444722 systemd-journald[1480]: System Journal (/var/log/journal/ec267ce5d985592649c3de8fa479119b) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:04:05.530734 systemd-journald[1480]: Received client request to flush runtime journal. Sep 13 00:04:05.530877 kernel: audit: type=1130 audit(1757721845.470:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:05.530978 kernel: audit: type=1130 audit(1757721845.517:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:04:05.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:05.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:05.469402 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:04:05.471924 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:04:05.516626 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:04:05.532563 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:04:05.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:05.550885 kernel: audit: type=1130 audit(1757721845.533:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:05.568600 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:04:05.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:05.573432 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:04:05.587007 kernel: audit: type=1130 audit(1757721845.569:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:05.600728 udevadm[1520]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:04:05.711775 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:04:05.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:05.724883 kernel: audit: type=1130 audit(1757721845.712:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:06.345520 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:04:06.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:06.349000 audit: BPF prog-id=18 op=LOAD Sep 13 00:04:06.349000 audit: BPF prog-id=19 op=LOAD Sep 13 00:04:06.349000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:04:06.349000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:04:06.352852 systemd[1]: Starting systemd-udevd.service... Sep 13 00:04:06.391949 systemd-udevd[1521]: Using default interface naming scheme 'v252'. Sep 13 00:04:06.470609 systemd[1]: Started systemd-udevd.service. 
Sep 13 00:04:06.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:06.472000 audit: BPF prog-id=20 op=LOAD Sep 13 00:04:06.476047 systemd[1]: Starting systemd-networkd.service... Sep 13 00:04:06.487000 audit: BPF prog-id=21 op=LOAD Sep 13 00:04:06.487000 audit: BPF prog-id=22 op=LOAD Sep 13 00:04:06.487000 audit: BPF prog-id=23 op=LOAD Sep 13 00:04:06.490703 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:04:06.570073 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 13 00:04:06.571340 (udev-worker)[1524]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:04:06.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:06.595490 systemd[1]: Started systemd-userdbd.service. Sep 13 00:04:06.873110 systemd-networkd[1527]: lo: Link UP Sep 13 00:04:06.873140 systemd-networkd[1527]: lo: Gained carrier Sep 13 00:04:06.874504 systemd-networkd[1527]: Enumeration completed Sep 13 00:04:06.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:06.874738 systemd[1]: Started systemd-networkd.service. Sep 13 00:04:06.874793 systemd-networkd[1527]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:04:06.879504 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:04:06.892043 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:04:06.892283 systemd-networkd[1527]: eth0: Link UP Sep 13 00:04:06.892666 systemd-networkd[1527]: eth0: Gained carrier Sep 13 00:04:06.922200 systemd-networkd[1527]: eth0: DHCPv4 address 172.31.31.19/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:04:07.010026 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:04:07.020772 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:04:07.025698 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:04:07.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:07.088571 lvm[1640]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:04:07.128041 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:04:07.130399 systemd[1]: Reached target cryptsetup.target. Sep 13 00:04:07.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:07.134981 systemd[1]: Starting lvm2-activation.service... Sep 13 00:04:07.145147 lvm[1641]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:04:07.184710 systemd[1]: Finished lvm2-activation.service. 
Sep 13 00:04:07.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:07.187118 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:04:07.189249 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:04:07.189312 systemd[1]: Reached target local-fs.target. Sep 13 00:04:07.191313 systemd[1]: Reached target machines.target. Sep 13 00:04:07.195606 systemd[1]: Starting ldconfig.service... Sep 13 00:04:07.198499 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:04:07.198798 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:04:07.201547 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:04:07.206338 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:04:07.212860 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:04:07.220461 systemd[1]: Starting systemd-sysext.service... Sep 13 00:04:07.223469 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1643 (bootctl) Sep 13 00:04:07.226529 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:04:07.268311 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:04:07.283416 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:04:07.283839 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:04:07.305038 kernel: loop0: detected capacity change from 0 to 211168 Sep 13 00:04:07.314351 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:04:07.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:07.427341 systemd-fsck[1653]: fsck.fat 4.2 (2021-01-31) Sep 13 00:04:07.427341 systemd-fsck[1653]: /dev/nvme0n1p1: 236 files, 117310/258078 clusters Sep 13 00:04:07.432012 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:04:07.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:07.437724 systemd[1]: Mounting boot.mount... Sep 13 00:04:07.487448 systemd[1]: Mounted boot.mount. Sep 13 00:04:07.529339 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:04:07.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:07.633496 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:04:07.634411 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:04:07.636318 systemd[1]: Finished systemd-machine-id-commit.service. 
Sep 13 00:04:07.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:07.668866 kernel: loop1: detected capacity change from 0 to 211168 Sep 13 00:04:07.685303 (sd-sysext)[1668]: Using extensions 'kubernetes'. Sep 13 00:04:07.686965 (sd-sysext)[1668]: Merged extensions into '/usr'. Sep 13 00:04:07.725929 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:04:07.728586 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:04:07.735283 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:04:07.741349 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:04:07.746369 systemd[1]: Starting modprobe@loop.service... Sep 13 00:04:07.748689 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:04:07.750152 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:04:07.757444 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:04:07.760462 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:04:07.760800 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:04:07.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:07.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:07.763719 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:04:07.764072 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:04:07.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:07.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:07.767080 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:04:07.767366 systemd[1]: Finished modprobe@loop.service. Sep 13 00:04:07.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:07.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:07.773103 systemd[1]: Finished systemd-sysext.service. Sep 13 00:04:07.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:04:07.777732 systemd[1]: Starting ensure-sysext.service... Sep 13 00:04:07.779527 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:04:07.779682 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:04:07.782186 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:04:07.805069 systemd[1]: Reloading. Sep 13 00:04:07.853892 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:04:07.885739 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:04:07.895611 /usr/lib/systemd/system-generators/torcx-generator[1697]: time="2025-09-13T00:04:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:04:07.895688 /usr/lib/systemd/system-generators/torcx-generator[1697]: time="2025-09-13T00:04:07Z" level=info msg="torcx already run" Sep 13 00:04:07.934873 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:04:08.149126 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:04:08.149453 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:04:08.195971 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:04:08.361000 audit: BPF prog-id=24 op=LOAD Sep 13 00:04:08.361000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:04:08.362000 audit: BPF prog-id=25 op=LOAD Sep 13 00:04:08.362000 audit: BPF prog-id=26 op=LOAD Sep 13 00:04:08.362000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:04:08.362000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:04:08.371000 audit: BPF prog-id=27 op=LOAD Sep 13 00:04:08.372000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:04:08.372000 audit: BPF prog-id=28 op=LOAD Sep 13 00:04:08.373000 audit: BPF prog-id=29 op=LOAD Sep 13 00:04:08.373000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:04:08.373000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:04:08.375000 audit: BPF prog-id=30 op=LOAD Sep 13 00:04:08.375000 audit: BPF prog-id=31 op=LOAD Sep 13 00:04:08.375000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:04:08.376000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:04:08.382000 audit: BPF prog-id=32 op=LOAD Sep 13 00:04:08.382000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:04:08.423644 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:04:08.427608 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:04:08.433073 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:04:08.439136 systemd[1]: Starting modprobe@loop.service... Sep 13 00:04:08.441309 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 00:04:08.442140 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:04:08.446367 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:04:08.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.450667 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:04:08.451408 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:04:08.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.456356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:04:08.456719 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:04:08.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.459932 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:04:08.460289 systemd[1]: Finished modprobe@loop.service. Sep 13 00:04:08.466500 systemd[1]: Starting audit-rules.service... Sep 13 00:04:08.471661 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:04:08.481072 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:04:08.488000 audit: BPF prog-id=33 op=LOAD Sep 13 00:04:08.484875 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:04:08.485195 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:04:08.494171 systemd[1]: Starting systemd-resolved.service... Sep 13 00:04:08.497000 audit: BPF prog-id=34 op=LOAD Sep 13 00:04:08.502362 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:04:08.509147 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:04:08.526072 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:04:08.531581 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:04:08.536802 systemd[1]: Starting modprobe@drm.service... 
Sep 13 00:04:08.542327 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:04:08.548718 systemd[1]: Starting modprobe@loop.service... Sep 13 00:04:08.552529 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:04:08.552701 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:04:08.555102 systemd[1]: Finished ensure-sysext.service. Sep 13 00:04:08.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.558322 systemd[1]: Finished clean-ca-certificates.service. Sep 13 00:04:08.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.563532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:04:08.563862 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:04:08.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.567074 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:04:08.567417 systemd[1]: Finished modprobe@drm.service. Sep 13 00:04:08.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.573224 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:04:08.588000 audit[1760]: SYSTEM_BOOT pid=1760 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.595232 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:04:08.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.600163 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:04:08.600458 systemd[1]: Finished modprobe@loop.service. Sep 13 00:04:08.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:04:08.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.604710 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:04:08.611214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:04:08.611492 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:04:08.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.614584 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:04:08.630929 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:04:08.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.752691 systemd-resolved[1758]: Positive Trust Anchors: Sep 13 00:04:08.753677 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:04:08.756551 systemd-resolved[1758]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:04:08.756880 systemd-resolved[1758]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:04:08.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:08.758752 systemd[1]: Reached target time-set.target. Sep 13 00:04:08.763330 augenrules[1779]: No rules Sep 13 00:04:08.761000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:04:08.761000 audit[1779]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff749f9f0 a2=420 a3=0 items=0 ppid=1755 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:08.761000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:04:08.766095 systemd[1]: Finished audit-rules.service. Sep 13 00:04:08.826483 systemd-resolved[1758]: Defaulting to hostname 'linux'. Sep 13 00:04:08.829214 ldconfig[1642]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Sep 13 00:04:08.831186 systemd[1]: Started systemd-resolved.service. Sep 13 00:04:08.833463 systemd[1]: Reached target network.target. Sep 13 00:04:08.835508 systemd[1]: Reached target nss-lookup.target. Sep 13 00:04:08.843410 systemd[1]: Finished ldconfig.service. Sep 13 00:04:08.848222 systemd[1]: Starting systemd-update-done.service... Sep 13 00:04:08.863082 systemd-networkd[1527]: eth0: Gained IPv6LL Sep 13 00:04:08.866383 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:04:08.869555 systemd[1]: Finished systemd-update-done.service. Sep 13 00:04:08.872343 systemd[1]: Reached target network-online.target. Sep 13 00:04:08.874516 systemd[1]: Reached target sysinit.target. Sep 13 00:04:08.876606 systemd[1]: Started motdgen.path. Sep 13 00:04:08.878402 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:04:08.881439 systemd[1]: Started logrotate.timer. Sep 13 00:04:08.883359 systemd[1]: Started mdadm.timer. Sep 13 00:04:08.885092 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:04:08.887219 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:04:08.887274 systemd[1]: Reached target paths.target. Sep 13 00:04:08.889055 systemd[1]: Reached target timers.target. Sep 13 00:04:08.891426 systemd[1]: Listening on dbus.socket. Sep 13 00:04:08.895464 systemd[1]: Starting docker.socket... Sep 13 00:04:08.903475 systemd[1]: Listening on sshd.socket. Sep 13 00:04:08.905885 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:04:08.907129 systemd[1]: Listening on docker.socket. Sep 13 00:04:08.909368 systemd-timesyncd[1759]: Contacted time server 45.79.189.79:123 (0.flatcar.pool.ntp.org). Sep 13 00:04:08.909539 systemd-timesyncd[1759]: Initial clock synchronization to Sat 2025-09-13 00:04:08.928388 UTC. Sep 13 00:04:08.910324 systemd[1]: Reached target sockets.target. Sep 13 00:04:08.912551 systemd[1]: Reached target basic.target. Sep 13 00:04:08.914658 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:04:08.914954 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:04:08.917626 systemd[1]: Started amazon-ssm-agent.service. Sep 13 00:04:08.922985 systemd[1]: Starting containerd.service... Sep 13 00:04:08.928897 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 13 00:04:08.934463 systemd[1]: Starting dbus.service... Sep 13 00:04:08.939296 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:04:08.946259 systemd[1]: Starting extend-filesystems.service... Sep 13 00:04:08.953681 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:04:08.956656 systemd[1]: Starting kubelet.service... Sep 13 00:04:08.960806 systemd[1]: Starting motdgen.service... Sep 13 00:04:08.965606 systemd[1]: Started nvidia.service. Sep 13 00:04:08.976231 systemd[1]: Starting prepare-helm.service... Sep 13 00:04:08.980894 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:04:08.983727 jq[1792]: false Sep 13 00:04:08.994219 systemd[1]: Starting sshd-keygen.service... Sep 13 00:04:09.009087 systemd[1]: Starting systemd-logind.service... 
Sep 13 00:04:09.010998 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:04:09.011160 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:04:09.012791 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:04:09.014539 systemd[1]: Starting update-engine.service... Sep 13 00:04:09.022927 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:04:09.049850 jq[1804]: true Sep 13 00:04:09.055450 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:04:09.055889 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:04:09.087213 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:04:09.087588 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:04:09.163368 tar[1807]: linux-arm64/LICENSE Sep 13 00:04:09.164209 tar[1807]: linux-arm64/helm Sep 13 00:04:09.183651 extend-filesystems[1793]: Found loop1 Sep 13 00:04:09.189942 extend-filesystems[1793]: Found nvme0n1 Sep 13 00:04:09.189942 extend-filesystems[1793]: Found nvme0n1p1 Sep 13 00:04:09.189942 extend-filesystems[1793]: Found nvme0n1p2 Sep 13 00:04:09.189942 extend-filesystems[1793]: Found nvme0n1p3 Sep 13 00:04:09.189942 extend-filesystems[1793]: Found usr Sep 13 00:04:09.189942 extend-filesystems[1793]: Found nvme0n1p4 Sep 13 00:04:09.189942 extend-filesystems[1793]: Found nvme0n1p6 Sep 13 00:04:09.189942 extend-filesystems[1793]: Found nvme0n1p7 Sep 13 00:04:09.189942 extend-filesystems[1793]: Found nvme0n1p9 Sep 13 00:04:09.189942 extend-filesystems[1793]: Checking size of /dev/nvme0n1p9 Sep 13 00:04:09.236161 jq[1808]: true Sep 13 00:04:09.283649 dbus-daemon[1791]: [system] SELinux support is enabled Sep 13 00:04:09.287694 systemd[1]: Started dbus.service. Sep 13 00:04:09.293048 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:04:09.293097 systemd[1]: Reached target system-config.target. Sep 13 00:04:09.295264 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:04:09.295306 systemd[1]: Reached target user-config.target. Sep 13 00:04:09.332793 dbus-daemon[1791]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1527 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 13 00:04:09.340042 extend-filesystems[1793]: Resized partition /dev/nvme0n1p9 Sep 13 00:04:09.342557 systemd[1]: Starting systemd-hostnamed.service... Sep 13 00:04:09.361229 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:04:09.361717 systemd[1]: Finished motdgen.service. Sep 13 00:04:09.379952 extend-filesystems[1842]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 00:04:09.446866 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 13 00:04:09.466614 update_engine[1801]: I0913 00:04:09.465854 1801 main.cc:92] Flatcar Update Engine starting Sep 13 00:04:09.474266 systemd[1]: Started update-engine.service. 
Sep 13 00:04:09.479754 systemd[1]: Started locksmithd.service. Sep 13 00:04:09.484135 update_engine[1801]: I0913 00:04:09.483721 1801 update_check_scheduler.cc:74] Next update check in 6m39s Sep 13 00:04:09.543880 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 13 00:04:09.563059 extend-filesystems[1842]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 13 00:04:09.563059 extend-filesystems[1842]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:04:09.563059 extend-filesystems[1842]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 13 00:04:09.576052 extend-filesystems[1793]: Resized filesystem in /dev/nvme0n1p9 Sep 13 00:04:09.576076 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:04:09.578544 systemd[1]: Finished extend-filesystems.service. Sep 13 00:04:09.585088 bash[1859]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:04:09.586569 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:04:09.603542 systemd-logind[1800]: Watching system buttons on /dev/input/event0 (Power Button) Sep 13 00:04:09.603615 systemd-logind[1800]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 13 00:04:09.605205 systemd-logind[1800]: New seat seat0. Sep 13 00:04:09.610575 systemd[1]: Started systemd-logind.service. Sep 13 00:04:09.692076 systemd[1]: nvidia.service: Deactivated successfully. Sep 13 00:04:09.723058 amazon-ssm-agent[1788]: 2025/09/13 00:04:09 Failed to load instance info from vault. RegistrationKey does not exist. Sep 13 00:04:09.758432 amazon-ssm-agent[1788]: Initializing new seelog logger Sep 13 00:04:09.764335 amazon-ssm-agent[1788]: New Seelog Logger Creation Complete Sep 13 00:04:09.765800 amazon-ssm-agent[1788]: 2025/09/13 00:04:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:04:09.768921 amazon-ssm-agent[1788]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:04:09.770939 amazon-ssm-agent[1788]: 2025/09/13 00:04:09 processing appconfig overrides Sep 13 00:04:09.933943 env[1810]: time="2025-09-13T00:04:09.932098865Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:04:10.092071 env[1810]: time="2025-09-13T00:04:10.091975859Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:04:10.092420 env[1810]: time="2025-09-13T00:04:10.092285760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:04:10.111261 env[1810]: time="2025-09-13T00:04:10.111165124Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:04:10.111261 env[1810]: time="2025-09-13T00:04:10.111245086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:04:10.111799 env[1810]: time="2025-09-13T00:04:10.111714395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:04:10.111799 env[1810]: time="2025-09-13T00:04:10.111785334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:04:10.112022 env[1810]: time="2025-09-13T00:04:10.111869922Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:04:10.112022 env[1810]: time="2025-09-13T00:04:10.111900424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:04:10.112197 env[1810]: time="2025-09-13T00:04:10.112137919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:04:10.112751 env[1810]: time="2025-09-13T00:04:10.112683994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:04:10.125284 env[1810]: time="2025-09-13T00:04:10.125188994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:04:10.125284 env[1810]: time="2025-09-13T00:04:10.125272175Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:04:10.125477 env[1810]: time="2025-09-13T00:04:10.125439283Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:04:10.125580 env[1810]: time="2025-09-13T00:04:10.125470398Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:04:10.131240 dbus-daemon[1791]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 13 00:04:10.131509 systemd[1]: Started systemd-hostnamed.service. Sep 13 00:04:10.136010 dbus-daemon[1791]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1843 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 13 00:04:10.141953 systemd[1]: Starting polkit.service... Sep 13 00:04:10.153621 env[1810]: time="2025-09-13T00:04:10.153544543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:04:10.153794 env[1810]: time="2025-09-13T00:04:10.153628518Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:04:10.153794 env[1810]: time="2025-09-13T00:04:10.153673809Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:04:10.153794 env[1810]: time="2025-09-13T00:04:10.153750118Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:04:10.153997 env[1810]: time="2025-09-13T00:04:10.153791217Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:04:10.153997 env[1810]: time="2025-09-13T00:04:10.153878975Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 13 00:04:10.153997 env[1810]: time="2025-09-13T00:04:10.153918692Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:04:10.154716 env[1810]: time="2025-09-13T00:04:10.154454183Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:04:10.154716 env[1810]: time="2025-09-13T00:04:10.154531322Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:04:10.160009 env[1810]: time="2025-09-13T00:04:10.154567879Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:04:10.160174 env[1810]: time="2025-09-13T00:04:10.160018308Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:04:10.160174 env[1810]: time="2025-09-13T00:04:10.160061761Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:04:10.160393 env[1810]: time="2025-09-13T00:04:10.160328197Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:04:10.160643 env[1810]: time="2025-09-13T00:04:10.160583881Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:04:10.165727 env[1810]: time="2025-09-13T00:04:10.165604441Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:04:10.165919 env[1810]: time="2025-09-13T00:04:10.165744735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:04:10.165919 env[1810]: time="2025-09-13T00:04:10.165789534Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:04:10.170918 env[1810]: time="2025-09-13T00:04:10.167787251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:04:10.171108 env[1810]: time="2025-09-13T00:04:10.170948646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:04:10.171108 env[1810]: time="2025-09-13T00:04:10.171001830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:04:10.171108 env[1810]: time="2025-09-13T00:04:10.171033822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:04:10.171108 env[1810]: time="2025-09-13T00:04:10.171065694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:04:10.171108 env[1810]: time="2025-09-13T00:04:10.171096917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:04:10.171366 env[1810]: time="2025-09-13T00:04:10.171127972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:04:10.171366 env[1810]: time="2025-09-13T00:04:10.171165406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:04:10.171366 env[1810]: time="2025-09-13T00:04:10.171203477Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Sep 13 00:04:10.171586 env[1810]: time="2025-09-13T00:04:10.171526220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:04:10.171669 env[1810]: time="2025-09-13T00:04:10.171587021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:04:10.171669 env[1810]: time="2025-09-13T00:04:10.171622821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:04:10.171669 env[1810]: time="2025-09-13T00:04:10.171654849Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:04:10.171904 env[1810]: time="2025-09-13T00:04:10.171690878Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:04:10.171904 env[1810]: time="2025-09-13T00:04:10.171720071Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:04:10.171904 env[1810]: time="2025-09-13T00:04:10.171757385Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:04:10.171904 env[1810]: time="2025-09-13T00:04:10.171863188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:04:10.179377 env[1810]: time="2025-09-13T00:04:10.179212294Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:04:10.179377 env[1810]: time="2025-09-13T00:04:10.179368566Z" level=info msg="Connect containerd service" Sep 13 00:04:10.181585 env[1810]: time="2025-09-13T00:04:10.179442846Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:04:10.191722 env[1810]: time="2025-09-13T00:04:10.191641620Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:04:10.192508 env[1810]: time="2025-09-13T00:04:10.192190098Z" level=info msg="Start subscribing containerd event" Sep 13 00:04:10.192508 env[1810]: time="2025-09-13T00:04:10.192303361Z" level=info msg="Start recovering state" Sep 13 00:04:10.192508 env[1810]: time="2025-09-13T00:04:10.192421671Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:04:10.192803 env[1810]: time="2025-09-13T00:04:10.192552137Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:04:10.192776 systemd[1]: Started containerd.service. Sep 13 00:04:10.196471 env[1810]: time="2025-09-13T00:04:10.194534057Z" level=info msg="Start event monitor" Sep 13 00:04:10.196471 env[1810]: time="2025-09-13T00:04:10.194691314Z" level=info msg="Start snapshots syncer" Sep 13 00:04:10.196471 env[1810]: time="2025-09-13T00:04:10.194722429Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:04:10.196471 env[1810]: time="2025-09-13T00:04:10.194771636Z" level=info msg="Start streaming server" Sep 13 00:04:10.212780 polkitd[1933]: Started polkitd version 121 Sep 13 00:04:10.242626 env[1810]: time="2025-09-13T00:04:10.242548108Z" level=info msg="containerd successfully booted in 0.414736s" Sep 13 00:04:10.257127 polkitd[1933]: Loading rules from directory /etc/polkit-1/rules.d Sep 13 00:04:10.259744 polkitd[1933]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 13 00:04:10.266945 polkitd[1933]: Finished loading, compiling and executing 2 rules Sep 13 00:04:10.271220 dbus-daemon[1791]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 00:04:10.271522 systemd[1]: Started polkit.service. Sep 13 00:04:10.276912 polkitd[1933]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 13 00:04:10.329394 systemd-resolved[1758]: System hostname changed to 'ip-172-31-31-19'. Sep 13 00:04:10.329403 systemd-hostnamed[1843]: Hostname set to (transient) Sep 13 00:04:10.377615 coreos-metadata[1790]: Sep 13 00:04:10.377 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 13 00:04:10.380071 coreos-metadata[1790]: Sep 13 00:04:10.379 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Sep 13 00:04:10.381356 coreos-metadata[1790]: Sep 13 00:04:10.381 INFO Fetch successful Sep 13 00:04:10.381356 coreos-metadata[1790]: Sep 13 00:04:10.381 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 13 00:04:10.382462 coreos-metadata[1790]: Sep 13 00:04:10.382 INFO Fetch successful Sep 13 00:04:10.386901 unknown[1790]: wrote ssh authorized keys file for user: core Sep 13 00:04:10.415993 update-ssh-keys[1969]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:04:10.417423 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Sep 13 00:04:10.650099 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO Create new startup processor Sep 13 00:04:10.657674 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [LongRunningPluginsManager] registered plugins: {} Sep 13 00:04:10.658645 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO Initializing bookkeeping folders Sep 13 00:04:10.658865 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO removing the completed state files Sep 13 00:04:10.659011 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO Initializing bookkeeping folders for long running plugins Sep 13 00:04:10.659138 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Sep 13 00:04:10.659271 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO Initializing healthcheck folders for long running plugins Sep 13 00:04:10.659439 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO Initializing locations for inventory plugin Sep 13 00:04:10.659615 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO Initializing default location for custom inventory Sep 13 00:04:10.659758 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO Initializing default location for file inventory Sep 13 00:04:10.659978 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO Initializing default location for role inventory Sep 13 00:04:10.660126 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO Init the cloudwatchlogs publisher Sep 13 00:04:10.660269 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [instanceID=i-0cad2ccc02b1dfa06] Successfully loaded platform independent plugin aws:runPowerShellScript Sep 13 00:04:10.660406 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [instanceID=i-0cad2ccc02b1dfa06] Successfully loaded platform independent plugin aws:updateSsmAgent Sep 13 00:04:10.660532 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [instanceID=i-0cad2ccc02b1dfa06] Successfully loaded platform independent plugin aws:runDockerAction Sep 13 00:04:10.660651 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [instanceID=i-0cad2ccc02b1dfa06] Successfully loaded platform independent plugin aws:configurePackage Sep 13 00:04:10.660860 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [instanceID=i-0cad2ccc02b1dfa06] Successfully loaded platform independent plugin aws:downloadContent Sep 13 00:04:10.662924 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [instanceID=i-0cad2ccc02b1dfa06] Successfully loaded platform independent plugin aws:softwareInventory Sep 13 00:04:10.663140 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [instanceID=i-0cad2ccc02b1dfa06] Successfully loaded platform independent plugin aws:configureDocker Sep 13 00:04:10.663280 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [instanceID=i-0cad2ccc02b1dfa06] Successfully loaded platform independent plugin aws:refreshAssociation Sep 13 00:04:10.663448 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [instanceID=i-0cad2ccc02b1dfa06] Successfully loaded platform independent plugin aws:runDocument Sep 13 00:04:10.663587 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [instanceID=i-0cad2ccc02b1dfa06] Successfully loaded platform dependent plugin aws:runShellScript Sep 13 00:04:10.663721 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Sep 13 00:04:10.663928 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO OS: linux, Arch: arm64 Sep 13 00:04:10.664624 amazon-ssm-agent[1788]: datastore file /var/lib/amazon/ssm/i-0cad2ccc02b1dfa06/longrunningplugins/datastore/store doesn't exist - no long running 
plugins to execute Sep 13 00:04:10.752428 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessageGatewayService] Starting session document processing engine... Sep 13 00:04:10.847401 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessageGatewayService] [EngineProcessor] Starting Sep 13 00:04:10.942085 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Sep 13 00:04:11.036520 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0cad2ccc02b1dfa06, requestId: dc054486-d1eb-4eef-b89c-b11ec44d4f85 Sep 13 00:04:11.131319 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [LongRunningPluginsManager] starting long running plugin manager Sep 13 00:04:11.226354 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessagingDeliveryService] Starting document processing engine... Sep 13 00:04:11.244394 tar[1807]: linux-arm64/README.md Sep 13 00:04:11.253877 systemd[1]: Finished prepare-helm.service. Sep 13 00:04:11.321482 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessagingDeliveryService] [EngineProcessor] Starting Sep 13 00:04:11.400272 locksmithd[1860]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:04:11.416796 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Sep 13 00:04:11.512546 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessagingDeliveryService] Starting message polling Sep 13 00:04:11.608169 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessagingDeliveryService] Starting send replies to MDS Sep 13 00:04:11.704128 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [instanceID=i-0cad2ccc02b1dfa06] Starting association polling Sep 13 00:04:11.800367 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Sep 13 00:04:11.896668 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessagingDeliveryService] [Association] Launching response handler Sep 13 00:04:11.993144 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Sep 13 00:04:12.089987 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Sep 13 00:04:12.186920 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Sep 13 00:04:12.284074 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessageGatewayService] listening reply. Sep 13 00:04:12.381384 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [OfflineService] Starting document processing engine... Sep 13 00:04:12.478828 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [OfflineService] [EngineProcessor] Starting Sep 13 00:04:12.576579 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [OfflineService] [EngineProcessor] Initial processing Sep 13 00:04:12.674531 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [OfflineService] Starting message polling Sep 13 00:04:12.772490 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [OfflineService] Starting send replies to MDS Sep 13 00:04:12.870778 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [HealthCheck] HealthCheck reporting agent health. 
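The SSM agent has loaded its plugins and is setting up the Session Manager control channel (the wss://ssmmessages lines that follow). Once that channel is open, an interactive shell can be reached without SSH; a sketch, assuming the caller has the session-manager-plugin installed and ssm:StartSession permission on this instance:

    aws ssm start-session --target i-0cad2ccc02b1dfa06 --region us-west-2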
Sep 13 00:04:12.969343 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Sep 13 00:04:13.068006 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Sep 13 00:04:13.166938 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [StartupProcessor] Executing startup processor tasks Sep 13 00:04:13.266063 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Sep 13 00:04:13.365240 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Sep 13 00:04:13.464694 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8 Sep 13 00:04:13.564426 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0cad2ccc02b1dfa06?role=subscribe&stream=input Sep 13 00:04:13.643420 systemd[1]: Started kubelet.service. Sep 13 00:04:13.664193 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0cad2ccc02b1dfa06?role=subscribe&stream=input Sep 13 00:04:13.764316 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessageGatewayService] Starting receiving message from control channel Sep 13 00:04:13.864666 amazon-ssm-agent[1788]: 2025-09-13 00:04:10 INFO [MessageGatewayService] [EngineProcessor] Initial processing Sep 13 00:04:14.282850 sshd_keygen[1833]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:04:14.326722 systemd[1]: Finished sshd-keygen.service. Sep 13 00:04:14.332136 systemd[1]: Starting issuegen.service... Sep 13 00:04:14.343989 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:04:14.344388 systemd[1]: Finished issuegen.service. Sep 13 00:04:14.349728 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:04:14.369362 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:04:14.375478 systemd[1]: Started getty@tty1.service. Sep 13 00:04:14.382414 systemd[1]: Started serial-getty@ttyS0.service. Sep 13 00:04:14.385072 systemd[1]: Reached target getty.target. Sep 13 00:04:14.387214 systemd[1]: Reached target multi-user.target. Sep 13 00:04:14.393336 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 00:04:14.418238 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:04:14.418782 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:04:14.421625 systemd[1]: Startup finished in 1.227s (kernel) + 9.349s (initrd) + 14.468s (userspace) = 25.045s. Sep 13 00:04:15.111685 kubelet[2000]: E0913 00:04:15.111577 2000 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:04:15.115523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:04:15.115890 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
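The kubelet exit above is the classic pre-bootstrap failure: the kubelet is pointed at /var/lib/kubelet/config.yaml, but nothing has written that file yet (on a kubeadm-style bootstrap, kubeadm init or join creates it). Until then systemd keeps restarting the unit into the same error, which recurs below. A minimal sketch of the file, assuming the systemd cgroup driver implied by the SystemdCgroup:true runc option in the containerd dump above:

    # sketch: smallest config that lets the kubelet start
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF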
Sep 13 00:04:15.116370 systemd[1]: kubelet.service: Consumed 1.634s CPU time. Sep 13 00:04:17.893751 systemd[1]: Created slice system-sshd.slice. Sep 13 00:04:17.896187 systemd[1]: Started sshd@0-172.31.31.19:22-139.178.89.65:35050.service. Sep 13 00:04:18.224711 sshd[2021]: Accepted publickey for core from 139.178.89.65 port 35050 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:04:18.232100 sshd[2021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:04:18.255094 systemd[1]: Created slice user-500.slice. Sep 13 00:04:18.257629 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 00:04:18.265971 systemd-logind[1800]: New session 1 of user core. Sep 13 00:04:18.280040 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:04:18.283575 systemd[1]: Starting user@500.service... Sep 13 00:04:18.291855 (systemd)[2024]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:04:18.495713 systemd[2024]: Queued start job for default target default.target. Sep 13 00:04:18.496925 systemd[2024]: Reached target paths.target. Sep 13 00:04:18.496984 systemd[2024]: Reached target sockets.target. Sep 13 00:04:18.497018 systemd[2024]: Reached target timers.target. Sep 13 00:04:18.497048 systemd[2024]: Reached target basic.target. Sep 13 00:04:18.497228 systemd[1]: Started user@500.service. Sep 13 00:04:18.499321 systemd[1]: Started session-1.scope. Sep 13 00:04:18.500913 systemd[2024]: Reached target default.target. Sep 13 00:04:18.501399 systemd[2024]: Startup finished in 197ms. Sep 13 00:04:18.659062 systemd[1]: Started sshd@1-172.31.31.19:22-139.178.89.65:35052.service. Sep 13 00:04:18.832293 sshd[2033]: Accepted publickey for core from 139.178.89.65 port 35052 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:04:18.835394 sshd[2033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:04:18.844703 systemd[1]: Started session-2.scope. Sep 13 00:04:18.845668 systemd-logind[1800]: New session 2 of user core. Sep 13 00:04:18.976906 sshd[2033]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:18.982587 systemd[1]: sshd@1-172.31.31.19:22-139.178.89.65:35052.service: Deactivated successfully. Sep 13 00:04:18.983812 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:04:18.984131 systemd-logind[1800]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:04:18.985972 systemd-logind[1800]: Removed session 2. Sep 13 00:04:19.007180 systemd[1]: Started sshd@2-172.31.31.19:22-139.178.89.65:35066.service. Sep 13 00:04:19.179269 sshd[2039]: Accepted publickey for core from 139.178.89.65 port 35066 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:04:19.182261 sshd[2039]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:04:19.190717 systemd[1]: Started session-3.scope. Sep 13 00:04:19.191667 systemd-logind[1800]: New session 3 of user core. Sep 13 00:04:19.314030 sshd[2039]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:19.318648 systemd[1]: sshd@2-172.31.31.19:22-139.178.89.65:35066.service: Deactivated successfully. Sep 13 00:04:19.320012 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:04:19.321192 systemd-logind[1800]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:04:19.323220 systemd-logind[1800]: Removed session 3. Sep 13 00:04:19.342209 systemd[1]: Started sshd@3-172.31.31.19:22-139.178.89.65:35070.service. 
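Each accepted public key above produces one logind session (session-N.scope) inside user-500.slice, plus a per-user manager (user@500.service) that persists across sessions; the rapid open/close of sessions 2 and 3 is consistent with automated connection checks. Session state can be inspected with:

    loginctl list-sessions
    loginctl session-status 3
    systemctl status user@500.service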
Sep 13 00:04:19.516761 sshd[2045]: Accepted publickey for core from 139.178.89.65 port 35070 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:04:19.519614 sshd[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:04:19.529778 systemd[1]: Started session-4.scope. Sep 13 00:04:19.530873 systemd-logind[1800]: New session 4 of user core. Sep 13 00:04:19.661769 sshd[2045]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:19.666271 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:04:19.667535 systemd-logind[1800]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:04:19.667959 systemd[1]: sshd@3-172.31.31.19:22-139.178.89.65:35070.service: Deactivated successfully. Sep 13 00:04:19.670343 systemd-logind[1800]: Removed session 4. Sep 13 00:04:19.688917 systemd[1]: Started sshd@4-172.31.31.19:22-139.178.89.65:35076.service. Sep 13 00:04:19.861992 sshd[2051]: Accepted publickey for core from 139.178.89.65 port 35076 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:04:19.864676 sshd[2051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:04:19.873430 systemd-logind[1800]: New session 5 of user core. Sep 13 00:04:19.874393 systemd[1]: Started session-5.scope. Sep 13 00:04:20.050028 sudo[2054]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:04:20.050601 sudo[2054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:04:20.140937 systemd[1]: Starting docker.service... Sep 13 00:04:20.270475 env[2064]: time="2025-09-13T00:04:20.270410205Z" level=info msg="Starting up" Sep 13 00:04:20.281026 env[2064]: time="2025-09-13T00:04:20.280980506Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:04:20.281203 env[2064]: time="2025-09-13T00:04:20.281173580Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:04:20.281335 env[2064]: time="2025-09-13T00:04:20.281301876Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:04:20.281444 env[2064]: time="2025-09-13T00:04:20.281416736Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:04:20.285423 env[2064]: time="2025-09-13T00:04:20.285350548Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:04:20.285423 env[2064]: time="2025-09-13T00:04:20.285399008Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:04:20.285635 env[2064]: time="2025-09-13T00:04:20.285438356Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:04:20.285635 env[2064]: time="2025-09-13T00:04:20.285463066Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:04:20.295762 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport315162989-merged.mount: Deactivated successfully. Sep 13 00:04:20.345486 env[2064]: time="2025-09-13T00:04:20.344894122Z" level=info msg="Loading containers: start." Sep 13 00:04:20.660893 kernel: Initializing XFRM netlink socket Sep 13 00:04:20.733773 env[2064]: time="2025-09-13T00:04:20.733725343Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Sep 13 00:04:20.737321 (udev-worker)[2074]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:04:20.886960 systemd-networkd[1527]: docker0: Link UP Sep 13 00:04:20.917658 env[2064]: time="2025-09-13T00:04:20.917521639Z" level=info msg="Loading containers: done." Sep 13 00:04:20.952662 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2510700744-merged.mount: Deactivated successfully. Sep 13 00:04:20.965224 env[2064]: time="2025-09-13T00:04:20.965103122Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:04:20.965526 env[2064]: time="2025-09-13T00:04:20.965469723Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 00:04:20.965726 env[2064]: time="2025-09-13T00:04:20.965687760Z" level=info msg="Daemon has completed initialization" Sep 13 00:04:21.003417 systemd[1]: Started docker.service. Sep 13 00:04:21.012420 env[2064]: time="2025-09-13T00:04:21.012330104Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:04:22.596749 env[1810]: time="2025-09-13T00:04:22.596681443Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 13 00:04:23.253152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount783852434.mount: Deactivated successfully. Sep 13 00:04:25.367287 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:04:25.367600 systemd[1]: Stopped kubelet.service. Sep 13 00:04:25.367674 systemd[1]: kubelet.service: Consumed 1.634s CPU time. Sep 13 00:04:25.370355 systemd[1]: Starting kubelet.service... Sep 13 00:04:25.601848 env[1810]: time="2025-09-13T00:04:25.599808148Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:25.605178 env[1810]: time="2025-09-13T00:04:25.605117060Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:25.619401 env[1810]: time="2025-09-13T00:04:25.618685842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:25.624242 env[1810]: time="2025-09-13T00:04:25.624156932Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:25.625414 env[1810]: time="2025-09-13T00:04:25.625319737Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 13 00:04:25.629501 env[1810]: time="2025-09-13T00:04:25.629421665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 13 00:04:25.743172 systemd[1]: Started kubelet.service. 
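Per the --bip hint a few lines up, the docker0 subnet can be changed before the daemon creates the bridge; the usual way is /etc/docker/daemon.json. A sketch, assuming the default 172.17.0.0/16 collides with something on the local network:

    cat <<'EOF' >/etc/docker/daemon.json
    {
      "bip": "172.18.0.1/24"
    }
    EOF
    systemctl restart docker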
Sep 13 00:04:25.826243 kubelet[2193]: E0913 00:04:25.826178 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:04:25.834074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:04:25.834401 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:04:27.951785 env[1810]: time="2025-09-13T00:04:27.951699167Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:27.956925 env[1810]: time="2025-09-13T00:04:27.955471382Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:27.960127 env[1810]: time="2025-09-13T00:04:27.960049303Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:27.964081 env[1810]: time="2025-09-13T00:04:27.964008693Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:27.966518 env[1810]: time="2025-09-13T00:04:27.966444337Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 13 00:04:27.967386 env[1810]: time="2025-09-13T00:04:27.967329898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 13 00:04:29.639470 env[1810]: time="2025-09-13T00:04:29.639396893Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:29.642608 env[1810]: time="2025-09-13T00:04:29.642524287Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:29.648947 env[1810]: time="2025-09-13T00:04:29.647079301Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:29.652116 env[1810]: time="2025-09-13T00:04:29.652038740Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:29.654084 env[1810]: time="2025-09-13T00:04:29.653997908Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 13 00:04:29.654801 env[1810]: time="2025-09-13T00:04:29.654758323Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 13 
00:04:31.040475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount453017914.mount: Deactivated successfully. Sep 13 00:04:31.995216 env[1810]: time="2025-09-13T00:04:31.995150028Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:31.998623 env[1810]: time="2025-09-13T00:04:31.998543606Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:32.001735 env[1810]: time="2025-09-13T00:04:32.001666430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:32.004407 env[1810]: time="2025-09-13T00:04:32.004335153Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:32.005955 env[1810]: time="2025-09-13T00:04:32.005881798Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 13 00:04:32.007047 env[1810]: time="2025-09-13T00:04:32.006978681Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 13 00:04:32.478016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896965880.mount: Deactivated successfully. Sep 13 00:04:34.189530 env[1810]: time="2025-09-13T00:04:34.189467509Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:34.192290 env[1810]: time="2025-09-13T00:04:34.192239680Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:34.195749 env[1810]: time="2025-09-13T00:04:34.195662924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:34.200903 env[1810]: time="2025-09-13T00:04:34.200851814Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:34.202999 env[1810]: time="2025-09-13T00:04:34.202942655Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 13 00:04:34.204732 env[1810]: time="2025-09-13T00:04:34.204670073Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:04:34.693029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177392338.mount: Deactivated successfully. 
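The PullImage/ImageCreate pairs above are containerd's CRI plugin fetching the v1.33.5 control-plane images plus coredns and pause. The same pulls can be reproduced or verified by hand against the socket containerd reported serving on:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/pause:3.10
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images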
Sep 13 00:04:34.702469 env[1810]: time="2025-09-13T00:04:34.702405708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:34.705436 env[1810]: time="2025-09-13T00:04:34.705380200Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:34.708620 env[1810]: time="2025-09-13T00:04:34.708565151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:34.710952 env[1810]: time="2025-09-13T00:04:34.710897765Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:34.712393 env[1810]: time="2025-09-13T00:04:34.712343883Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 13 00:04:34.713329 env[1810]: time="2025-09-13T00:04:34.713285197Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 13 00:04:35.186486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608578041.mount: Deactivated successfully. Sep 13 00:04:36.085768 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:04:36.086144 systemd[1]: Stopped kubelet.service. Sep 13 00:04:36.088897 systemd[1]: Starting kubelet.service... Sep 13 00:04:36.843394 amazon-ssm-agent[1788]: 2025-09-13 00:04:36 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Sep 13 00:04:37.665292 systemd[1]: Started kubelet.service. Sep 13 00:04:37.754134 kubelet[2204]: E0913 00:04:37.754075 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:04:37.758800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:04:37.759154 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
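Restart counter 2 shows systemd's Restart= policy re-running the kubelet into the same missing-config error. The kubeadm convention for wiring the --config flag is a drop-in roughly like the following; this is a sketch, not the unit actually shipped on this image, and the binary path in particular is an assumption:

    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_CONFIG_ARGS
    EOF
    systemctl daemon-reload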
Sep 13 00:04:39.097362 env[1810]: time="2025-09-13T00:04:39.097273696Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:39.100537 env[1810]: time="2025-09-13T00:04:39.100471879Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:39.104371 env[1810]: time="2025-09-13T00:04:39.104315317Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:39.109945 env[1810]: time="2025-09-13T00:04:39.109881342Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 13 00:04:39.110264 env[1810]: time="2025-09-13T00:04:39.109937152Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:40.347020 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 13 00:04:45.896617 systemd[1]: Stopped kubelet.service. Sep 13 00:04:45.904629 systemd[1]: Starting kubelet.service... Sep 13 00:04:45.987734 systemd[1]: Reloading. Sep 13 00:04:46.160008 /usr/lib/systemd/system-generators/torcx-generator[2262]: time="2025-09-13T00:04:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:04:46.160075 /usr/lib/systemd/system-generators/torcx-generator[2262]: time="2025-09-13T00:04:46Z" level=info msg="torcx already run" Sep 13 00:04:46.351184 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:04:46.351233 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:04:46.392384 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:04:46.623532 systemd[1]: Stopping kubelet.service... Sep 13 00:04:46.624457 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:04:46.625372 systemd[1]: Stopped kubelet.service. Sep 13 00:04:46.629586 systemd[1]: Starting kubelet.service... Sep 13 00:04:46.954177 systemd[1]: Started kubelet.service. Sep 13 00:04:47.046098 kubelet[2321]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:47.046628 kubelet[2321]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 13 00:04:47.046628 kubelet[2321]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:47.046628 kubelet[2321]: I0913 00:04:47.046291 2321 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:04:47.508615 kubelet[2321]: I0913 00:04:47.508554 2321 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:04:47.508975 kubelet[2321]: I0913 00:04:47.508951 2321 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:04:47.509504 kubelet[2321]: I0913 00:04:47.509476 2321 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:04:47.660504 kubelet[2321]: I0913 00:04:47.660460 2321 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:04:47.662416 kubelet[2321]: E0913 00:04:47.662346 2321 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 00:04:47.679400 kubelet[2321]: E0913 00:04:47.679346 2321 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:04:47.679709 kubelet[2321]: I0913 00:04:47.679684 2321 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:04:47.685105 kubelet[2321]: I0913 00:04:47.685067 2321 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:04:47.686024 kubelet[2321]: I0913 00:04:47.685972 2321 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:04:47.686436 kubelet[2321]: I0913 00:04:47.686183 2321 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:04:47.686804 kubelet[2321]: I0913 00:04:47.686778 2321 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:04:47.686949 kubelet[2321]: I0913 00:04:47.686928 2321 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:04:47.687378 kubelet[2321]: I0913 00:04:47.687356 2321 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:47.695104 kubelet[2321]: I0913 00:04:47.695066 2321 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:04:47.695317 kubelet[2321]: I0913 00:04:47.695292 2321 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:04:47.695466 kubelet[2321]: I0913 00:04:47.695446 2321 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:04:47.708005 kubelet[2321]: I0913 00:04:47.707965 2321 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:04:47.710247 kubelet[2321]: E0913 00:04:47.710189 2321 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-19&limit=500&resourceVersion=0\": dial tcp 172.31.31.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:04:47.711003 kubelet[2321]: I0913 00:04:47.710969 2321 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:04:47.712400 kubelet[2321]: I0913 00:04:47.712365 2321 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Sep 13 00:04:47.712787 kubelet[2321]: W0913 00:04:47.712763 2321 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:04:47.720789 kubelet[2321]: E0913 00:04:47.720740 2321 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:04:47.724490 kubelet[2321]: I0913 00:04:47.724457 2321 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:04:47.724728 kubelet[2321]: I0913 00:04:47.724705 2321 server.go:1289] "Started kubelet" Sep 13 00:04:47.743994 kubelet[2321]: E0913 00:04:47.741412 2321 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.19:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-19.1864aebe7bc1f1e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-19,UID:ip-172-31-31-19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-19,},FirstTimestamp:2025-09-13 00:04:47.72466327 +0000 UTC m=+0.756718885,LastTimestamp:2025-09-13 00:04:47.72466327 +0000 UTC m=+0.756718885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-19,}" Sep 13 00:04:47.747968 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 13 00:04:47.748725 kubelet[2321]: I0913 00:04:47.748695 2321 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:04:47.750856 kubelet[2321]: I0913 00:04:47.750762 2321 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:04:47.752604 kubelet[2321]: I0913 00:04:47.752543 2321 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:04:47.754357 kubelet[2321]: I0913 00:04:47.754255 2321 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:04:47.754676 kubelet[2321]: I0913 00:04:47.754634 2321 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:04:47.755934 kubelet[2321]: I0913 00:04:47.755884 2321 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:04:47.763924 kubelet[2321]: E0913 00:04:47.762033 2321 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:04:47.763924 kubelet[2321]: I0913 00:04:47.762208 2321 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:04:47.763924 kubelet[2321]: E0913 00:04:47.762547 2321 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-19\" not found" Sep 13 00:04:47.765228 kubelet[2321]: I0913 00:04:47.765171 2321 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:04:47.765396 kubelet[2321]: I0913 00:04:47.765289 2321 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:04:47.765707 kubelet[2321]: E0913 00:04:47.765670 2321 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:04:47.766425 kubelet[2321]: I0913 00:04:47.766373 2321 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:04:47.766770 kubelet[2321]: I0913 00:04:47.766735 2321 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:04:47.767737 kubelet[2321]: E0913 00:04:47.767071 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-19?timeout=10s\": dial tcp 172.31.31.19:6443: connect: connection refused" interval="200ms" Sep 13 00:04:47.769812 kubelet[2321]: I0913 00:04:47.769776 2321 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:04:47.803995 kubelet[2321]: I0913 00:04:47.803939 2321 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:04:47.803995 kubelet[2321]: I0913 00:04:47.803976 2321 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:04:47.804221 kubelet[2321]: I0913 00:04:47.804014 2321 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:47.806661 kubelet[2321]: I0913 00:04:47.806626 2321 policy_none.go:49] "None policy: Start" Sep 13 00:04:47.806884 kubelet[2321]: I0913 00:04:47.806859 2321 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:04:47.807016 kubelet[2321]: I0913 00:04:47.806996 2321 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:04:47.816235 systemd[1]: Created slice kubepods.slice. Sep 13 00:04:47.827301 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 00:04:47.833701 systemd[1]: Created slice kubepods-besteffort.slice. 
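The deprecation warnings and the NodeConfig dump above both map onto KubeletConfiguration fields: the HardEvictionThresholds shown (memory.available 100Mi, nodefs.available 10%, nodefs.inodesFree 5%, imagefs.available 15%, imagefs.inodesFree 5%) are the stock evictionHard defaults, and the deprecated flags have config-file equivalents. A sketch of the corresponding keys; volumePluginDir mirrors the /opt/libexec path the kubelet recreated above, and the endpoint value is an assumption based on the containerd socket this boot uses:

    cat <<'EOF' >>/var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
    EOF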
Sep 13 00:04:47.844247 kubelet[2321]: E0913 00:04:47.844205 2321 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:04:47.844810 kubelet[2321]: I0913 00:04:47.844783 2321 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:04:47.845209 kubelet[2321]: I0913 00:04:47.845156 2321 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:04:47.849138 kubelet[2321]: I0913 00:04:47.848159 2321 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:04:47.849138 kubelet[2321]: I0913 00:04:47.848435 2321 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:04:47.852731 kubelet[2321]: I0913 00:04:47.852690 2321 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:04:47.852993 kubelet[2321]: I0913 00:04:47.852968 2321 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:04:47.853151 kubelet[2321]: I0913 00:04:47.853127 2321 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:04:47.853297 kubelet[2321]: I0913 00:04:47.853275 2321 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:04:47.853581 kubelet[2321]: E0913 00:04:47.853553 2321 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 13 00:04:47.853722 kubelet[2321]: E0913 00:04:47.853605 2321 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:04:47.853976 kubelet[2321]: E0913 00:04:47.853937 2321 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-19\" not found" Sep 13 00:04:47.857777 kubelet[2321]: E0913 00:04:47.857726 2321 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:04:47.950294 kubelet[2321]: I0913 00:04:47.950219 2321 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-19" Sep 13 00:04:47.951130 kubelet[2321]: E0913 00:04:47.951086 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.19:6443/api/v1/nodes\": dial tcp 172.31.31.19:6443: connect: connection refused" node="ip-172-31-31-19" Sep 13 00:04:47.968843 kubelet[2321]: I0913 00:04:47.968751 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d168e18b5f408f3ea3ed431e8335147-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-19\" (UID: \"1d168e18b5f408f3ea3ed431e8335147\") " pod="kube-system/kube-controller-manager-ip-172-31-31-19" Sep 13 00:04:47.968988 kubelet[2321]: I0913 00:04:47.968865 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d168e18b5f408f3ea3ed431e8335147-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-19\" (UID: \"1d168e18b5f408f3ea3ed431e8335147\") " 
pod="kube-system/kube-controller-manager-ip-172-31-31-19" Sep 13 00:04:47.968988 kubelet[2321]: I0913 00:04:47.968910 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d168e18b5f408f3ea3ed431e8335147-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-19\" (UID: \"1d168e18b5f408f3ea3ed431e8335147\") " pod="kube-system/kube-controller-manager-ip-172-31-31-19" Sep 13 00:04:47.968988 kubelet[2321]: I0913 00:04:47.968948 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d168e18b5f408f3ea3ed431e8335147-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-19\" (UID: \"1d168e18b5f408f3ea3ed431e8335147\") " pod="kube-system/kube-controller-manager-ip-172-31-31-19" Sep 13 00:04:47.969240 kubelet[2321]: I0913 00:04:47.968989 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf723825ed8298513bb3d5948c7377fc-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-19\" (UID: \"cf723825ed8298513bb3d5948c7377fc\") " pod="kube-system/kube-scheduler-ip-172-31-31-19" Sep 13 00:04:47.969240 kubelet[2321]: I0913 00:04:47.969025 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ba13677895b2fca592ee3aa582ef47a-ca-certs\") pod \"kube-apiserver-ip-172-31-31-19\" (UID: \"2ba13677895b2fca592ee3aa582ef47a\") " pod="kube-system/kube-apiserver-ip-172-31-31-19" Sep 13 00:04:47.969240 kubelet[2321]: I0913 00:04:47.969060 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ba13677895b2fca592ee3aa582ef47a-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-19\" (UID: \"2ba13677895b2fca592ee3aa582ef47a\") " pod="kube-system/kube-apiserver-ip-172-31-31-19" Sep 13 00:04:47.969240 kubelet[2321]: I0913 00:04:47.969095 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d168e18b5f408f3ea3ed431e8335147-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-19\" (UID: \"1d168e18b5f408f3ea3ed431e8335147\") " pod="kube-system/kube-controller-manager-ip-172-31-31-19" Sep 13 00:04:47.970791 kubelet[2321]: E0913 00:04:47.970516 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-19?timeout=10s\": dial tcp 172.31.31.19:6443: connect: connection refused" interval="400ms" Sep 13 00:04:47.976950 systemd[1]: Created slice kubepods-burstable-pod1d168e18b5f408f3ea3ed431e8335147.slice. Sep 13 00:04:47.993299 kubelet[2321]: E0913 00:04:47.993244 2321 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-19\" not found" node="ip-172-31-31-19" Sep 13 00:04:47.998072 systemd[1]: Created slice kubepods-burstable-podcf723825ed8298513bb3d5948c7377fc.slice. 
Sep 13 00:04:48.001729 kubelet[2321]: E0913 00:04:48.001506 2321 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-19\" not found" node="ip-172-31-31-19" Sep 13 00:04:48.006428 systemd[1]: Created slice kubepods-burstable-pod2ba13677895b2fca592ee3aa582ef47a.slice. Sep 13 00:04:48.010762 kubelet[2321]: E0913 00:04:48.010679 2321 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-19\" not found" node="ip-172-31-31-19" Sep 13 00:04:48.070449 kubelet[2321]: I0913 00:04:48.070297 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ba13677895b2fca592ee3aa582ef47a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-19\" (UID: \"2ba13677895b2fca592ee3aa582ef47a\") " pod="kube-system/kube-apiserver-ip-172-31-31-19" Sep 13 00:04:48.153774 kubelet[2321]: I0913 00:04:48.153736 2321 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-19" Sep 13 00:04:48.154642 kubelet[2321]: E0913 00:04:48.154600 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.19:6443/api/v1/nodes\": dial tcp 172.31.31.19:6443: connect: connection refused" node="ip-172-31-31-19" Sep 13 00:04:48.295678 env[1810]: time="2025-09-13T00:04:48.295131644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-19,Uid:1d168e18b5f408f3ea3ed431e8335147,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:48.303859 env[1810]: time="2025-09-13T00:04:48.303736791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-19,Uid:cf723825ed8298513bb3d5948c7377fc,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:48.312854 env[1810]: time="2025-09-13T00:04:48.312746641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-19,Uid:2ba13677895b2fca592ee3aa582ef47a,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:48.372547 kubelet[2321]: E0913 00:04:48.372074 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-19?timeout=10s\": dial tcp 172.31.31.19:6443: connect: connection refused" interval="800ms" Sep 13 00:04:48.550935 kubelet[2321]: E0913 00:04:48.550812 2321 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:04:48.558038 kubelet[2321]: I0913 00:04:48.557369 2321 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-19" Sep 13 00:04:48.558038 kubelet[2321]: E0913 00:04:48.557962 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.19:6443/api/v1/nodes\": dial tcp 172.31.31.19:6443: connect: connection refused" node="ip-172-31-31-19" Sep 13 00:04:48.763251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257819911.mount: Deactivated successfully. 
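All of the "dial tcp 172.31.31.19:6443 ... connection refused" errors share one cause: the kubelet's clients (reflectors, lease controller, node registration) are talking to the API server that the kubelet itself is only now launching as a static pod, and the retry interval doubling from 200ms through 400ms and 800ms to 1.6s below is plain exponential backoff. A probe from the node confirms the port state, expected to fail until the apiserver sandbox below starts serving:

    curl -k --connect-timeout 2 https://172.31.31.19:6443/healthz || echo "apiserver not up yet"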
Sep 13 00:04:48.769808 env[1810]: time="2025-09-13T00:04:48.769716694Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:48.776610 env[1810]: time="2025-09-13T00:04:48.776532155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:48.781350 env[1810]: time="2025-09-13T00:04:48.779636869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:48.785319 env[1810]: time="2025-09-13T00:04:48.784626113Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:48.785494 kubelet[2321]: E0913 00:04:48.785239 2321 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:04:48.787155 env[1810]: time="2025-09-13T00:04:48.787104017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:48.794039 env[1810]: time="2025-09-13T00:04:48.793944321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:48.797715 env[1810]: time="2025-09-13T00:04:48.797658273Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:48.804252 env[1810]: time="2025-09-13T00:04:48.804132373Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:48.811265 env[1810]: time="2025-09-13T00:04:48.811155995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:48.814957 env[1810]: time="2025-09-13T00:04:48.814899950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:48.816743 env[1810]: time="2025-09-13T00:04:48.816691640Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:48.826708 env[1810]: time="2025-09-13T00:04:48.826641829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:48.841155 
kubelet[2321]: E0913 00:04:48.841074 2321 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:04:48.880843 env[1810]: time="2025-09-13T00:04:48.880699946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:48.880843 env[1810]: time="2025-09-13T00:04:48.880775937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:48.881187 env[1810]: time="2025-09-13T00:04:48.880803960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:48.883313 env[1810]: time="2025-09-13T00:04:48.883162921Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cbf3462b3d17cd303fb151fdefce0ea6fce9f89c93c5e7cee096d77938c78fe1 pid=2366 runtime=io.containerd.runc.v2 Sep 13 00:04:48.890529 env[1810]: time="2025-09-13T00:04:48.890348714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:48.890680 env[1810]: time="2025-09-13T00:04:48.890558098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:48.890772 env[1810]: time="2025-09-13T00:04:48.890663073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:48.891240 env[1810]: time="2025-09-13T00:04:48.891158565Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b233d35245ee5711519a666daf8371eec69f3e63f6791498a64fbcaf3e1bfdf8 pid=2381 runtime=io.containerd.runc.v2 Sep 13 00:04:48.901601 env[1810]: time="2025-09-13T00:04:48.901461996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:48.901781 env[1810]: time="2025-09-13T00:04:48.901626640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:48.901781 env[1810]: time="2025-09-13T00:04:48.901722038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:48.902182 env[1810]: time="2025-09-13T00:04:48.902083801Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d90c4aa3fcbcd7935d4b3d57f9d9d2d4083d98ae5040abdcc78cd05b0fd8bcc2 pid=2394 runtime=io.containerd.runc.v2 Sep 13 00:04:48.917327 systemd[1]: Started cri-containerd-b233d35245ee5711519a666daf8371eec69f3e63f6791498a64fbcaf3e1bfdf8.scope. Sep 13 00:04:48.959195 systemd[1]: Started cri-containerd-d90c4aa3fcbcd7935d4b3d57f9d9d2d4083d98ae5040abdcc78cd05b0fd8bcc2.scope. Sep 13 00:04:48.966316 systemd[1]: Started cri-containerd-cbf3462b3d17cd303fb151fdefce0ea6fce9f89c93c5e7cee096d77938c78fe1.scope. 
Sep 13 00:04:49.095716 env[1810]: time="2025-09-13T00:04:49.094021854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-19,Uid:2ba13677895b2fca592ee3aa582ef47a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b233d35245ee5711519a666daf8371eec69f3e63f6791498a64fbcaf3e1bfdf8\"" Sep 13 00:04:49.106293 env[1810]: time="2025-09-13T00:04:49.106228272Z" level=info msg="CreateContainer within sandbox \"b233d35245ee5711519a666daf8371eec69f3e63f6791498a64fbcaf3e1bfdf8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:04:49.136769 env[1810]: time="2025-09-13T00:04:49.136693499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-19,Uid:1d168e18b5f408f3ea3ed431e8335147,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbf3462b3d17cd303fb151fdefce0ea6fce9f89c93c5e7cee096d77938c78fe1\"" Sep 13 00:04:49.141138 env[1810]: time="2025-09-13T00:04:49.141060265Z" level=info msg="CreateContainer within sandbox \"b233d35245ee5711519a666daf8371eec69f3e63f6791498a64fbcaf3e1bfdf8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"67aa6009bbd4102fe3015bd3ae69e1f03774329528c25fc98aa25fd7c468f81d\"" Sep 13 00:04:49.144110 env[1810]: time="2025-09-13T00:04:49.144009065Z" level=info msg="StartContainer for \"67aa6009bbd4102fe3015bd3ae69e1f03774329528c25fc98aa25fd7c468f81d\"" Sep 13 00:04:49.147053 env[1810]: time="2025-09-13T00:04:49.146969218Z" level=info msg="CreateContainer within sandbox \"cbf3462b3d17cd303fb151fdefce0ea6fce9f89c93c5e7cee096d77938c78fe1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:04:49.163619 env[1810]: time="2025-09-13T00:04:49.163539245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-19,Uid:cf723825ed8298513bb3d5948c7377fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"d90c4aa3fcbcd7935d4b3d57f9d9d2d4083d98ae5040abdcc78cd05b0fd8bcc2\"" Sep 13 00:04:49.173115 kubelet[2321]: E0913 00:04:49.173038 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-19?timeout=10s\": dial tcp 172.31.31.19:6443: connect: connection refused" interval="1.6s" Sep 13 00:04:49.174226 env[1810]: time="2025-09-13T00:04:49.174131709Z" level=info msg="CreateContainer within sandbox \"d90c4aa3fcbcd7935d4b3d57f9d9d2d4083d98ae5040abdcc78cd05b0fd8bcc2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:04:49.179635 env[1810]: time="2025-09-13T00:04:49.179547093Z" level=info msg="CreateContainer within sandbox \"cbf3462b3d17cd303fb151fdefce0ea6fce9f89c93c5e7cee096d77938c78fe1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5454f0b2fd98410bd4bbe99d5e3eb21ea7d2916171358b3596651b394b4fd14d\"" Sep 13 00:04:49.180515 env[1810]: time="2025-09-13T00:04:49.180464001Z" level=info msg="StartContainer for \"5454f0b2fd98410bd4bbe99d5e3eb21ea7d2916171358b3596651b394b4fd14d\"" Sep 13 00:04:49.203064 systemd[1]: Started cri-containerd-67aa6009bbd4102fe3015bd3ae69e1f03774329528c25fc98aa25fd7c468f81d.scope. 
Sep 13 00:04:49.206236 env[1810]: time="2025-09-13T00:04:49.203797759Z" level=info msg="CreateContainer within sandbox \"d90c4aa3fcbcd7935d4b3d57f9d9d2d4083d98ae5040abdcc78cd05b0fd8bcc2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a67b8a528b8e296684f394b8dab90b67dc29eaaf4457d910133fdf74a98c3327\"" Sep 13 00:04:49.210566 env[1810]: time="2025-09-13T00:04:49.210274640Z" level=info msg="StartContainer for \"a67b8a528b8e296684f394b8dab90b67dc29eaaf4457d910133fdf74a98c3327\"" Sep 13 00:04:49.241083 kubelet[2321]: E0913 00:04:49.241001 2321 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-19&limit=500&resourceVersion=0\": dial tcp 172.31.31.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:04:49.255878 systemd[1]: Started cri-containerd-5454f0b2fd98410bd4bbe99d5e3eb21ea7d2916171358b3596651b394b4fd14d.scope. Sep 13 00:04:49.281522 systemd[1]: Started cri-containerd-a67b8a528b8e296684f394b8dab90b67dc29eaaf4457d910133fdf74a98c3327.scope. Sep 13 00:04:49.353379 env[1810]: time="2025-09-13T00:04:49.353199625Z" level=info msg="StartContainer for \"67aa6009bbd4102fe3015bd3ae69e1f03774329528c25fc98aa25fd7c468f81d\" returns successfully" Sep 13 00:04:49.361475 kubelet[2321]: I0913 00:04:49.360798 2321 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-19" Sep 13 00:04:49.361664 kubelet[2321]: E0913 00:04:49.361579 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.19:6443/api/v1/nodes\": dial tcp 172.31.31.19:6443: connect: connection refused" node="ip-172-31-31-19" Sep 13 00:04:49.423544 env[1810]: time="2025-09-13T00:04:49.423477945Z" level=info msg="StartContainer for \"5454f0b2fd98410bd4bbe99d5e3eb21ea7d2916171358b3596651b394b4fd14d\" returns successfully" Sep 13 00:04:49.588308 env[1810]: time="2025-09-13T00:04:49.588248661Z" level=info msg="StartContainer for \"a67b8a528b8e296684f394b8dab90b67dc29eaaf4457d910133fdf74a98c3327\" returns successfully" Sep 13 00:04:49.867763 kubelet[2321]: E0913 00:04:49.867725 2321 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-19\" not found" node="ip-172-31-31-19" Sep 13 00:04:49.874849 kubelet[2321]: E0913 00:04:49.872198 2321 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-19\" not found" node="ip-172-31-31-19" Sep 13 00:04:49.880778 kubelet[2321]: E0913 00:04:49.880732 2321 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-19\" not found" node="ip-172-31-31-19" Sep 13 00:04:50.883068 kubelet[2321]: E0913 00:04:50.883024 2321 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-19\" not found" node="ip-172-31-31-19" Sep 13 00:04:50.884415 kubelet[2321]: E0913 00:04:50.884371 2321 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-19\" not found" node="ip-172-31-31-19" Sep 13 00:04:50.965149 kubelet[2321]: I0913 00:04:50.965068 2321 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-19" Sep 13 00:04:51.885588 kubelet[2321]: E0913 00:04:51.885552 2321 kubelet.go:3305] "No need 
to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-19\" not found" node="ip-172-31-31-19" Sep 13 00:04:51.889318 kubelet[2321]: E0913 00:04:51.889271 2321 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-19\" not found" node="ip-172-31-31-19" Sep 13 00:04:51.901954 kubelet[2321]: E0913 00:04:51.901901 2321 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-19\" not found" node="ip-172-31-31-19" Sep 13 00:04:53.153298 kubelet[2321]: E0913 00:04:53.153237 2321 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-19\" not found" node="ip-172-31-31-19" Sep 13 00:04:53.165911 kubelet[2321]: I0913 00:04:53.165852 2321 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-19" Sep 13 00:04:53.263607 kubelet[2321]: I0913 00:04:53.263551 2321 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-19" Sep 13 00:04:53.284744 kubelet[2321]: E0913 00:04:53.284691 2321 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-19\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-19" Sep 13 00:04:53.285012 kubelet[2321]: I0913 00:04:53.284977 2321 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-19" Sep 13 00:04:53.300635 kubelet[2321]: E0913 00:04:53.300529 2321 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-19\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-19" Sep 13 00:04:53.300635 kubelet[2321]: I0913 00:04:53.300628 2321 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-19" Sep 13 00:04:53.305642 kubelet[2321]: E0913 00:04:53.305578 2321 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-19\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-19" Sep 13 00:04:53.712108 kubelet[2321]: I0913 00:04:53.712040 2321 apiserver.go:52] "Watching apiserver" Sep 13 00:04:53.765408 kubelet[2321]: I0913 00:04:53.765364 2321 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:04:54.381494 update_engine[1801]: I0913 00:04:54.380903 1801 update_attempter.cc:509] Updating boot flags... Sep 13 00:04:57.324385 systemd[1]: Reloading. Sep 13 00:04:57.517629 /usr/lib/systemd/system-generators/torcx-generator[2722]: time="2025-09-13T00:04:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:04:57.520989 /usr/lib/systemd/system-generators/torcx-generator[2722]: time="2025-09-13T00:04:57Z" level=info msg="torcx already run" Sep 13 00:04:57.709195 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 13 00:04:57.709237 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:04:57.757147 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:04:58.072360 systemd[1]: Stopping kubelet.service... Sep 13 00:04:58.092611 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:04:58.093105 systemd[1]: Stopped kubelet.service. Sep 13 00:04:58.093479 systemd[1]: kubelet.service: Consumed 1.557s CPU time. Sep 13 00:04:58.099932 systemd[1]: Starting kubelet.service... Sep 13 00:04:58.462519 systemd[1]: Started kubelet.service. Sep 13 00:04:58.579949 kubelet[2778]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:58.579949 kubelet[2778]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:04:58.579949 kubelet[2778]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:58.580770 kubelet[2778]: I0913 00:04:58.580079 2778 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:04:58.613646 kubelet[2778]: I0913 00:04:58.613584 2778 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:04:58.613969 kubelet[2778]: I0913 00:04:58.613935 2778 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:04:58.614696 kubelet[2778]: I0913 00:04:58.614647 2778 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:04:58.617723 kubelet[2778]: I0913 00:04:58.617677 2778 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 00:04:58.623513 kubelet[2778]: I0913 00:04:58.623466 2778 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:04:58.633058 kubelet[2778]: E0913 00:04:58.632998 2778 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:04:58.633358 kubelet[2778]: I0913 00:04:58.633316 2778 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:04:58.640356 kubelet[2778]: I0913 00:04:58.640300 2778 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:04:58.641276 kubelet[2778]: I0913 00:04:58.641212 2778 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:04:58.641788 kubelet[2778]: I0913 00:04:58.641456 2778 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:04:58.642156 kubelet[2778]: I0913 00:04:58.642126 2778 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:04:58.642301 kubelet[2778]: I0913 00:04:58.642280 2778 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:04:58.644551 sudo[2793]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:04:58.645280 sudo[2793]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 00:04:58.647551 kubelet[2778]: I0913 00:04:58.647483 2778 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:58.648192 kubelet[2778]: I0913 00:04:58.648154 2778 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:04:58.648472 kubelet[2778]: I0913 00:04:58.648438 2778 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:04:58.648676 kubelet[2778]: I0913 00:04:58.648646 2778 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:04:58.648960 kubelet[2778]: I0913 00:04:58.648934 2778 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:04:58.656532 kubelet[2778]: I0913 00:04:58.655939 2778 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:04:58.657694 kubelet[2778]: I0913 00:04:58.657566 2778 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:04:58.662595 kubelet[2778]: I0913 00:04:58.662553 2778 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:04:58.662896 kubelet[2778]: I0913 
00:04:58.662862 2778 server.go:1289] "Started kubelet" Sep 13 00:04:58.666809 kubelet[2778]: I0913 00:04:58.666765 2778 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:04:58.704969 kubelet[2778]: I0913 00:04:58.667777 2778 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:04:58.712191 kubelet[2778]: I0913 00:04:58.712133 2778 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:04:58.718398 kubelet[2778]: I0913 00:04:58.681719 2778 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:04:58.721966 kubelet[2778]: I0913 00:04:58.667902 2778 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:04:58.723889 kubelet[2778]: I0913 00:04:58.705203 2778 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:04:58.724118 kubelet[2778]: E0913 00:04:58.705443 2778 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-19\" not found" Sep 13 00:04:58.724344 kubelet[2778]: I0913 00:04:58.705180 2778 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:04:58.724914 kubelet[2778]: I0913 00:04:58.724879 2778 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:04:58.754067 kubelet[2778]: I0913 00:04:58.731804 2778 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:04:58.754483 kubelet[2778]: I0913 00:04:58.754429 2778 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:04:58.754666 kubelet[2778]: I0913 00:04:58.754614 2778 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:04:58.815290 kubelet[2778]: E0913 00:04:58.815225 2778 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:04:58.827172 kubelet[2778]: I0913 00:04:58.827129 2778 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:04:58.855667 kubelet[2778]: I0913 00:04:58.855605 2778 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:04:58.860149 kubelet[2778]: I0913 00:04:58.860105 2778 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:04:58.860527 kubelet[2778]: I0913 00:04:58.860501 2778 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:04:58.860736 kubelet[2778]: I0913 00:04:58.860710 2778 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:04:58.860937 kubelet[2778]: I0913 00:04:58.860915 2778 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:04:58.861202 kubelet[2778]: E0913 00:04:58.861156 2778 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:04:58.961515 kubelet[2778]: E0913 00:04:58.961468 2778 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:04:58.978205 kubelet[2778]: I0913 00:04:58.978095 2778 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:04:58.979962 kubelet[2778]: I0913 00:04:58.979886 2778 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:04:58.980298 kubelet[2778]: I0913 00:04:58.980270 2778 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:58.980885 kubelet[2778]: I0913 00:04:58.980788 2778 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:04:58.984269 kubelet[2778]: I0913 00:04:58.984137 2778 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:04:58.984785 kubelet[2778]: I0913 00:04:58.984754 2778 policy_none.go:49] "None policy: Start" Sep 13 00:04:58.985021 kubelet[2778]: I0913 00:04:58.984991 2778 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:04:58.985162 kubelet[2778]: I0913 00:04:58.985140 2778 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:04:58.987279 kubelet[2778]: I0913 00:04:58.987241 2778 state_mem.go:75] "Updated machine memory state" Sep 13 00:04:59.006194 kubelet[2778]: E0913 00:04:59.006155 2778 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:04:59.006704 kubelet[2778]: I0913 00:04:59.006675 2778 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:04:59.006988 kubelet[2778]: I0913 00:04:59.006911 2778 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:04:59.008466 kubelet[2778]: I0913 00:04:59.008433 2778 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:04:59.018558 kubelet[2778]: E0913 00:04:59.018507 2778 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:04:59.135854 kubelet[2778]: I0913 00:04:59.135777 2778 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-19" Sep 13 00:04:59.157788 kubelet[2778]: I0913 00:04:59.157734 2778 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-31-19" Sep 13 00:04:59.158207 kubelet[2778]: I0913 00:04:59.158166 2778 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-19" Sep 13 00:04:59.163442 kubelet[2778]: I0913 00:04:59.163383 2778 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-19" Sep 13 00:04:59.164583 kubelet[2778]: I0913 00:04:59.164542 2778 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-19" Sep 13 00:04:59.165544 kubelet[2778]: I0913 00:04:59.165500 2778 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-19" Sep 13 00:04:59.261528 kubelet[2778]: I0913 00:04:59.261392 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d168e18b5f408f3ea3ed431e8335147-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-19\" (UID: \"1d168e18b5f408f3ea3ed431e8335147\") " pod="kube-system/kube-controller-manager-ip-172-31-31-19" Sep 13 00:04:59.261791 kubelet[2778]: I0913 00:04:59.261750 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d168e18b5f408f3ea3ed431e8335147-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-19\" (UID: \"1d168e18b5f408f3ea3ed431e8335147\") " pod="kube-system/kube-controller-manager-ip-172-31-31-19" Sep 13 00:04:59.262154 kubelet[2778]: I0913 00:04:59.262085 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf723825ed8298513bb3d5948c7377fc-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-19\" (UID: \"cf723825ed8298513bb3d5948c7377fc\") " pod="kube-system/kube-scheduler-ip-172-31-31-19" Sep 13 00:04:59.262432 kubelet[2778]: I0913 00:04:59.262340 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d168e18b5f408f3ea3ed431e8335147-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-19\" (UID: \"1d168e18b5f408f3ea3ed431e8335147\") " pod="kube-system/kube-controller-manager-ip-172-31-31-19" Sep 13 00:04:59.262758 kubelet[2778]: I0913 00:04:59.262646 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ba13677895b2fca592ee3aa582ef47a-ca-certs\") pod \"kube-apiserver-ip-172-31-31-19\" (UID: \"2ba13677895b2fca592ee3aa582ef47a\") " pod="kube-system/kube-apiserver-ip-172-31-31-19" Sep 13 00:04:59.263082 kubelet[2778]: I0913 00:04:59.262975 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ba13677895b2fca592ee3aa582ef47a-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-19\" (UID: \"2ba13677895b2fca592ee3aa582ef47a\") " pod="kube-system/kube-apiserver-ip-172-31-31-19" Sep 13 00:04:59.263389 kubelet[2778]: I0913 00:04:59.263276 2778 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ba13677895b2fca592ee3aa582ef47a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-19\" (UID: \"2ba13677895b2fca592ee3aa582ef47a\") " pod="kube-system/kube-apiserver-ip-172-31-31-19" Sep 13 00:04:59.263750 kubelet[2778]: I0913 00:04:59.263692 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d168e18b5f408f3ea3ed431e8335147-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-19\" (UID: \"1d168e18b5f408f3ea3ed431e8335147\") " pod="kube-system/kube-controller-manager-ip-172-31-31-19" Sep 13 00:04:59.264039 kubelet[2778]: I0913 00:04:59.263990 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d168e18b5f408f3ea3ed431e8335147-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-19\" (UID: \"1d168e18b5f408f3ea3ed431e8335147\") " pod="kube-system/kube-controller-manager-ip-172-31-31-19" Sep 13 00:04:59.680958 kubelet[2778]: I0913 00:04:59.680806 2778 apiserver.go:52] "Watching apiserver" Sep 13 00:04:59.724583 kubelet[2778]: I0913 00:04:59.724540 2778 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:04:59.794494 sudo[2793]: pam_unix(sudo:session): session closed for user root Sep 13 00:05:00.015607 kubelet[2778]: I0913 00:05:00.015392 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-19" podStartSLOduration=1.015374208 podStartE2EDuration="1.015374208s" podCreationTimestamp="2025-09-13 00:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:00.015308817 +0000 UTC m=+1.539667801" watchObservedRunningTime="2025-09-13 00:05:00.015374208 +0000 UTC m=+1.539733168" Sep 13 00:05:00.122551 kubelet[2778]: I0913 00:05:00.122434 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-19" podStartSLOduration=1.122410727 podStartE2EDuration="1.122410727s" podCreationTimestamp="2025-09-13 00:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:00.048140965 +0000 UTC m=+1.572499937" watchObservedRunningTime="2025-09-13 00:05:00.122410727 +0000 UTC m=+1.646769723" Sep 13 00:05:00.179556 kubelet[2778]: I0913 00:05:00.179452 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-19" podStartSLOduration=1.179427557 podStartE2EDuration="1.179427557s" podCreationTimestamp="2025-09-13 00:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:00.122949287 +0000 UTC m=+1.647308283" watchObservedRunningTime="2025-09-13 00:05:00.179427557 +0000 UTC m=+1.703786517" Sep 13 00:05:01.997167 kubelet[2778]: I0913 00:05:01.997076 2778 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:05:01.997812 env[1810]: time="2025-09-13T00:05:01.997685102Z" level=info msg="No cni config template is specified, wait for other system components to drop the 
config." Sep 13 00:05:02.002317 kubelet[2778]: I0913 00:05:01.999884 2778 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:05:02.655324 systemd[1]: Created slice kubepods-besteffort-pod04afe3a0_b78f_4fa9_b7ac_7e21bd83705e.slice. Sep 13 00:05:02.686343 kubelet[2778]: I0913 00:05:02.686290 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/04afe3a0-b78f-4fa9-b7ac-7e21bd83705e-kube-proxy\") pod \"kube-proxy-scmh2\" (UID: \"04afe3a0-b78f-4fa9-b7ac-7e21bd83705e\") " pod="kube-system/kube-proxy-scmh2" Sep 13 00:05:02.686605 kubelet[2778]: I0913 00:05:02.686569 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04afe3a0-b78f-4fa9-b7ac-7e21bd83705e-xtables-lock\") pod \"kube-proxy-scmh2\" (UID: \"04afe3a0-b78f-4fa9-b7ac-7e21bd83705e\") " pod="kube-system/kube-proxy-scmh2" Sep 13 00:05:02.686852 kubelet[2778]: I0913 00:05:02.686781 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04afe3a0-b78f-4fa9-b7ac-7e21bd83705e-lib-modules\") pod \"kube-proxy-scmh2\" (UID: \"04afe3a0-b78f-4fa9-b7ac-7e21bd83705e\") " pod="kube-system/kube-proxy-scmh2" Sep 13 00:05:02.687098 kubelet[2778]: I0913 00:05:02.687053 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4c9s\" (UniqueName: \"kubernetes.io/projected/04afe3a0-b78f-4fa9-b7ac-7e21bd83705e-kube-api-access-v4c9s\") pod \"kube-proxy-scmh2\" (UID: \"04afe3a0-b78f-4fa9-b7ac-7e21bd83705e\") " pod="kube-system/kube-proxy-scmh2" Sep 13 00:05:02.735878 systemd[1]: Created slice kubepods-burstable-pod719ebe1a_1227_4299_ad6f_a89e9713df0c.slice. 
Sep 13 00:05:02.787878 kubelet[2778]: I0913 00:05:02.787802 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-xtables-lock\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.788179 kubelet[2778]: I0913 00:05:02.788141 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/719ebe1a-1227-4299-ad6f-a89e9713df0c-clustermesh-secrets\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.788385 kubelet[2778]: I0913 00:05:02.788344 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/719ebe1a-1227-4299-ad6f-a89e9713df0c-cilium-config-path\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.788586 kubelet[2778]: I0913 00:05:02.788555 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/719ebe1a-1227-4299-ad6f-a89e9713df0c-hubble-tls\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.788771 kubelet[2778]: I0913 00:05:02.788735 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk7rc\" (UniqueName: \"kubernetes.io/projected/719ebe1a-1227-4299-ad6f-a89e9713df0c-kube-api-access-hk7rc\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.788968 kubelet[2778]: I0913 00:05:02.788940 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-cilium-run\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.789142 kubelet[2778]: I0913 00:05:02.789113 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-bpf-maps\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.789317 kubelet[2778]: I0913 00:05:02.789288 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-cilium-cgroup\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.791495 kubelet[2778]: I0913 00:05:02.791435 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-cni-path\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.791877 kubelet[2778]: I0913 00:05:02.791808 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-etc-cni-netd\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.792030 kubelet[2778]: I0913 00:05:02.792000 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-lib-modules\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.792285 kubelet[2778]: I0913 00:05:02.792217 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-hostproc\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.792544 kubelet[2778]: I0913 00:05:02.792492 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-host-proc-sys-net\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.792824 kubelet[2778]: I0913 00:05:02.792769 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-host-proc-sys-kernel\") pod \"cilium-cv9tg\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " pod="kube-system/cilium-cv9tg" Sep 13 00:05:02.844936 kubelet[2778]: I0913 00:05:02.844886 2778 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:05:02.970623 env[1810]: time="2025-09-13T00:05:02.970437255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-scmh2,Uid:04afe3a0-b78f-4fa9-b7ac-7e21bd83705e,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:03.037413 env[1810]: time="2025-09-13T00:05:03.034044222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:03.037413 env[1810]: time="2025-09-13T00:05:03.034175639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:03.037413 env[1810]: time="2025-09-13T00:05:03.034203696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:03.037413 env[1810]: time="2025-09-13T00:05:03.034532508Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/14cd51577adab5bc82a24fe703689ed0e44b66cf126defe96dd7a739370413ed pid=2842 runtime=io.containerd.runc.v2 Sep 13 00:05:03.135140 systemd[1]: Started cri-containerd-14cd51577adab5bc82a24fe703689ed0e44b66cf126defe96dd7a739370413ed.scope. Sep 13 00:05:03.353955 systemd[1]: Created slice kubepods-besteffort-pod91309607_f47f_4763_a1ba_2b17fd81c84c.slice. 
Sep 13 00:05:03.370508 env[1810]: time="2025-09-13T00:05:03.368174469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cv9tg,Uid:719ebe1a-1227-4299-ad6f-a89e9713df0c,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:03.405862 kubelet[2778]: I0913 00:05:03.405547 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91309607-f47f-4763-a1ba-2b17fd81c84c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-gqkxq\" (UID: \"91309607-f47f-4763-a1ba-2b17fd81c84c\") " pod="kube-system/cilium-operator-6c4d7847fc-gqkxq" Sep 13 00:05:03.405862 kubelet[2778]: I0913 00:05:03.405682 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvclj\" (UniqueName: \"kubernetes.io/projected/91309607-f47f-4763-a1ba-2b17fd81c84c-kube-api-access-hvclj\") pod \"cilium-operator-6c4d7847fc-gqkxq\" (UID: \"91309607-f47f-4763-a1ba-2b17fd81c84c\") " pod="kube-system/cilium-operator-6c4d7847fc-gqkxq" Sep 13 00:05:03.406934 env[1810]: time="2025-09-13T00:05:03.404391156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:03.406934 env[1810]: time="2025-09-13T00:05:03.404542817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:03.406934 env[1810]: time="2025-09-13T00:05:03.404571930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:03.406934 env[1810]: time="2025-09-13T00:05:03.404981265Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0 pid=2882 runtime=io.containerd.runc.v2 Sep 13 00:05:03.512920 systemd[1]: Started cri-containerd-5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0.scope. Sep 13 00:05:03.668910 env[1810]: time="2025-09-13T00:05:03.668835542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gqkxq,Uid:91309607-f47f-4763-a1ba-2b17fd81c84c,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:03.674136 env[1810]: time="2025-09-13T00:05:03.674070975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-scmh2,Uid:04afe3a0-b78f-4fa9-b7ac-7e21bd83705e,Namespace:kube-system,Attempt:0,} returns sandbox id \"14cd51577adab5bc82a24fe703689ed0e44b66cf126defe96dd7a739370413ed\"" Sep 13 00:05:03.691077 env[1810]: time="2025-09-13T00:05:03.691005435Z" level=info msg="CreateContainer within sandbox \"14cd51577adab5bc82a24fe703689ed0e44b66cf126defe96dd7a739370413ed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:05:03.724154 env[1810]: time="2025-09-13T00:05:03.723997219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:03.725983 env[1810]: time="2025-09-13T00:05:03.724547679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:03.725983 env[1810]: time="2025-09-13T00:05:03.724659595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:03.725983 env[1810]: time="2025-09-13T00:05:03.725044126Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174 pid=2922 runtime=io.containerd.runc.v2 Sep 13 00:05:03.752064 env[1810]: time="2025-09-13T00:05:03.751998439Z" level=info msg="CreateContainer within sandbox \"14cd51577adab5bc82a24fe703689ed0e44b66cf126defe96dd7a739370413ed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"403c924c550200e6e0b8ff5a87305278f6217f6dec8939c86be5fc43f5af7e0b\"" Sep 13 00:05:03.754872 env[1810]: time="2025-09-13T00:05:03.753621151Z" level=info msg="StartContainer for \"403c924c550200e6e0b8ff5a87305278f6217f6dec8939c86be5fc43f5af7e0b\"" Sep 13 00:05:03.774347 systemd[1]: Started cri-containerd-1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174.scope. Sep 13 00:05:03.815319 systemd[1]: Started cri-containerd-403c924c550200e6e0b8ff5a87305278f6217f6dec8939c86be5fc43f5af7e0b.scope. Sep 13 00:05:03.904092 env[1810]: time="2025-09-13T00:05:03.904013322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cv9tg,Uid:719ebe1a-1227-4299-ad6f-a89e9713df0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0\"" Sep 13 00:05:03.908978 env[1810]: time="2025-09-13T00:05:03.908900766Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:05:04.011181 env[1810]: time="2025-09-13T00:05:04.004725213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gqkxq,Uid:91309607-f47f-4763-a1ba-2b17fd81c84c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174\"" Sep 13 00:05:04.036408 env[1810]: time="2025-09-13T00:05:04.033036819Z" level=info msg="StartContainer for \"403c924c550200e6e0b8ff5a87305278f6217f6dec8939c86be5fc43f5af7e0b\" returns successfully" Sep 13 00:05:05.032959 kubelet[2778]: I0913 00:05:05.032871 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-scmh2" podStartSLOduration=3.032846892 podStartE2EDuration="3.032846892s" podCreationTimestamp="2025-09-13 00:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:05.004562216 +0000 UTC m=+6.528921200" watchObservedRunningTime="2025-09-13 00:05:05.032846892 +0000 UTC m=+6.557205888" Sep 13 00:05:06.891122 amazon-ssm-agent[1788]: 2025-09-13 00:05:06 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Sep 13 00:05:11.569217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1047382013.mount: Deactivated successfully. 
Sep 13 00:05:15.840582 env[1810]: time="2025-09-13T00:05:15.840510954Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:15.844628 env[1810]: time="2025-09-13T00:05:15.844565631Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:15.848268 env[1810]: time="2025-09-13T00:05:15.848201550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:15.850437 env[1810]: time="2025-09-13T00:05:15.850359101Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 13 00:05:15.858336 env[1810]: time="2025-09-13T00:05:15.858259513Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:05:15.867291 env[1810]: time="2025-09-13T00:05:15.867216043Z" level=info msg="CreateContainer within sandbox \"5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:05:15.890236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184973733.mount: Deactivated successfully. Sep 13 00:05:15.906141 env[1810]: time="2025-09-13T00:05:15.906062252Z" level=info msg="CreateContainer within sandbox \"5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed\"" Sep 13 00:05:15.907603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738080436.mount: Deactivated successfully. Sep 13 00:05:15.912376 env[1810]: time="2025-09-13T00:05:15.912258758Z" level=info msg="StartContainer for \"dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed\"" Sep 13 00:05:16.000358 systemd[1]: Started cri-containerd-dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed.scope. Sep 13 00:05:16.046564 systemd[1]: cri-containerd-dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed.scope: Deactivated successfully. Sep 13 00:05:16.881242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed-rootfs.mount: Deactivated successfully. 
Sep 13 00:05:17.057052 env[1810]: time="2025-09-13T00:05:17.056952827Z" level=info msg="shim disconnected" id=dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed Sep 13 00:05:17.058153 env[1810]: time="2025-09-13T00:05:17.057804068Z" level=warning msg="cleaning up after shim disconnected" id=dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed namespace=k8s.io Sep 13 00:05:17.058153 env[1810]: time="2025-09-13T00:05:17.057900185Z" level=info msg="cleaning up dead shim" Sep 13 00:05:17.078100 env[1810]: time="2025-09-13T00:05:17.078021151Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:05:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3155 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T00:05:17Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 13 00:05:17.078999 env[1810]: time="2025-09-13T00:05:17.078797480Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Sep 13 00:05:17.080434 env[1810]: time="2025-09-13T00:05:17.079269929Z" level=error msg="Failed to pipe stdout of container \"dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed\"" error="reading from a closed fifo" Sep 13 00:05:17.080966 env[1810]: time="2025-09-13T00:05:17.079963571Z" level=error msg="Failed to pipe stderr of container \"dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed\"" error="reading from a closed fifo" Sep 13 00:05:17.082037 env[1810]: time="2025-09-13T00:05:17.081945414Z" level=error msg="StartContainer for \"dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 13 00:05:17.082750 kubelet[2778]: E0913 00:05:17.082547 2778 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed" Sep 13 00:05:17.083770 kubelet[2778]: E0913 00:05:17.083309 2778 kuberuntime_manager.go:1358] "Unhandled Error" err=< Sep 13 00:05:17.083770 kubelet[2778]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 13 00:05:17.083770 kubelet[2778]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 13 00:05:17.083770 kubelet[2778]: rm /hostbin/cilium-mount Sep 13 00:05:17.084143 kubelet[2778]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hk7rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-cv9tg_kube-system(719ebe1a-1227-4299-ad6f-a89e9713df0c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 13 00:05:17.084143 kubelet[2778]: > logger="UnhandledError" Sep 13 00:05:17.087361 kubelet[2778]: E0913 00:05:17.084960 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cv9tg" podUID="719ebe1a-1227-4299-ad6f-a89e9713df0c" Sep 13 00:05:17.915099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2611388910.mount: Deactivated successfully. Sep 13 00:05:18.054782 env[1810]: time="2025-09-13T00:05:18.054710926Z" level=info msg="StopPodSandbox for \"5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0\"" Sep 13 00:05:18.055166 env[1810]: time="2025-09-13T00:05:18.055107179Z" level=info msg="Container to stop \"dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:05:18.059045 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0-shm.mount: Deactivated successfully. Sep 13 00:05:18.091176 systemd[1]: cri-containerd-5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0.scope: Deactivated successfully. Sep 13 00:05:18.157261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0-rootfs.mount: Deactivated successfully. 
Sep 13 00:05:18.182351 env[1810]: time="2025-09-13T00:05:18.181469631Z" level=info msg="shim disconnected" id=5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0 Sep 13 00:05:18.183570 env[1810]: time="2025-09-13T00:05:18.183513106Z" level=warning msg="cleaning up after shim disconnected" id=5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0 namespace=k8s.io Sep 13 00:05:18.183788 env[1810]: time="2025-09-13T00:05:18.183748472Z" level=info msg="cleaning up dead shim" Sep 13 00:05:18.218656 env[1810]: time="2025-09-13T00:05:18.218586157Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:05:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3187 runtime=io.containerd.runc.v2\n" Sep 13 00:05:18.219577 env[1810]: time="2025-09-13T00:05:18.219503701Z" level=info msg="TearDown network for sandbox \"5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0\" successfully" Sep 13 00:05:18.219869 env[1810]: time="2025-09-13T00:05:18.219778694Z" level=info msg="StopPodSandbox for \"5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0\" returns successfully" Sep 13 00:05:18.330431 kubelet[2778]: I0913 00:05:18.330381 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-cilium-cgroup\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.330436 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-xtables-lock\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.330475 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-bpf-maps\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.330523 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/719ebe1a-1227-4299-ad6f-a89e9713df0c-clustermesh-secrets\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.330567 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/719ebe1a-1227-4299-ad6f-a89e9713df0c-hubble-tls\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.330604 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-etc-cni-netd\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.330636 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-host-proc-sys-net\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: 
\"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.330676 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk7rc\" (UniqueName: \"kubernetes.io/projected/719ebe1a-1227-4299-ad6f-a89e9713df0c-kube-api-access-hk7rc\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.330712 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-hostproc\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.330745 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-cni-path\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.330777 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-lib-modules\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.330809 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-cilium-run\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.330900 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/719ebe1a-1227-4299-ad6f-a89e9713df0c-cilium-config-path\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.330937 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-host-proc-sys-kernel\") pod \"719ebe1a-1227-4299-ad6f-a89e9713df0c\" (UID: \"719ebe1a-1227-4299-ad6f-a89e9713df0c\") " Sep 13 00:05:18.331069 kubelet[2778]: I0913 00:05:18.331070 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:05:18.332081 kubelet[2778]: I0913 00:05:18.331144 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:05:18.332081 kubelet[2778]: I0913 00:05:18.331184 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:05:18.332081 kubelet[2778]: I0913 00:05:18.331222 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:05:18.332081 kubelet[2778]: I0913 00:05:18.331714 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-hostproc" (OuterVolumeSpecName: "hostproc") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:05:18.337848 kubelet[2778]: I0913 00:05:18.335931 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-cni-path" (OuterVolumeSpecName: "cni-path") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:05:18.337848 kubelet[2778]: I0913 00:05:18.336027 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:05:18.337848 kubelet[2778]: I0913 00:05:18.336071 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:05:18.337848 kubelet[2778]: I0913 00:05:18.336763 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:05:18.337848 kubelet[2778]: I0913 00:05:18.336889 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:05:18.344010 systemd[1]: var-lib-kubelet-pods-719ebe1a\x2d1227\x2d4299\x2dad6f\x2da89e9713df0c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:05:18.348135 kubelet[2778]: I0913 00:05:18.348060 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719ebe1a-1227-4299-ad6f-a89e9713df0c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:05:18.351029 kubelet[2778]: I0913 00:05:18.350958 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719ebe1a-1227-4299-ad6f-a89e9713df0c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:05:18.360478 kubelet[2778]: I0913 00:05:18.360404 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719ebe1a-1227-4299-ad6f-a89e9713df0c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:05:18.363209 kubelet[2778]: I0913 00:05:18.363130 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719ebe1a-1227-4299-ad6f-a89e9713df0c-kube-api-access-hk7rc" (OuterVolumeSpecName: "kube-api-access-hk7rc") pod "719ebe1a-1227-4299-ad6f-a89e9713df0c" (UID: "719ebe1a-1227-4299-ad6f-a89e9713df0c"). InnerVolumeSpecName "kube-api-access-hk7rc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:05:18.431682 kubelet[2778]: I0913 00:05:18.431607 2778 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-etc-cni-netd\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.431682 kubelet[2778]: I0913 00:05:18.431671 2778 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-host-proc-sys-net\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.431985 kubelet[2778]: I0913 00:05:18.431757 2778 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hk7rc\" (UniqueName: \"kubernetes.io/projected/719ebe1a-1227-4299-ad6f-a89e9713df0c-kube-api-access-hk7rc\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.431985 kubelet[2778]: I0913 00:05:18.431786 2778 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-hostproc\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.431985 kubelet[2778]: I0913 00:05:18.431870 2778 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-cni-path\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.431985 kubelet[2778]: I0913 00:05:18.431897 2778 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-lib-modules\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.431985 kubelet[2778]: I0913 00:05:18.431921 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-cilium-run\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.431985 kubelet[2778]: I0913 00:05:18.431959 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/719ebe1a-1227-4299-ad6f-a89e9713df0c-cilium-config-path\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.434898 kubelet[2778]: I0913 00:05:18.432019 2778 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-host-proc-sys-kernel\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.434898 kubelet[2778]: I0913 00:05:18.432043 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-cilium-cgroup\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.434898 kubelet[2778]: I0913 00:05:18.432064 2778 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-xtables-lock\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.434898 kubelet[2778]: I0913 00:05:18.432087 2778 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/719ebe1a-1227-4299-ad6f-a89e9713df0c-bpf-maps\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.434898 kubelet[2778]: I0913 00:05:18.432108 2778 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/719ebe1a-1227-4299-ad6f-a89e9713df0c-clustermesh-secrets\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.434898 kubelet[2778]: I0913 00:05:18.432132 2778 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/719ebe1a-1227-4299-ad6f-a89e9713df0c-hubble-tls\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:05:18.879993 systemd[1]: Removed slice kubepods-burstable-pod719ebe1a_1227_4299_ad6f_a89e9713df0c.slice. Sep 13 00:05:18.896757 systemd[1]: var-lib-kubelet-pods-719ebe1a\x2d1227\x2d4299\x2dad6f\x2da89e9713df0c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhk7rc.mount: Deactivated successfully. Sep 13 00:05:18.897052 systemd[1]: var-lib-kubelet-pods-719ebe1a\x2d1227\x2d4299\x2dad6f\x2da89e9713df0c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:05:19.038253 env[1810]: time="2025-09-13T00:05:19.038165514Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:19.041995 env[1810]: time="2025-09-13T00:05:19.041923337Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:19.045957 env[1810]: time="2025-09-13T00:05:19.045893676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:19.048666 env[1810]: time="2025-09-13T00:05:19.047208777Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 13 00:05:19.057444 env[1810]: time="2025-09-13T00:05:19.057336997Z" level=info msg="CreateContainer within sandbox \"1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:05:19.065024 kubelet[2778]: I0913 00:05:19.064962 2778 scope.go:117] "RemoveContainer" containerID="dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed" Sep 13 00:05:19.078526 env[1810]: time="2025-09-13T00:05:19.076851543Z" level=info msg="RemoveContainer for \"dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed\"" Sep 13 00:05:19.088931 env[1810]: time="2025-09-13T00:05:19.087237642Z" level=info msg="RemoveContainer for \"dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed\" returns successfully" Sep 13 00:05:19.102369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1547136442.mount: Deactivated successfully. 
Sep 13 00:05:19.123888 env[1810]: time="2025-09-13T00:05:19.123072577Z" level=info msg="CreateContainer within sandbox \"1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\"" Sep 13 00:05:19.125150 env[1810]: time="2025-09-13T00:05:19.125085672Z" level=info msg="StartContainer for \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\"" Sep 13 00:05:19.187509 systemd[1]: Started cri-containerd-5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1.scope. Sep 13 00:05:19.221659 systemd[1]: Created slice kubepods-burstable-podaf7df171_0caa_4e71_ac66_b9eb231a4ac5.slice. Sep 13 00:05:19.325975 env[1810]: time="2025-09-13T00:05:19.325788706Z" level=info msg="StartContainer for \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\" returns successfully" Sep 13 00:05:19.341116 kubelet[2778]: I0913 00:05:19.341046 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-xtables-lock\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" Sep 13 00:05:19.341733 kubelet[2778]: I0913 00:05:19.341150 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af7df171-0caa-4e71-ac66-b9eb231a4ac5-hubble-tls\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" Sep 13 00:05:19.341733 kubelet[2778]: I0913 00:05:19.341227 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-bpf-maps\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" Sep 13 00:05:19.341733 kubelet[2778]: I0913 00:05:19.341271 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cilium-run\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" Sep 13 00:05:19.341733 kubelet[2778]: I0913 00:05:19.341345 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-etc-cni-netd\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" Sep 13 00:05:19.341733 kubelet[2778]: I0913 00:05:19.341418 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fznq\" (UniqueName: \"kubernetes.io/projected/af7df171-0caa-4e71-ac66-b9eb231a4ac5-kube-api-access-9fznq\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" Sep 13 00:05:19.341733 kubelet[2778]: I0913 00:05:19.341488 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-lib-modules\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" 
Sep 13 00:05:19.341733 kubelet[2778]: I0913 00:05:19.341535 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af7df171-0caa-4e71-ac66-b9eb231a4ac5-clustermesh-secrets\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" Sep 13 00:05:19.341733 kubelet[2778]: I0913 00:05:19.341603 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-host-proc-sys-kernel\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" Sep 13 00:05:19.341733 kubelet[2778]: I0913 00:05:19.341676 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cilium-cgroup\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" Sep 13 00:05:19.342388 kubelet[2778]: I0913 00:05:19.341723 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cni-path\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" Sep 13 00:05:19.342388 kubelet[2778]: I0913 00:05:19.341790 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cilium-config-path\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" Sep 13 00:05:19.342388 kubelet[2778]: I0913 00:05:19.341908 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-host-proc-sys-net\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" Sep 13 00:05:19.342388 kubelet[2778]: I0913 00:05:19.341980 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-hostproc\") pod \"cilium-6w8lh\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " pod="kube-system/cilium-6w8lh" Sep 13 00:05:19.532210 env[1810]: time="2025-09-13T00:05:19.531289049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6w8lh,Uid:af7df171-0caa-4e71-ac66-b9eb231a4ac5,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:19.565351 env[1810]: time="2025-09-13T00:05:19.565180602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:19.565774 env[1810]: time="2025-09-13T00:05:19.565694296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:19.566079 env[1810]: time="2025-09-13T00:05:19.565996206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:19.569010 env[1810]: time="2025-09-13T00:05:19.568789012Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923 pid=3253 runtime=io.containerd.runc.v2 Sep 13 00:05:19.593389 systemd[1]: Started cri-containerd-c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923.scope. Sep 13 00:05:19.705811 env[1810]: time="2025-09-13T00:05:19.705740628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6w8lh,Uid:af7df171-0caa-4e71-ac66-b9eb231a4ac5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\"" Sep 13 00:05:19.716291 env[1810]: time="2025-09-13T00:05:19.716186180Z" level=info msg="CreateContainer within sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:05:19.741330 env[1810]: time="2025-09-13T00:05:19.741260066Z" level=info msg="CreateContainer within sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458\"" Sep 13 00:05:19.742846 env[1810]: time="2025-09-13T00:05:19.742741466Z" level=info msg="StartContainer for \"76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458\"" Sep 13 00:05:19.795072 systemd[1]: Started cri-containerd-76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458.scope. Sep 13 00:05:19.979417 env[1810]: time="2025-09-13T00:05:19.979333872Z" level=info msg="StartContainer for \"76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458\" returns successfully" Sep 13 00:05:20.063602 systemd[1]: cri-containerd-76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458.scope: Deactivated successfully. Sep 13 00:05:20.144985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458-rootfs.mount: Deactivated successfully. 
Sep 13 00:05:20.170147 kubelet[2778]: W0913 00:05:20.169994 2778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod719ebe1a_1227_4299_ad6f_a89e9713df0c.slice/cri-containerd-dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed.scope WatchSource:0}: container "dae275da9f77195a93c1cfc349425646382c12b8ce2fb09d8670f67359bfc3ed" in namespace "k8s.io": not found Sep 13 00:05:20.222232 kubelet[2778]: I0913 00:05:20.222084 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-gqkxq" podStartSLOduration=2.186502155 podStartE2EDuration="17.222057266s" podCreationTimestamp="2025-09-13 00:05:03 +0000 UTC" firstStartedPulling="2025-09-13 00:05:04.014690477 +0000 UTC m=+5.539049437" lastFinishedPulling="2025-09-13 00:05:19.050245588 +0000 UTC m=+20.574604548" observedRunningTime="2025-09-13 00:05:20.11978225 +0000 UTC m=+21.644141222" watchObservedRunningTime="2025-09-13 00:05:20.222057266 +0000 UTC m=+21.746416310" Sep 13 00:05:20.241810 env[1810]: time="2025-09-13T00:05:20.241720962Z" level=info msg="shim disconnected" id=76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458 Sep 13 00:05:20.242103 env[1810]: time="2025-09-13T00:05:20.241802185Z" level=warning msg="cleaning up after shim disconnected" id=76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458 namespace=k8s.io Sep 13 00:05:20.242103 env[1810]: time="2025-09-13T00:05:20.241863834Z" level=info msg="cleaning up dead shim" Sep 13 00:05:20.253966 env[1810]: time="2025-09-13T00:05:20.250473669Z" level=error msg="collecting metrics for 76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458" error="ttrpc: closed: unknown" Sep 13 00:05:20.277107 env[1810]: time="2025-09-13T00:05:20.277019806Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:05:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3335 runtime=io.containerd.runc.v2\n" Sep 13 00:05:20.869394 kubelet[2778]: I0913 00:05:20.869276 2778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="719ebe1a-1227-4299-ad6f-a89e9713df0c" path="/var/lib/kubelet/pods/719ebe1a-1227-4299-ad6f-a89e9713df0c/volumes" Sep 13 00:05:21.115271 env[1810]: time="2025-09-13T00:05:21.115167953Z" level=info msg="CreateContainer within sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:05:21.161347 env[1810]: time="2025-09-13T00:05:21.161250045Z" level=info msg="CreateContainer within sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e\"" Sep 13 00:05:21.162318 env[1810]: time="2025-09-13T00:05:21.162234296Z" level=info msg="StartContainer for \"736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e\"" Sep 13 00:05:21.164169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784782073.mount: Deactivated successfully. Sep 13 00:05:21.242300 systemd[1]: Started cri-containerd-736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e.scope. 
Sep 13 00:05:21.406655 env[1810]: time="2025-09-13T00:05:21.406576747Z" level=info msg="StartContainer for \"736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e\" returns successfully" Sep 13 00:05:21.463002 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:05:21.464311 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:05:21.464650 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:05:21.469724 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:05:21.480390 systemd[1]: cri-containerd-736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e.scope: Deactivated successfully. Sep 13 00:05:21.510774 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:05:21.530569 env[1810]: time="2025-09-13T00:05:21.530499839Z" level=info msg="shim disconnected" id=736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e Sep 13 00:05:21.531100 env[1810]: time="2025-09-13T00:05:21.531060238Z" level=warning msg="cleaning up after shim disconnected" id=736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e namespace=k8s.io Sep 13 00:05:21.531242 env[1810]: time="2025-09-13T00:05:21.531213419Z" level=info msg="cleaning up dead shim" Sep 13 00:05:21.548222 env[1810]: time="2025-09-13T00:05:21.548157602Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:05:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3401 runtime=io.containerd.runc.v2\n" Sep 13 00:05:22.133063 env[1810]: time="2025-09-13T00:05:22.132968306Z" level=info msg="CreateContainer within sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:05:22.148563 systemd[1]: run-containerd-runc-k8s.io-736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e-runc.YwFTub.mount: Deactivated successfully. Sep 13 00:05:22.148751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e-rootfs.mount: Deactivated successfully. Sep 13 00:05:22.188616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657723259.mount: Deactivated successfully. Sep 13 00:05:22.196543 env[1810]: time="2025-09-13T00:05:22.196392979Z" level=info msg="CreateContainer within sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18\"" Sep 13 00:05:22.199508 env[1810]: time="2025-09-13T00:05:22.199432577Z" level=info msg="StartContainer for \"bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18\"" Sep 13 00:05:22.245225 systemd[1]: Started cri-containerd-bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18.scope. Sep 13 00:05:22.341567 env[1810]: time="2025-09-13T00:05:22.341481219Z" level=info msg="StartContainer for \"bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18\" returns successfully" Sep 13 00:05:22.352448 systemd[1]: cri-containerd-bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18.scope: Deactivated successfully. 
Sep 13 00:05:22.417447 env[1810]: time="2025-09-13T00:05:22.417271660Z" level=info msg="shim disconnected" id=bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18 Sep 13 00:05:22.417447 env[1810]: time="2025-09-13T00:05:22.417344134Z" level=warning msg="cleaning up after shim disconnected" id=bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18 namespace=k8s.io Sep 13 00:05:22.417447 env[1810]: time="2025-09-13T00:05:22.417366455Z" level=info msg="cleaning up dead shim" Sep 13 00:05:22.444719 env[1810]: time="2025-09-13T00:05:22.444641553Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:05:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3457 runtime=io.containerd.runc.v2\n" Sep 13 00:05:23.130973 env[1810]: time="2025-09-13T00:05:23.130352722Z" level=info msg="CreateContainer within sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:05:23.168379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585170629.mount: Deactivated successfully. Sep 13 00:05:23.174144 env[1810]: time="2025-09-13T00:05:23.174075302Z" level=info msg="CreateContainer within sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045\"" Sep 13 00:05:23.176769 env[1810]: time="2025-09-13T00:05:23.175790396Z" level=info msg="StartContainer for \"bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045\"" Sep 13 00:05:23.223324 systemd[1]: Started cri-containerd-bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045.scope. Sep 13 00:05:23.305293 systemd[1]: cri-containerd-bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045.scope: Deactivated successfully. 
Sep 13 00:05:23.309104 env[1810]: time="2025-09-13T00:05:23.308953753Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf7df171_0caa_4e71_ac66_b9eb231a4ac5.slice/cri-containerd-bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045.scope/memory.events\": no such file or directory" Sep 13 00:05:23.314450 env[1810]: time="2025-09-13T00:05:23.313632543Z" level=info msg="StartContainer for \"bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045\" returns successfully" Sep 13 00:05:23.355696 env[1810]: time="2025-09-13T00:05:23.355620313Z" level=info msg="shim disconnected" id=bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045 Sep 13 00:05:23.356021 env[1810]: time="2025-09-13T00:05:23.355696723Z" level=warning msg="cleaning up after shim disconnected" id=bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045 namespace=k8s.io Sep 13 00:05:23.356021 env[1810]: time="2025-09-13T00:05:23.355720665Z" level=info msg="cleaning up dead shim" Sep 13 00:05:23.372420 env[1810]: time="2025-09-13T00:05:23.372346735Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:05:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3514 runtime=io.containerd.runc.v2\n" Sep 13 00:05:24.142488 env[1810]: time="2025-09-13T00:05:24.142392275Z" level=info msg="CreateContainer within sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:05:24.148910 systemd[1]: run-containerd-runc-k8s.io-bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045-runc.LKRJLo.mount: Deactivated successfully. Sep 13 00:05:24.149102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045-rootfs.mount: Deactivated successfully. Sep 13 00:05:24.197140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3715862688.mount: Deactivated successfully. Sep 13 00:05:24.199431 env[1810]: time="2025-09-13T00:05:24.197651884Z" level=info msg="CreateContainer within sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\"" Sep 13 00:05:24.203095 env[1810]: time="2025-09-13T00:05:24.203032322Z" level=info msg="StartContainer for \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\"" Sep 13 00:05:24.264683 systemd[1]: Started cri-containerd-80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed.scope. Sep 13 00:05:24.369554 env[1810]: time="2025-09-13T00:05:24.369482721Z" level=info msg="StartContainer for \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\" returns successfully" Sep 13 00:05:24.596600 kubelet[2778]: I0913 00:05:24.596261 2778 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:05:24.659232 systemd[1]: Created slice kubepods-burstable-podce8764b9_98fa_49bb_8ed6_882c8ece4996.slice. Sep 13 00:05:24.680811 systemd[1]: Created slice kubepods-burstable-podd18892ca_da90_4d4b_9ac6_1eb84901feaf.slice. Sep 13 00:05:24.777887 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Sep 13 00:05:24.794591 kubelet[2778]: I0913 00:05:24.794529 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzr9d\" (UniqueName: \"kubernetes.io/projected/ce8764b9-98fa-49bb-8ed6-882c8ece4996-kube-api-access-vzr9d\") pod \"coredns-674b8bbfcf-xd2mv\" (UID: \"ce8764b9-98fa-49bb-8ed6-882c8ece4996\") " pod="kube-system/coredns-674b8bbfcf-xd2mv" Sep 13 00:05:24.794960 kubelet[2778]: I0913 00:05:24.794924 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5rt4\" (UniqueName: \"kubernetes.io/projected/d18892ca-da90-4d4b-9ac6-1eb84901feaf-kube-api-access-v5rt4\") pod \"coredns-674b8bbfcf-5w79s\" (UID: \"d18892ca-da90-4d4b-9ac6-1eb84901feaf\") " pod="kube-system/coredns-674b8bbfcf-5w79s" Sep 13 00:05:24.795236 kubelet[2778]: I0913 00:05:24.795198 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d18892ca-da90-4d4b-9ac6-1eb84901feaf-config-volume\") pod \"coredns-674b8bbfcf-5w79s\" (UID: \"d18892ca-da90-4d4b-9ac6-1eb84901feaf\") " pod="kube-system/coredns-674b8bbfcf-5w79s" Sep 13 00:05:24.795450 kubelet[2778]: I0913 00:05:24.795417 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce8764b9-98fa-49bb-8ed6-882c8ece4996-config-volume\") pod \"coredns-674b8bbfcf-xd2mv\" (UID: \"ce8764b9-98fa-49bb-8ed6-882c8ece4996\") " pod="kube-system/coredns-674b8bbfcf-xd2mv" Sep 13 00:05:24.973298 env[1810]: time="2025-09-13T00:05:24.973205814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xd2mv,Uid:ce8764b9-98fa-49bb-8ed6-882c8ece4996,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:24.989559 env[1810]: time="2025-09-13T00:05:24.989473956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5w79s,Uid:d18892ca-da90-4d4b-9ac6-1eb84901feaf,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:25.207186 kubelet[2778]: I0913 00:05:25.207084 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6w8lh" podStartSLOduration=6.207060471 podStartE2EDuration="6.207060471s" podCreationTimestamp="2025-09-13 00:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:25.206231928 +0000 UTC m=+26.730590912" watchObservedRunningTime="2025-09-13 00:05:25.207060471 +0000 UTC m=+26.731419479" Sep 13 00:05:25.723868 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 13 00:05:27.580597 systemd-networkd[1527]: cilium_host: Link UP Sep 13 00:05:27.584964 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 00:05:27.585142 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:05:27.585171 systemd-networkd[1527]: cilium_net: Link UP Sep 13 00:05:27.587694 systemd-networkd[1527]: cilium_net: Gained carrier Sep 13 00:05:27.589284 systemd-networkd[1527]: cilium_host: Gained carrier Sep 13 00:05:27.592608 (udev-worker)[3642]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:05:27.595171 (udev-worker)[3701]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:05:27.802496 (udev-worker)[3717]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:05:27.810512 systemd-networkd[1527]: cilium_vxlan: Link UP Sep 13 00:05:27.810535 systemd-networkd[1527]: cilium_vxlan: Gained carrier Sep 13 00:05:27.839463 systemd-networkd[1527]: cilium_host: Gained IPv6LL Sep 13 00:05:28.287149 systemd-networkd[1527]: cilium_net: Gained IPv6LL Sep 13 00:05:28.489892 kernel: NET: Registered PF_ALG protocol family Sep 13 00:05:28.670156 systemd[1]: run-containerd-runc-k8s.io-80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed-runc.d7OcMr.mount: Deactivated successfully. Sep 13 00:05:28.992108 systemd-networkd[1527]: cilium_vxlan: Gained IPv6LL Sep 13 00:05:30.107078 (udev-worker)[3716]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:05:30.107927 systemd-networkd[1527]: lxc_health: Link UP Sep 13 00:05:30.131099 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:05:30.127113 systemd-networkd[1527]: lxc_health: Gained carrier Sep 13 00:05:30.579711 systemd-networkd[1527]: lxcb69e9427a996: Link UP Sep 13 00:05:30.598949 kernel: eth0: renamed from tmp62bff Sep 13 00:05:30.609937 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb69e9427a996: link becomes ready Sep 13 00:05:30.610267 systemd-networkd[1527]: lxcb69e9427a996: Gained carrier Sep 13 00:05:30.626621 systemd-networkd[1527]: lxc6d5a02f3af31: Link UP Sep 13 00:05:30.638045 kernel: eth0: renamed from tmp81b84 Sep 13 00:05:30.647009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6d5a02f3af31: link becomes ready Sep 13 00:05:30.647338 systemd-networkd[1527]: lxc6d5a02f3af31: Gained carrier Sep 13 00:05:31.615253 systemd-networkd[1527]: lxc_health: Gained IPv6LL Sep 13 00:05:31.999254 systemd-networkd[1527]: lxcb69e9427a996: Gained IPv6LL Sep 13 00:05:32.383130 systemd-networkd[1527]: lxc6d5a02f3af31: Gained IPv6LL Sep 13 00:05:37.807552 systemd[1]: run-containerd-runc-k8s.io-80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed-runc.L9GOKN.mount: Deactivated successfully. Sep 13 00:05:40.110068 systemd[1]: run-containerd-runc-k8s.io-80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed-runc.auKddQ.mount: Deactivated successfully. Sep 13 00:05:40.287863 env[1810]: time="2025-09-13T00:05:40.287633667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:40.287863 env[1810]: time="2025-09-13T00:05:40.287741769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:40.288898 env[1810]: time="2025-09-13T00:05:40.287772718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:40.289696 env[1810]: time="2025-09-13T00:05:40.289545041Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62bff20c212dd207e17c445fc4e5702ae68ca2c135f43814c6c6b1a1af0a0bbf pid=4202 runtime=io.containerd.runc.v2 Sep 13 00:05:40.360030 systemd[1]: Started cri-containerd-62bff20c212dd207e17c445fc4e5702ae68ca2c135f43814c6c6b1a1af0a0bbf.scope. Sep 13 00:05:40.452418 env[1810]: time="2025-09-13T00:05:40.452246617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:40.452418 env[1810]: time="2025-09-13T00:05:40.452338301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:40.452418 env[1810]: time="2025-09-13T00:05:40.452366959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:40.453893 env[1810]: time="2025-09-13T00:05:40.453215714Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/81b8440661fe6d60de72d7adceded1550955996e0414abdcbef30c971c45e6a6 pid=4237 runtime=io.containerd.runc.v2 Sep 13 00:05:40.487244 systemd[1]: Started cri-containerd-81b8440661fe6d60de72d7adceded1550955996e0414abdcbef30c971c45e6a6.scope. Sep 13 00:05:40.656574 env[1810]: time="2025-09-13T00:05:40.656478198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xd2mv,Uid:ce8764b9-98fa-49bb-8ed6-882c8ece4996,Namespace:kube-system,Attempt:0,} returns sandbox id \"62bff20c212dd207e17c445fc4e5702ae68ca2c135f43814c6c6b1a1af0a0bbf\"" Sep 13 00:05:40.671974 env[1810]: time="2025-09-13T00:05:40.671887661Z" level=info msg="CreateContainer within sandbox \"62bff20c212dd207e17c445fc4e5702ae68ca2c135f43814c6c6b1a1af0a0bbf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:05:40.719020 env[1810]: time="2025-09-13T00:05:40.718930510Z" level=info msg="CreateContainer within sandbox \"62bff20c212dd207e17c445fc4e5702ae68ca2c135f43814c6c6b1a1af0a0bbf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e5eb64a103f870e2481cfb4d089ae2221a4df398ed585d08cb3924d4e2d47cb\"" Sep 13 00:05:40.729303 env[1810]: time="2025-09-13T00:05:40.729227723Z" level=info msg="StartContainer for \"9e5eb64a103f870e2481cfb4d089ae2221a4df398ed585d08cb3924d4e2d47cb\"" Sep 13 00:05:40.735073 env[1810]: time="2025-09-13T00:05:40.735008528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5w79s,Uid:d18892ca-da90-4d4b-9ac6-1eb84901feaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"81b8440661fe6d60de72d7adceded1550955996e0414abdcbef30c971c45e6a6\"" Sep 13 00:05:40.750962 env[1810]: time="2025-09-13T00:05:40.750795519Z" level=info msg="CreateContainer within sandbox \"81b8440661fe6d60de72d7adceded1550955996e0414abdcbef30c971c45e6a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:05:40.797072 systemd[1]: Started cri-containerd-9e5eb64a103f870e2481cfb4d089ae2221a4df398ed585d08cb3924d4e2d47cb.scope. Sep 13 00:05:40.805800 env[1810]: time="2025-09-13T00:05:40.805720049Z" level=info msg="CreateContainer within sandbox \"81b8440661fe6d60de72d7adceded1550955996e0414abdcbef30c971c45e6a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75f19e167aad03ab13235b4060c7227ce6b199ff941149d58152467d2dcd7f9f\"" Sep 13 00:05:40.808166 env[1810]: time="2025-09-13T00:05:40.808092967Z" level=info msg="StartContainer for \"75f19e167aad03ab13235b4060c7227ce6b199ff941149d58152467d2dcd7f9f\"" Sep 13 00:05:40.891704 systemd[1]: Started cri-containerd-75f19e167aad03ab13235b4060c7227ce6b199ff941149d58152467d2dcd7f9f.scope. 
Sep 13 00:05:40.921362 env[1810]: time="2025-09-13T00:05:40.921216503Z" level=info msg="StartContainer for \"9e5eb64a103f870e2481cfb4d089ae2221a4df398ed585d08cb3924d4e2d47cb\" returns successfully" Sep 13 00:05:41.011953 env[1810]: time="2025-09-13T00:05:41.011883548Z" level=info msg="StartContainer for \"75f19e167aad03ab13235b4060c7227ce6b199ff941149d58152467d2dcd7f9f\" returns successfully" Sep 13 00:05:41.180700 kubelet[2778]: I0913 00:05:41.180209 2778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:05:41.284309 kubelet[2778]: I0913 00:05:41.284206 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xd2mv" podStartSLOduration=38.284180761 podStartE2EDuration="38.284180761s" podCreationTimestamp="2025-09-13 00:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:41.259855998 +0000 UTC m=+42.784214958" watchObservedRunningTime="2025-09-13 00:05:41.284180761 +0000 UTC m=+42.808539733" Sep 13 00:05:42.873283 sudo[2054]: pam_unix(sudo:session): session closed for user root Sep 13 00:05:42.896661 sshd[2051]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:42.903042 systemd-logind[1800]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:05:42.903482 systemd[1]: sshd@4-172.31.31.19:22-139.178.89.65:35076.service: Deactivated successfully. Sep 13 00:05:42.905386 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:05:42.906098 systemd[1]: session-5.scope: Consumed 11.601s CPU time. Sep 13 00:05:42.907310 systemd-logind[1800]: Removed session 5. Sep 13 00:05:58.702696 env[1810]: time="2025-09-13T00:05:58.702587768Z" level=info msg="StopPodSandbox for \"5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0\"" Sep 13 00:05:58.703458 env[1810]: time="2025-09-13T00:05:58.702854201Z" level=info msg="TearDown network for sandbox \"5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0\" successfully" Sep 13 00:05:58.703458 env[1810]: time="2025-09-13T00:05:58.702949245Z" level=info msg="StopPodSandbox for \"5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0\" returns successfully" Sep 13 00:05:58.704105 env[1810]: time="2025-09-13T00:05:58.704045830Z" level=info msg="RemovePodSandbox for \"5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0\"" Sep 13 00:05:58.704259 env[1810]: time="2025-09-13T00:05:58.704134825Z" level=info msg="Forcibly stopping sandbox \"5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0\"" Sep 13 00:05:58.704408 env[1810]: time="2025-09-13T00:05:58.704340908Z" level=info msg="TearDown network for sandbox \"5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0\" successfully" Sep 13 00:05:58.716032 env[1810]: time="2025-09-13T00:05:58.715925742Z" level=info msg="RemovePodSandbox \"5a084e5be36051de47d7e7a7c4f1f406f180509635d555fca709293d3f6e48e0\" returns successfully" Sep 13 00:06:23.989627 systemd[1]: Started sshd@5-172.31.31.19:22-139.178.89.65:54860.service. Sep 13 00:06:24.167490 sshd[4401]: Accepted publickey for core from 139.178.89.65 port 54860 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:24.170554 sshd[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:24.181199 systemd-logind[1800]: New session 6 of user core. Sep 13 00:06:24.181277 systemd[1]: Started session-6.scope. 
Sep 13 00:06:24.460648 sshd[4401]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:24.466870 systemd-logind[1800]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:06:24.467458 systemd[1]: sshd@5-172.31.31.19:22-139.178.89.65:54860.service: Deactivated successfully. Sep 13 00:06:24.468807 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:06:24.471185 systemd-logind[1800]: Removed session 6. Sep 13 00:06:29.492252 systemd[1]: Started sshd@6-172.31.31.19:22-139.178.89.65:54864.service. Sep 13 00:06:29.669132 sshd[4414]: Accepted publickey for core from 139.178.89.65 port 54864 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:29.671890 sshd[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:29.681009 systemd[1]: Started session-7.scope. Sep 13 00:06:29.682220 systemd-logind[1800]: New session 7 of user core. Sep 13 00:06:29.926773 sshd[4414]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:29.933809 systemd[1]: sshd@6-172.31.31.19:22-139.178.89.65:54864.service: Deactivated successfully. Sep 13 00:06:29.935202 systemd-logind[1800]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:06:29.935385 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:06:29.937998 systemd-logind[1800]: Removed session 7. Sep 13 00:06:34.958693 systemd[1]: Started sshd@7-172.31.31.19:22-139.178.89.65:46314.service. Sep 13 00:06:35.144271 sshd[4429]: Accepted publickey for core from 139.178.89.65 port 46314 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:35.147868 sshd[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:35.161001 systemd[1]: Started session-8.scope. Sep 13 00:06:35.162222 systemd-logind[1800]: New session 8 of user core. Sep 13 00:06:35.420203 sshd[4429]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:35.426047 systemd[1]: sshd@7-172.31.31.19:22-139.178.89.65:46314.service: Deactivated successfully. Sep 13 00:06:35.427464 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:06:35.428731 systemd-logind[1800]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:06:35.431569 systemd-logind[1800]: Removed session 8. Sep 13 00:06:40.452335 systemd[1]: Started sshd@8-172.31.31.19:22-139.178.89.65:37604.service. Sep 13 00:06:40.632362 sshd[4441]: Accepted publickey for core from 139.178.89.65 port 37604 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:40.635767 sshd[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:40.644740 systemd-logind[1800]: New session 9 of user core. Sep 13 00:06:40.646596 systemd[1]: Started session-9.scope. Sep 13 00:06:40.925287 sshd[4441]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:40.930972 systemd[1]: sshd@8-172.31.31.19:22-139.178.89.65:37604.service: Deactivated successfully. Sep 13 00:06:40.932462 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:06:40.935156 systemd-logind[1800]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:06:40.939432 systemd-logind[1800]: Removed session 9. Sep 13 00:06:45.956579 systemd[1]: Started sshd@9-172.31.31.19:22-139.178.89.65:37618.service. 
Sep 13 00:06:46.129409 sshd[4454]: Accepted publickey for core from 139.178.89.65 port 37618 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:46.132674 sshd[4454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:46.141717 systemd[1]: Started session-10.scope. Sep 13 00:06:46.143096 systemd-logind[1800]: New session 10 of user core. Sep 13 00:06:46.392262 sshd[4454]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:46.398385 systemd-logind[1800]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:06:46.399422 systemd[1]: sshd@9-172.31.31.19:22-139.178.89.65:37618.service: Deactivated successfully. Sep 13 00:06:46.400939 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:06:46.403206 systemd-logind[1800]: Removed session 10. Sep 13 00:06:46.422643 systemd[1]: Started sshd@10-172.31.31.19:22-139.178.89.65:37630.service. Sep 13 00:06:46.601623 sshd[4466]: Accepted publickey for core from 139.178.89.65 port 37630 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:46.604338 sshd[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:46.613885 systemd[1]: Started session-11.scope. Sep 13 00:06:46.614677 systemd-logind[1800]: New session 11 of user core. Sep 13 00:06:46.964219 sshd[4466]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:46.971023 systemd[1]: sshd@10-172.31.31.19:22-139.178.89.65:37630.service: Deactivated successfully. Sep 13 00:06:46.972393 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:06:46.974873 systemd-logind[1800]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:06:46.978110 systemd-logind[1800]: Removed session 11. Sep 13 00:06:47.000199 systemd[1]: Started sshd@11-172.31.31.19:22-139.178.89.65:37646.service. Sep 13 00:06:47.182427 sshd[4476]: Accepted publickey for core from 139.178.89.65 port 37646 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:47.186181 sshd[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:47.194553 systemd-logind[1800]: New session 12 of user core. Sep 13 00:06:47.195623 systemd[1]: Started session-12.scope. Sep 13 00:06:47.448374 sshd[4476]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:47.453863 systemd[1]: sshd@11-172.31.31.19:22-139.178.89.65:37646.service: Deactivated successfully. Sep 13 00:06:47.455253 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:06:47.456637 systemd-logind[1800]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:06:47.459501 systemd-logind[1800]: Removed session 12. Sep 13 00:06:52.478426 systemd[1]: Started sshd@12-172.31.31.19:22-139.178.89.65:33108.service. Sep 13 00:06:52.654490 sshd[4488]: Accepted publickey for core from 139.178.89.65 port 33108 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:52.657925 sshd[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:52.667061 systemd[1]: Started session-13.scope. Sep 13 00:06:52.667958 systemd-logind[1800]: New session 13 of user core. Sep 13 00:06:52.939298 sshd[4488]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:52.945706 systemd[1]: sshd@12-172.31.31.19:22-139.178.89.65:33108.service: Deactivated successfully. Sep 13 00:06:52.947350 systemd[1]: session-13.scope: Deactivated successfully. 
Sep 13 00:06:52.949748 systemd-logind[1800]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:06:52.952193 systemd-logind[1800]: Removed session 13. Sep 13 00:06:57.969512 systemd[1]: Started sshd@13-172.31.31.19:22-139.178.89.65:33120.service. Sep 13 00:06:58.146751 sshd[4500]: Accepted publickey for core from 139.178.89.65 port 33120 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:58.150055 sshd[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:58.160743 systemd[1]: Started session-14.scope. Sep 13 00:06:58.161609 systemd-logind[1800]: New session 14 of user core. Sep 13 00:06:58.413335 sshd[4500]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:58.418643 systemd-logind[1800]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:06:58.419316 systemd[1]: sshd@13-172.31.31.19:22-139.178.89.65:33120.service: Deactivated successfully. Sep 13 00:06:58.420796 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:06:58.422949 systemd-logind[1800]: Removed session 14. Sep 13 00:07:03.444608 systemd[1]: Started sshd@14-172.31.31.19:22-139.178.89.65:33806.service. Sep 13 00:07:03.623445 sshd[4514]: Accepted publickey for core from 139.178.89.65 port 33806 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:03.626359 sshd[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:03.637846 systemd-logind[1800]: New session 15 of user core. Sep 13 00:07:03.639123 systemd[1]: Started session-15.scope. Sep 13 00:07:03.896089 sshd[4514]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:03.901787 systemd[1]: sshd@14-172.31.31.19:22-139.178.89.65:33806.service: Deactivated successfully. Sep 13 00:07:03.903329 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:07:03.905713 systemd-logind[1800]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:07:03.908452 systemd-logind[1800]: Removed session 15. Sep 13 00:07:03.924503 systemd[1]: Started sshd@15-172.31.31.19:22-139.178.89.65:33820.service. Sep 13 00:07:04.107789 sshd[4526]: Accepted publickey for core from 139.178.89.65 port 33820 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:04.110803 sshd[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:04.120983 systemd-logind[1800]: New session 16 of user core. Sep 13 00:07:04.122389 systemd[1]: Started session-16.scope. Sep 13 00:07:04.467090 sshd[4526]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:04.472134 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:07:04.474375 systemd[1]: sshd@15-172.31.31.19:22-139.178.89.65:33820.service: Deactivated successfully. Sep 13 00:07:04.477770 systemd-logind[1800]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:07:04.480711 systemd-logind[1800]: Removed session 16. Sep 13 00:07:04.499668 systemd[1]: Started sshd@16-172.31.31.19:22-139.178.89.65:33822.service. Sep 13 00:07:04.679572 sshd[4538]: Accepted publickey for core from 139.178.89.65 port 33822 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:04.682711 sshd[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:04.692996 systemd-logind[1800]: New session 17 of user core. Sep 13 00:07:04.694292 systemd[1]: Started session-17.scope. 
Sep 13 00:07:05.708166 sshd[4538]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:05.715603 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:07:05.717143 systemd[1]: sshd@16-172.31.31.19:22-139.178.89.65:33822.service: Deactivated successfully. Sep 13 00:07:05.719979 systemd-logind[1800]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:07:05.725234 systemd-logind[1800]: Removed session 17. Sep 13 00:07:05.745268 systemd[1]: Started sshd@17-172.31.31.19:22-139.178.89.65:33828.service. Sep 13 00:07:05.935401 sshd[4555]: Accepted publickey for core from 139.178.89.65 port 33828 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:05.938328 sshd[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:05.950032 systemd[1]: Started session-18.scope. Sep 13 00:07:05.952154 systemd-logind[1800]: New session 18 of user core. Sep 13 00:07:06.504167 sshd[4555]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:06.510643 systemd[1]: sshd@17-172.31.31.19:22-139.178.89.65:33828.service: Deactivated successfully. Sep 13 00:07:06.512522 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:07:06.514157 systemd-logind[1800]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:07:06.517019 systemd-logind[1800]: Removed session 18. Sep 13 00:07:06.536286 systemd[1]: Started sshd@18-172.31.31.19:22-139.178.89.65:33834.service. Sep 13 00:07:06.716518 sshd[4565]: Accepted publickey for core from 139.178.89.65 port 33834 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:06.720217 sshd[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:06.729890 systemd-logind[1800]: New session 19 of user core. Sep 13 00:07:06.731051 systemd[1]: Started session-19.scope. Sep 13 00:07:07.001353 sshd[4565]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:07.008988 systemd-logind[1800]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:07:07.009104 systemd[1]: sshd@18-172.31.31.19:22-139.178.89.65:33834.service: Deactivated successfully. Sep 13 00:07:07.010556 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:07:07.012330 systemd-logind[1800]: Removed session 19. Sep 13 00:07:12.032111 systemd[1]: Started sshd@19-172.31.31.19:22-139.178.89.65:58146.service. Sep 13 00:07:12.208532 sshd[4577]: Accepted publickey for core from 139.178.89.65 port 58146 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:12.211440 sshd[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:12.221948 systemd-logind[1800]: New session 20 of user core. Sep 13 00:07:12.222226 systemd[1]: Started session-20.scope. Sep 13 00:07:12.482538 sshd[4577]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:12.488990 systemd-logind[1800]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:07:12.489655 systemd[1]: sshd@19-172.31.31.19:22-139.178.89.65:58146.service: Deactivated successfully. Sep 13 00:07:12.491267 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:07:12.493679 systemd-logind[1800]: Removed session 20. Sep 13 00:07:17.512317 systemd[1]: Started sshd@20-172.31.31.19:22-139.178.89.65:58154.service. 
Sep 13 00:07:17.687737 sshd[4592]: Accepted publickey for core from 139.178.89.65 port 58154 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:17.690462 sshd[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:17.699696 systemd-logind[1800]: New session 21 of user core. Sep 13 00:07:17.700949 systemd[1]: Started session-21.scope. Sep 13 00:07:17.954670 sshd[4592]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:17.960651 systemd[1]: sshd@20-172.31.31.19:22-139.178.89.65:58154.service: Deactivated successfully. Sep 13 00:07:17.962121 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:07:17.964405 systemd-logind[1800]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:07:17.966854 systemd-logind[1800]: Removed session 21. Sep 13 00:07:22.984477 systemd[1]: Started sshd@21-172.31.31.19:22-139.178.89.65:34002.service. Sep 13 00:07:23.163738 sshd[4604]: Accepted publickey for core from 139.178.89.65 port 34002 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:23.166496 sshd[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:23.174958 systemd-logind[1800]: New session 22 of user core. Sep 13 00:07:23.177338 systemd[1]: Started session-22.scope. Sep 13 00:07:23.433067 sshd[4604]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:23.438437 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:07:23.439659 systemd[1]: sshd@21-172.31.31.19:22-139.178.89.65:34002.service: Deactivated successfully. Sep 13 00:07:23.441586 systemd-logind[1800]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:07:23.443973 systemd-logind[1800]: Removed session 22. Sep 13 00:07:23.463052 systemd[1]: Started sshd@22-172.31.31.19:22-139.178.89.65:34010.service. Sep 13 00:07:23.645634 sshd[4616]: Accepted publickey for core from 139.178.89.65 port 34010 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:23.650098 sshd[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:23.659991 systemd-logind[1800]: New session 23 of user core. Sep 13 00:07:23.661057 systemd[1]: Started session-23.scope. Sep 13 00:07:26.385806 kubelet[2778]: I0913 00:07:26.385688 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5w79s" podStartSLOduration=143.385634032 podStartE2EDuration="2m23.385634032s" podCreationTimestamp="2025-09-13 00:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:41.285926981 +0000 UTC m=+42.810285941" watchObservedRunningTime="2025-09-13 00:07:26.385634032 +0000 UTC m=+147.909993004" Sep 13 00:07:26.443619 systemd[1]: run-containerd-runc-k8s.io-80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed-runc.rMBWmn.mount: Deactivated successfully. Sep 13 00:07:26.454436 env[1810]: time="2025-09-13T00:07:26.454355834Z" level=info msg="StopContainer for \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\" with timeout 30 (s)" Sep 13 00:07:26.456316 env[1810]: time="2025-09-13T00:07:26.456256523Z" level=info msg="Stop container \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\" with signal terminated" Sep 13 00:07:26.498186 systemd[1]: cri-containerd-5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1.scope: Deactivated successfully. 
Sep 13 00:07:26.501867 env[1810]: time="2025-09-13T00:07:26.501695261Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:07:26.525564 env[1810]: time="2025-09-13T00:07:26.525470106Z" level=info msg="StopContainer for \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\" with timeout 2 (s)" Sep 13 00:07:26.526900 env[1810]: time="2025-09-13T00:07:26.526696112Z" level=info msg="Stop container \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\" with signal terminated" Sep 13 00:07:26.557307 systemd-networkd[1527]: lxc_health: Link DOWN Sep 13 00:07:26.557323 systemd-networkd[1527]: lxc_health: Lost carrier Sep 13 00:07:26.578675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1-rootfs.mount: Deactivated successfully. Sep 13 00:07:26.592617 systemd[1]: cri-containerd-80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed.scope: Deactivated successfully. Sep 13 00:07:26.593280 systemd[1]: cri-containerd-80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed.scope: Consumed 16.733s CPU time. Sep 13 00:07:26.615281 env[1810]: time="2025-09-13T00:07:26.615137606Z" level=info msg="shim disconnected" id=5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1 Sep 13 00:07:26.615281 env[1810]: time="2025-09-13T00:07:26.615212797Z" level=warning msg="cleaning up after shim disconnected" id=5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1 namespace=k8s.io Sep 13 00:07:26.615281 env[1810]: time="2025-09-13T00:07:26.615236185Z" level=info msg="cleaning up dead shim" Sep 13 00:07:26.633525 env[1810]: time="2025-09-13T00:07:26.633445700Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4677 runtime=io.containerd.runc.v2\n" Sep 13 00:07:26.643271 env[1810]: time="2025-09-13T00:07:26.637947130Z" level=info msg="StopContainer for \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\" returns successfully" Sep 13 00:07:26.643271 env[1810]: time="2025-09-13T00:07:26.639360311Z" level=info msg="StopPodSandbox for \"1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174\"" Sep 13 00:07:26.643271 env[1810]: time="2025-09-13T00:07:26.639452494Z" level=info msg="Container to stop \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:07:26.643443 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174-shm.mount: Deactivated successfully. Sep 13 00:07:26.658859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed-rootfs.mount: Deactivated successfully. Sep 13 00:07:26.667305 systemd[1]: cri-containerd-1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174.scope: Deactivated successfully. 
Sep 13 00:07:26.677809 env[1810]: time="2025-09-13T00:07:26.677728162Z" level=info msg="shim disconnected" id=80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed Sep 13 00:07:26.678198 env[1810]: time="2025-09-13T00:07:26.677807697Z" level=warning msg="cleaning up after shim disconnected" id=80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed namespace=k8s.io Sep 13 00:07:26.678198 env[1810]: time="2025-09-13T00:07:26.677877189Z" level=info msg="cleaning up dead shim" Sep 13 00:07:26.696621 env[1810]: time="2025-09-13T00:07:26.696549588Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4704 runtime=io.containerd.runc.v2\n" Sep 13 00:07:26.701129 env[1810]: time="2025-09-13T00:07:26.700948035Z" level=info msg="StopContainer for \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\" returns successfully" Sep 13 00:07:26.701868 env[1810]: time="2025-09-13T00:07:26.701721417Z" level=info msg="StopPodSandbox for \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\"" Sep 13 00:07:26.702055 env[1810]: time="2025-09-13T00:07:26.701920615Z" level=info msg="Container to stop \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:07:26.702055 env[1810]: time="2025-09-13T00:07:26.701957527Z" level=info msg="Container to stop \"bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:07:26.702055 env[1810]: time="2025-09-13T00:07:26.702000271Z" level=info msg="Container to stop \"76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:07:26.702055 env[1810]: time="2025-09-13T00:07:26.702035130Z" level=info msg="Container to stop \"736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:07:26.702322 env[1810]: time="2025-09-13T00:07:26.702063534Z" level=info msg="Container to stop \"bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:07:26.714784 systemd[1]: cri-containerd-c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923.scope: Deactivated successfully. 
Sep 13 00:07:26.738985 env[1810]: time="2025-09-13T00:07:26.738894173Z" level=info msg="shim disconnected" id=1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174 Sep 13 00:07:26.738985 env[1810]: time="2025-09-13T00:07:26.738973900Z" level=warning msg="cleaning up after shim disconnected" id=1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174 namespace=k8s.io Sep 13 00:07:26.739439 env[1810]: time="2025-09-13T00:07:26.738999364Z" level=info msg="cleaning up dead shim" Sep 13 00:07:26.763129 env[1810]: time="2025-09-13T00:07:26.763072310Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4743 runtime=io.containerd.runc.v2\n" Sep 13 00:07:26.764087 env[1810]: time="2025-09-13T00:07:26.764023435Z" level=info msg="TearDown network for sandbox \"1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174\" successfully" Sep 13 00:07:26.764347 env[1810]: time="2025-09-13T00:07:26.764291969Z" level=info msg="StopPodSandbox for \"1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174\" returns successfully" Sep 13 00:07:26.788187 env[1810]: time="2025-09-13T00:07:26.788110350Z" level=info msg="shim disconnected" id=c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923 Sep 13 00:07:26.788675 env[1810]: time="2025-09-13T00:07:26.788593466Z" level=warning msg="cleaning up after shim disconnected" id=c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923 namespace=k8s.io Sep 13 00:07:26.788981 env[1810]: time="2025-09-13T00:07:26.788902440Z" level=info msg="cleaning up dead shim" Sep 13 00:07:26.810031 env[1810]: time="2025-09-13T00:07:26.809931225Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4764 runtime=io.containerd.runc.v2\n" Sep 13 00:07:26.811025 env[1810]: time="2025-09-13T00:07:26.810947845Z" level=info msg="TearDown network for sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" successfully" Sep 13 00:07:26.811464 env[1810]: time="2025-09-13T00:07:26.811391038Z" level=info msg="StopPodSandbox for \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" returns successfully" Sep 13 00:07:26.910103 kubelet[2778]: I0913 00:07:26.908163 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91309607-f47f-4763-a1ba-2b17fd81c84c-cilium-config-path\") pod \"91309607-f47f-4763-a1ba-2b17fd81c84c\" (UID: \"91309607-f47f-4763-a1ba-2b17fd81c84c\") " Sep 13 00:07:26.910103 kubelet[2778]: I0913 00:07:26.908280 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvclj\" (UniqueName: \"kubernetes.io/projected/91309607-f47f-4763-a1ba-2b17fd81c84c-kube-api-access-hvclj\") pod \"91309607-f47f-4763-a1ba-2b17fd81c84c\" (UID: \"91309607-f47f-4763-a1ba-2b17fd81c84c\") " Sep 13 00:07:26.914121 kubelet[2778]: I0913 00:07:26.914027 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91309607-f47f-4763-a1ba-2b17fd81c84c-kube-api-access-hvclj" (OuterVolumeSpecName: "kube-api-access-hvclj") pod "91309607-f47f-4763-a1ba-2b17fd81c84c" (UID: "91309607-f47f-4763-a1ba-2b17fd81c84c"). InnerVolumeSpecName "kube-api-access-hvclj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:07:26.918366 kubelet[2778]: I0913 00:07:26.918172 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91309607-f47f-4763-a1ba-2b17fd81c84c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "91309607-f47f-4763-a1ba-2b17fd81c84c" (UID: "91309607-f47f-4763-a1ba-2b17fd81c84c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:07:27.008942 kubelet[2778]: I0913 00:07:27.008871 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-xtables-lock\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.009308 kubelet[2778]: I0913 00:07:27.009261 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cilium-run\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.009530 kubelet[2778]: I0913 00:07:27.009005 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:27.009530 kubelet[2778]: I0913 00:07:27.009332 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:27.009707 kubelet[2778]: I0913 00:07:27.009636 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:27.009908 kubelet[2778]: I0913 00:07:27.009492 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-lib-modules\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.010215 kubelet[2778]: I0913 00:07:27.010173 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af7df171-0caa-4e71-ac66-b9eb231a4ac5-clustermesh-secrets\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.010500 kubelet[2778]: I0913 00:07:27.010438 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cilium-cgroup\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.010937 kubelet[2778]: I0913 00:07:27.010891 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-bpf-maps\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.011280 kubelet[2778]: I0913 00:07:27.010730 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:27.011779 kubelet[2778]: I0913 00:07:27.011731 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:27.012569 kubelet[2778]: I0913 00:07:27.011174 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-etc-cni-netd\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.013946 kubelet[2778]: I0913 00:07:27.012873 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:27.014444 kubelet[2778]: I0913 00:07:27.014311 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-host-proc-sys-kernel\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.014776 kubelet[2778]: I0913 00:07:27.014396 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:27.015220 kubelet[2778]: I0913 00:07:27.015148 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:27.015510 kubelet[2778]: I0913 00:07:27.015061 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-host-proc-sys-net\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.016525 kubelet[2778]: I0913 00:07:27.016443 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af7df171-0caa-4e71-ac66-b9eb231a4ac5-hubble-tls\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.017155 kubelet[2778]: I0913 00:07:27.017081 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fznq\" (UniqueName: \"kubernetes.io/projected/af7df171-0caa-4e71-ac66-b9eb231a4ac5-kube-api-access-9fznq\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.017732 kubelet[2778]: I0913 00:07:27.017657 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cilium-config-path\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.022382 kubelet[2778]: I0913 00:07:27.022315 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-hostproc\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.022704 kubelet[2778]: I0913 00:07:27.022648 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cni-path\") pod \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\" (UID: \"af7df171-0caa-4e71-ac66-b9eb231a4ac5\") " Sep 13 00:07:27.028495 kubelet[2778]: I0913 00:07:27.022220 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/af7df171-0caa-4e71-ac66-b9eb231a4ac5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:07:27.028748 kubelet[2778]: I0913 00:07:27.022992 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cni-path" (OuterVolumeSpecName: "cni-path") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:27.028890 kubelet[2778]: I0913 00:07:27.023031 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-hostproc" (OuterVolumeSpecName: "hostproc") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:27.030132 kubelet[2778]: I0913 00:07:27.030057 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:07:27.032081 kubelet[2778]: I0913 00:07:27.032016 2778 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-xtables-lock\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.032350 kubelet[2778]: I0913 00:07:27.032301 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cilium-run\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.032890 kubelet[2778]: I0913 00:07:27.032733 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af7df171-0caa-4e71-ac66-b9eb231a4ac5-kube-api-access-9fznq" (OuterVolumeSpecName: "kube-api-access-9fznq") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "kube-api-access-9fznq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:07:27.033269 kubelet[2778]: I0913 00:07:27.033235 2778 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-lib-modules\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.033485 kubelet[2778]: I0913 00:07:27.033456 2778 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af7df171-0caa-4e71-ac66-b9eb231a4ac5-clustermesh-secrets\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.033685 kubelet[2778]: I0913 00:07:27.033660 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cilium-cgroup\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.034239 kubelet[2778]: I0913 00:07:27.033959 2778 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-bpf-maps\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.034549 kubelet[2778]: I0913 00:07:27.034497 2778 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-etc-cni-netd\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.034799 kubelet[2778]: I0913 00:07:27.034774 2778 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-host-proc-sys-kernel\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.035045 kubelet[2778]: I0913 00:07:27.035018 2778 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-host-proc-sys-net\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.035308 kubelet[2778]: I0913 00:07:27.035280 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91309607-f47f-4763-a1ba-2b17fd81c84c-cilium-config-path\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.035506 kubelet[2778]: I0913 00:07:27.035478 2778 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hvclj\" (UniqueName: \"kubernetes.io/projected/91309607-f47f-4763-a1ba-2b17fd81c84c-kube-api-access-hvclj\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.035709 kubelet[2778]: I0913 00:07:27.035678 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cilium-config-path\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.035915 kubelet[2778]: I0913 00:07:27.035890 2778 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-hostproc\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.036091 kubelet[2778]: I0913 00:07:27.036066 2778 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af7df171-0caa-4e71-ac66-b9eb231a4ac5-cni-path\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.039508 kubelet[2778]: I0913 00:07:27.039433 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/af7df171-0caa-4e71-ac66-b9eb231a4ac5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "af7df171-0caa-4e71-ac66-b9eb231a4ac5" (UID: "af7df171-0caa-4e71-ac66-b9eb231a4ac5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:07:27.137288 kubelet[2778]: I0913 00:07:27.137229 2778 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af7df171-0caa-4e71-ac66-b9eb231a4ac5-hubble-tls\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.137566 kubelet[2778]: I0913 00:07:27.137527 2778 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9fznq\" (UniqueName: \"kubernetes.io/projected/af7df171-0caa-4e71-ac66-b9eb231a4ac5-kube-api-access-9fznq\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:27.423763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923-rootfs.mount: Deactivated successfully. Sep 13 00:07:27.424329 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923-shm.mount: Deactivated successfully. Sep 13 00:07:27.424720 systemd[1]: var-lib-kubelet-pods-af7df171\x2d0caa\x2d4e71\x2dac66\x2db9eb231a4ac5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9fznq.mount: Deactivated successfully. Sep 13 00:07:27.425112 systemd[1]: var-lib-kubelet-pods-af7df171\x2d0caa\x2d4e71\x2dac66\x2db9eb231a4ac5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:07:27.425425 systemd[1]: var-lib-kubelet-pods-af7df171\x2d0caa\x2d4e71\x2dac66\x2db9eb231a4ac5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:07:27.425727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174-rootfs.mount: Deactivated successfully. Sep 13 00:07:27.426060 systemd[1]: var-lib-kubelet-pods-91309607\x2df47f\x2d4763\x2da1ba\x2d2b17fd81c84c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhvclj.mount: Deactivated successfully. Sep 13 00:07:27.499622 kubelet[2778]: I0913 00:07:27.499531 2778 scope.go:117] "RemoveContainer" containerID="5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1" Sep 13 00:07:27.506973 env[1810]: time="2025-09-13T00:07:27.506888395Z" level=info msg="RemoveContainer for \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\"" Sep 13 00:07:27.511243 systemd[1]: Removed slice kubepods-besteffort-pod91309607_f47f_4763_a1ba_2b17fd81c84c.slice. Sep 13 00:07:27.523537 env[1810]: time="2025-09-13T00:07:27.523446067Z" level=info msg="RemoveContainer for \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\" returns successfully" Sep 13 00:07:27.535216 systemd[1]: Removed slice kubepods-burstable-podaf7df171_0caa_4e71_ac66_b9eb231a4ac5.slice. 
Sep 13 00:07:27.536193 kubelet[2778]: I0913 00:07:27.535675 2778 scope.go:117] "RemoveContainer" containerID="5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1" Sep 13 00:07:27.536298 env[1810]: time="2025-09-13T00:07:27.536087999Z" level=error msg="ContainerStatus for \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\": not found" Sep 13 00:07:27.535461 systemd[1]: kubepods-burstable-podaf7df171_0caa_4e71_ac66_b9eb231a4ac5.slice: Consumed 17.043s CPU time. Sep 13 00:07:27.536500 kubelet[2778]: E0913 00:07:27.536468 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\": not found" containerID="5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1" Sep 13 00:07:27.536582 kubelet[2778]: I0913 00:07:27.536525 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1"} err="failed to get container status \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f03955716c89253a154933dfc3baa7df2a376e8acc73fe2ae8ce4a99f92e4d1\": not found" Sep 13 00:07:27.536656 kubelet[2778]: I0913 00:07:27.536589 2778 scope.go:117] "RemoveContainer" containerID="80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed" Sep 13 00:07:27.543075 env[1810]: time="2025-09-13T00:07:27.543013004Z" level=info msg="RemoveContainer for \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\"" Sep 13 00:07:27.550834 env[1810]: time="2025-09-13T00:07:27.550718184Z" level=info msg="RemoveContainer for \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\" returns successfully" Sep 13 00:07:27.552245 kubelet[2778]: I0913 00:07:27.551437 2778 scope.go:117] "RemoveContainer" containerID="bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045" Sep 13 00:07:27.556644 env[1810]: time="2025-09-13T00:07:27.556053225Z" level=info msg="RemoveContainer for \"bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045\"" Sep 13 00:07:27.573313 env[1810]: time="2025-09-13T00:07:27.573208648Z" level=info msg="RemoveContainer for \"bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045\" returns successfully" Sep 13 00:07:27.574136 kubelet[2778]: I0913 00:07:27.574049 2778 scope.go:117] "RemoveContainer" containerID="bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18" Sep 13 00:07:27.576479 env[1810]: time="2025-09-13T00:07:27.576420485Z" level=info msg="RemoveContainer for \"bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18\"" Sep 13 00:07:27.587656 env[1810]: time="2025-09-13T00:07:27.587579804Z" level=info msg="RemoveContainer for \"bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18\" returns successfully" Sep 13 00:07:27.589503 kubelet[2778]: I0913 00:07:27.589253 2778 scope.go:117] "RemoveContainer" containerID="736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e" Sep 13 00:07:27.592052 env[1810]: time="2025-09-13T00:07:27.591990264Z" level=info msg="RemoveContainer for \"736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e\"" Sep 13 
00:07:27.598074 env[1810]: time="2025-09-13T00:07:27.597986128Z" level=info msg="RemoveContainer for \"736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e\" returns successfully" Sep 13 00:07:27.598549 kubelet[2778]: I0913 00:07:27.598489 2778 scope.go:117] "RemoveContainer" containerID="76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458" Sep 13 00:07:27.601052 env[1810]: time="2025-09-13T00:07:27.600980862Z" level=info msg="RemoveContainer for \"76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458\"" Sep 13 00:07:27.607459 env[1810]: time="2025-09-13T00:07:27.607325780Z" level=info msg="RemoveContainer for \"76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458\" returns successfully" Sep 13 00:07:27.607957 kubelet[2778]: I0913 00:07:27.607897 2778 scope.go:117] "RemoveContainer" containerID="80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed" Sep 13 00:07:27.608441 env[1810]: time="2025-09-13T00:07:27.608333605Z" level=error msg="ContainerStatus for \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\": not found" Sep 13 00:07:27.608687 kubelet[2778]: E0913 00:07:27.608634 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\": not found" containerID="80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed" Sep 13 00:07:27.608774 kubelet[2778]: I0913 00:07:27.608690 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed"} err="failed to get container status \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\": rpc error: code = NotFound desc = an error occurred when try to find container \"80e373d8bb6e0fffb5fc2e48e8d9fb3be61f740440ed09599b5799499ac0aeed\": not found" Sep 13 00:07:27.608774 kubelet[2778]: I0913 00:07:27.608727 2778 scope.go:117] "RemoveContainer" containerID="bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045" Sep 13 00:07:27.609180 env[1810]: time="2025-09-13T00:07:27.609089575Z" level=error msg="ContainerStatus for \"bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045\": not found" Sep 13 00:07:27.609595 kubelet[2778]: E0913 00:07:27.609469 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045\": not found" containerID="bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045" Sep 13 00:07:27.609595 kubelet[2778]: I0913 00:07:27.609543 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045"} err="failed to get container status \"bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb7cf9a9590fb5b5ceacb1065c75afa13ea68ef003bbf7f3285b4befc2a18045\": not found" Sep 13 
00:07:27.609880 kubelet[2778]: I0913 00:07:27.609781 2778 scope.go:117] "RemoveContainer" containerID="bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18" Sep 13 00:07:27.610443 env[1810]: time="2025-09-13T00:07:27.610336702Z" level=error msg="ContainerStatus for \"bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18\": not found" Sep 13 00:07:27.610751 kubelet[2778]: E0913 00:07:27.610708 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18\": not found" containerID="bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18" Sep 13 00:07:27.610898 kubelet[2778]: I0913 00:07:27.610759 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18"} err="failed to get container status \"bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18\": rpc error: code = NotFound desc = an error occurred when try to find container \"bfc0768f7acf83de2a8928d39adc1b34327c09b854ed0b808ae92ff6260f6e18\": not found" Sep 13 00:07:27.610898 kubelet[2778]: I0913 00:07:27.610809 2778 scope.go:117] "RemoveContainer" containerID="736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e" Sep 13 00:07:27.611270 env[1810]: time="2025-09-13T00:07:27.611180584Z" level=error msg="ContainerStatus for \"736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e\": not found" Sep 13 00:07:27.611510 kubelet[2778]: E0913 00:07:27.611460 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e\": not found" containerID="736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e" Sep 13 00:07:27.611622 kubelet[2778]: I0913 00:07:27.611512 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e"} err="failed to get container status \"736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e\": rpc error: code = NotFound desc = an error occurred when try to find container \"736ca097df581f4cd2c8e1ec0b38f9a463770905fc1cce5e070aab67bb01ed6e\": not found" Sep 13 00:07:27.611622 kubelet[2778]: I0913 00:07:27.611547 2778 scope.go:117] "RemoveContainer" containerID="76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458" Sep 13 00:07:27.612000 env[1810]: time="2025-09-13T00:07:27.611903267Z" level=error msg="ContainerStatus for \"76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458\": not found" Sep 13 00:07:27.612509 kubelet[2778]: E0913 00:07:27.612370 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458\": not found" containerID="76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458" Sep 13 00:07:27.612509 kubelet[2778]: I0913 00:07:27.612466 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458"} err="failed to get container status \"76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458\": rpc error: code = NotFound desc = an error occurred when try to find container \"76e5e7a0fbe101bcca2a365711ac43bbed694921eb49675c35234fb54b1d9458\": not found" Sep 13 00:07:28.350529 sshd[4616]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:28.355946 systemd-logind[1800]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:07:28.357451 systemd[1]: sshd@22-172.31.31.19:22-139.178.89.65:34010.service: Deactivated successfully. Sep 13 00:07:28.358733 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:07:28.359126 systemd[1]: session-23.scope: Consumed 1.945s CPU time. Sep 13 00:07:28.360601 systemd-logind[1800]: Removed session 23. Sep 13 00:07:28.377877 systemd[1]: Started sshd@23-172.31.31.19:22-139.178.89.65:34012.service. Sep 13 00:07:28.554651 sshd[4782]: Accepted publickey for core from 139.178.89.65 port 34012 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:28.557969 sshd[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:28.567122 systemd-logind[1800]: New session 24 of user core. Sep 13 00:07:28.567147 systemd[1]: Started session-24.scope. Sep 13 00:07:28.866895 kubelet[2778]: I0913 00:07:28.866792 2778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91309607-f47f-4763-a1ba-2b17fd81c84c" path="/var/lib/kubelet/pods/91309607-f47f-4763-a1ba-2b17fd81c84c/volumes" Sep 13 00:07:28.867951 kubelet[2778]: I0913 00:07:28.867899 2778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af7df171-0caa-4e71-ac66-b9eb231a4ac5" path="/var/lib/kubelet/pods/af7df171-0caa-4e71-ac66-b9eb231a4ac5/volumes" Sep 13 00:07:29.064553 kubelet[2778]: E0913 00:07:29.064489 2778 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:07:30.140852 sshd[4782]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:30.146010 systemd[1]: sshd@23-172.31.31.19:22-139.178.89.65:34012.service: Deactivated successfully. Sep 13 00:07:30.147379 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:07:30.147675 systemd[1]: session-24.scope: Consumed 1.300s CPU time. Sep 13 00:07:30.150216 systemd-logind[1800]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:07:30.152376 systemd-logind[1800]: Removed session 24. Sep 13 00:07:30.169683 systemd[1]: Started sshd@24-172.31.31.19:22-139.178.89.65:33288.service. Sep 13 00:07:30.223900 systemd[1]: Created slice kubepods-burstable-podebea98d7_8a09_4593_bde4_f82132aa4e21.slice. 
Sep 13 00:07:30.258209 kubelet[2778]: I0913 00:07:30.258137 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-hostproc\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258209 kubelet[2778]: I0913 00:07:30.258217 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-host-proc-sys-kernel\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258893 kubelet[2778]: I0913 00:07:30.258263 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-xtables-lock\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258893 kubelet[2778]: I0913 00:07:30.258301 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ebea98d7-8a09-4593-bde4-f82132aa4e21-clustermesh-secrets\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258893 kubelet[2778]: I0913 00:07:30.258347 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-cgroup\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258893 kubelet[2778]: I0913 00:07:30.258395 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-cni-path\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258893 kubelet[2778]: I0913 00:07:30.258430 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-config-path\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258893 kubelet[2778]: I0913 00:07:30.258470 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-host-proc-sys-net\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258893 kubelet[2778]: I0913 00:07:30.258507 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-bpf-maps\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258893 kubelet[2778]: I0913 00:07:30.258541 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" 
(UniqueName: \"kubernetes.io/secret/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-ipsec-secrets\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258893 kubelet[2778]: I0913 00:07:30.258574 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ebea98d7-8a09-4593-bde4-f82132aa4e21-hubble-tls\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258893 kubelet[2778]: I0913 00:07:30.258612 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-etc-cni-netd\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258893 kubelet[2778]: I0913 00:07:30.258660 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nw82\" (UniqueName: \"kubernetes.io/projected/ebea98d7-8a09-4593-bde4-f82132aa4e21-kube-api-access-7nw82\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258893 kubelet[2778]: I0913 00:07:30.258708 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-run\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.258893 kubelet[2778]: I0913 00:07:30.258748 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-lib-modules\") pod \"cilium-52vjj\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " pod="kube-system/cilium-52vjj" Sep 13 00:07:30.349387 sshd[4793]: Accepted publickey for core from 139.178.89.65 port 33288 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:30.352060 sshd[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:30.361318 systemd[1]: Started session-25.scope. Sep 13 00:07:30.361944 systemd-logind[1800]: New session 25 of user core. Sep 13 00:07:30.530889 env[1810]: time="2025-09-13T00:07:30.530696091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-52vjj,Uid:ebea98d7-8a09-4593-bde4-f82132aa4e21,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:30.577734 env[1810]: time="2025-09-13T00:07:30.577558810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:30.578865 env[1810]: time="2025-09-13T00:07:30.578157222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:30.578865 env[1810]: time="2025-09-13T00:07:30.578352689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:30.581257 env[1810]: time="2025-09-13T00:07:30.579185195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92 pid=4813 runtime=io.containerd.runc.v2 Sep 13 00:07:30.629186 systemd[1]: Started cri-containerd-934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92.scope. Sep 13 00:07:30.696297 env[1810]: time="2025-09-13T00:07:30.696236429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-52vjj,Uid:ebea98d7-8a09-4593-bde4-f82132aa4e21,Namespace:kube-system,Attempt:0,} returns sandbox id \"934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92\"" Sep 13 00:07:30.710457 env[1810]: time="2025-09-13T00:07:30.710363132Z" level=info msg="CreateContainer within sandbox \"934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:07:30.733492 sshd[4793]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:30.740431 systemd[1]: sshd@24-172.31.31.19:22-139.178.89.65:33288.service: Deactivated successfully. Sep 13 00:07:30.742548 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:07:30.744734 systemd-logind[1800]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:07:30.746869 systemd-logind[1800]: Removed session 25. Sep 13 00:07:30.751612 env[1810]: time="2025-09-13T00:07:30.751508008Z" level=info msg="CreateContainer within sandbox \"934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4\"" Sep 13 00:07:30.756580 env[1810]: time="2025-09-13T00:07:30.756448424Z" level=info msg="StartContainer for \"298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4\"" Sep 13 00:07:30.768608 systemd[1]: Started sshd@25-172.31.31.19:22-139.178.89.65:33298.service. Sep 13 00:07:30.817564 systemd[1]: Started cri-containerd-298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4.scope. Sep 13 00:07:30.857082 systemd[1]: cri-containerd-298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4.scope: Deactivated successfully. 
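[Annotation] The records above show the CRI sandbox for cilium-52vjj being created in containerd's "k8s.io" namespace and the mount-cgroup container being prepared inside it. For readers tracing such containers by hand, here is a minimal sketch using the containerd Go client; the socket path is the conventional default and is an assumption, as is having the github.com/containerd/containerd module available:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed pods live in the "k8s.io" namespace, as the shim log shows.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		task, err := c.Task(ctx, nil)
		if err != nil {
			// No shim task: the container was created but never started,
			// or its task has already been reaped.
			fmt.Printf("%s: no task (%v)\n", c.ID(), err)
			continue
		}
		st, err := task.Status(ctx)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %s (pid %d)\n", c.ID(), st.Status, task.Pid())
	}
}
```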
Sep 13 00:07:30.886759 env[1810]: time="2025-09-13T00:07:30.886674983Z" level=info msg="shim disconnected" id=298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4 Sep 13 00:07:30.887278 env[1810]: time="2025-09-13T00:07:30.887226992Z" level=warning msg="cleaning up after shim disconnected" id=298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4 namespace=k8s.io Sep 13 00:07:30.887445 env[1810]: time="2025-09-13T00:07:30.887414406Z" level=info msg="cleaning up dead shim" Sep 13 00:07:30.905128 env[1810]: time="2025-09-13T00:07:30.905050391Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4875 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T00:07:30Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 13 00:07:30.906350 env[1810]: time="2025-09-13T00:07:30.906226144Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" Sep 13 00:07:30.908008 env[1810]: time="2025-09-13T00:07:30.907912421Z" level=error msg="Failed to pipe stdout of container \"298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4\"" error="reading from a closed fifo" Sep 13 00:07:30.911131 env[1810]: time="2025-09-13T00:07:30.911042576Z" level=error msg="Failed to pipe stderr of container \"298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4\"" error="reading from a closed fifo" Sep 13 00:07:30.917710 env[1810]: time="2025-09-13T00:07:30.917611538Z" level=error msg="StartContainer for \"298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 13 00:07:30.918844 kubelet[2778]: E0913 00:07:30.918248 2778 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4" Sep 13 00:07:30.918844 kubelet[2778]: E0913 00:07:30.918434 2778 kuberuntime_manager.go:1358] "Unhandled Error" err=< Sep 13 00:07:30.918844 kubelet[2778]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 13 00:07:30.918844 kubelet[2778]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 13 00:07:30.918844 kubelet[2778]: rm /hostbin/cilium-mount Sep 13 00:07:30.918844 kubelet[2778]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nw82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-52vjj_kube-system(ebea98d7-8a09-4593-bde4-f82132aa4e21): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 13 00:07:30.918844 kubelet[2778]: > logger="UnhandledError" Sep 13 00:07:30.920056 kubelet[2778]: E0913 00:07:30.919953 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-52vjj" podUID="ebea98d7-8a09-4593-bde4-f82132aa4e21" Sep 13 00:07:30.954951 sshd[4853]: Accepted publickey for core from 139.178.89.65 port 33298 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:30.958006 sshd[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:30.967422 systemd-logind[1800]: New session 26 of user core. Sep 13 00:07:30.968483 systemd[1]: Started session-26.scope. Sep 13 00:07:31.542365 env[1810]: time="2025-09-13T00:07:31.542303695Z" level=info msg="StopPodSandbox for \"934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92\"" Sep 13 00:07:31.548437 env[1810]: time="2025-09-13T00:07:31.542405514Z" level=info msg="Container to stop \"298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:07:31.546034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92-shm.mount: Deactivated successfully. 
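[Annotation] The StartContainer failure above is runc writing the SELinux key-creation label (the container spec requests type spc_t) into /proc/self/attr/keycreate and receiving EINVAL, which is what happens on a kernel where SELinux is not enabled. A stdlib-only sketch that performs the same write; the exact label string below is illustrative, not taken from runc:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// runc performs an equivalent write during container init; on a kernel
	// without SELinux key labeling this fails with EINVAL, matching
	// "write /proc/self/attr/keycreate: invalid argument" in the log.
	f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	if _, err := f.WriteString("system_u:system_r:spc_t:s0"); err != nil {
		fmt.Println("write:", err) // expected here: invalid argument
		return
	}
	fmt.Println("keycreate label accepted")
}
```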
Sep 13 00:07:31.563290 systemd[1]: cri-containerd-934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92.scope: Deactivated successfully. Sep 13 00:07:31.619859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92-rootfs.mount: Deactivated successfully. Sep 13 00:07:31.638128 env[1810]: time="2025-09-13T00:07:31.638057490Z" level=info msg="shim disconnected" id=934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92 Sep 13 00:07:31.638403 env[1810]: time="2025-09-13T00:07:31.638129010Z" level=warning msg="cleaning up after shim disconnected" id=934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92 namespace=k8s.io Sep 13 00:07:31.638403 env[1810]: time="2025-09-13T00:07:31.638151546Z" level=info msg="cleaning up dead shim" Sep 13 00:07:31.653239 env[1810]: time="2025-09-13T00:07:31.653154956Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4912 runtime=io.containerd.runc.v2\n" Sep 13 00:07:31.653794 env[1810]: time="2025-09-13T00:07:31.653707648Z" level=info msg="TearDown network for sandbox \"934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92\" successfully" Sep 13 00:07:31.653794 env[1810]: time="2025-09-13T00:07:31.653762200Z" level=info msg="StopPodSandbox for \"934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92\" returns successfully" Sep 13 00:07:31.773993 kubelet[2778]: I0913 00:07:31.773937 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-config-path\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774004 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-bpf-maps\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774044 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ebea98d7-8a09-4593-bde4-f82132aa4e21-hubble-tls\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774085 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-lib-modules\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774123 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-cni-path\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774163 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-host-proc-sys-net\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") 
" Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774197 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-host-proc-sys-kernel\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774232 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-hostproc\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774264 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-xtables-lock\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774300 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nw82\" (UniqueName: \"kubernetes.io/projected/ebea98d7-8a09-4593-bde4-f82132aa4e21-kube-api-access-7nw82\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774333 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-run\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774372 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-cgroup\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774410 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-ipsec-secrets\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774450 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-etc-cni-netd\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.774686 kubelet[2778]: I0913 00:07:31.774502 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ebea98d7-8a09-4593-bde4-f82132aa4e21-clustermesh-secrets\") pod \"ebea98d7-8a09-4593-bde4-f82132aa4e21\" (UID: \"ebea98d7-8a09-4593-bde4-f82132aa4e21\") " Sep 13 00:07:31.775657 kubelet[2778]: I0913 00:07:31.775124 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:31.780571 kubelet[2778]: I0913 00:07:31.780474 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:07:31.780746 kubelet[2778]: I0913 00:07:31.780595 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:31.782352 kubelet[2778]: I0913 00:07:31.782297 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-hostproc" (OuterVolumeSpecName: "hostproc") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:31.782675 kubelet[2778]: I0913 00:07:31.782620 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:31.783469 kubelet[2778]: I0913 00:07:31.783392 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:31.783926 kubelet[2778]: I0913 00:07:31.783875 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-cni-path" (OuterVolumeSpecName: "cni-path") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:31.784228 kubelet[2778]: I0913 00:07:31.784175 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:31.784518 kubelet[2778]: I0913 00:07:31.784463 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:31.784794 kubelet[2778]: I0913 00:07:31.784712 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:31.789520 systemd[1]: var-lib-kubelet-pods-ebea98d7\x2d8a09\x2d4593\x2dbde4\x2df82132aa4e21-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:07:31.800273 kubelet[2778]: I0913 00:07:31.792567 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:07:31.800273 kubelet[2778]: I0913 00:07:31.793664 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebea98d7-8a09-4593-bde4-f82132aa4e21-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:07:31.804475 systemd[1]: var-lib-kubelet-pods-ebea98d7\x2d8a09\x2d4593\x2dbde4\x2df82132aa4e21-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:07:31.809960 kubelet[2778]: I0913 00:07:31.809893 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebea98d7-8a09-4593-bde4-f82132aa4e21-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:07:31.810901 kubelet[2778]: I0913 00:07:31.810843 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:07:31.811135 kubelet[2778]: I0913 00:07:31.810775 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebea98d7-8a09-4593-bde4-f82132aa4e21-kube-api-access-7nw82" (OuterVolumeSpecName: "kube-api-access-7nw82") pod "ebea98d7-8a09-4593-bde4-f82132aa4e21" (UID: "ebea98d7-8a09-4593-bde4-f82132aa4e21"). InnerVolumeSpecName "kube-api-access-7nw82". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:07:31.875389 kubelet[2778]: I0913 00:07:31.875323 2778 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-cni-path\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.875389 kubelet[2778]: I0913 00:07:31.875386 2778 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-host-proc-sys-net\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.875630 kubelet[2778]: I0913 00:07:31.875415 2778 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-host-proc-sys-kernel\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.875630 kubelet[2778]: I0913 00:07:31.875438 2778 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-hostproc\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.875630 kubelet[2778]: I0913 00:07:31.875464 2778 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-xtables-lock\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.875630 kubelet[2778]: I0913 00:07:31.875486 2778 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7nw82\" (UniqueName: \"kubernetes.io/projected/ebea98d7-8a09-4593-bde4-f82132aa4e21-kube-api-access-7nw82\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.875630 kubelet[2778]: I0913 00:07:31.875509 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-run\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.875630 kubelet[2778]: I0913 00:07:31.875532 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-cgroup\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.875630 kubelet[2778]: I0913 00:07:31.875553 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-ipsec-secrets\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.875630 kubelet[2778]: I0913 00:07:31.875575 2778 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-etc-cni-netd\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.875630 kubelet[2778]: I0913 00:07:31.875600 2778 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ebea98d7-8a09-4593-bde4-f82132aa4e21-clustermesh-secrets\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.875630 kubelet[2778]: I0913 00:07:31.875623 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebea98d7-8a09-4593-bde4-f82132aa4e21-cilium-config-path\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.876295 kubelet[2778]: I0913 00:07:31.875645 2778 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-bpf-maps\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.876295 kubelet[2778]: I0913 00:07:31.875668 2778 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ebea98d7-8a09-4593-bde4-f82132aa4e21-hubble-tls\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:31.876295 kubelet[2778]: I0913 00:07:31.875690 2778 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebea98d7-8a09-4593-bde4-f82132aa4e21-lib-modules\") on node \"ip-172-31-31-19\" DevicePath \"\"" Sep 13 00:07:32.371228 kubelet[2778]: I0913 00:07:32.371139 2778 setters.go:618] "Node became not ready" node="ip-172-31-31-19" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:07:32Z","lastTransitionTime":"2025-09-13T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:07:32.389607 systemd[1]: var-lib-kubelet-pods-ebea98d7\x2d8a09\x2d4593\x2dbde4\x2df82132aa4e21-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7nw82.mount: Deactivated successfully. Sep 13 00:07:32.389793 systemd[1]: var-lib-kubelet-pods-ebea98d7\x2d8a09\x2d4593\x2dbde4\x2df82132aa4e21-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 00:07:32.546962 kubelet[2778]: I0913 00:07:32.546908 2778 scope.go:117] "RemoveContainer" containerID="298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4" Sep 13 00:07:32.553023 env[1810]: time="2025-09-13T00:07:32.552949993Z" level=info msg="RemoveContainer for \"298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4\"" Sep 13 00:07:32.558256 systemd[1]: Removed slice kubepods-burstable-podebea98d7_8a09_4593_bde4_f82132aa4e21.slice. Sep 13 00:07:32.566224 env[1810]: time="2025-09-13T00:07:32.565621341Z" level=info msg="RemoveContainer for \"298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4\" returns successfully" Sep 13 00:07:32.643121 systemd[1]: Created slice kubepods-burstable-pod4616f00c_3f95_4816_a4a1_a55a39295e23.slice. 
Sep 13 00:07:32.681430 kubelet[2778]: I0913 00:07:32.681382 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4616f00c-3f95-4816-a4a1-a55a39295e23-hostproc\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.681681 kubelet[2778]: I0913 00:07:32.681649 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkgmm\" (UniqueName: \"kubernetes.io/projected/4616f00c-3f95-4816-a4a1-a55a39295e23-kube-api-access-bkgmm\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.681952 kubelet[2778]: I0913 00:07:32.681920 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4616f00c-3f95-4816-a4a1-a55a39295e23-cni-path\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.682131 kubelet[2778]: I0913 00:07:32.682105 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4616f00c-3f95-4816-a4a1-a55a39295e23-lib-modules\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.682304 kubelet[2778]: I0913 00:07:32.682263 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4616f00c-3f95-4816-a4a1-a55a39295e23-clustermesh-secrets\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.682463 kubelet[2778]: I0913 00:07:32.682434 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4616f00c-3f95-4816-a4a1-a55a39295e23-cilium-run\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.682632 kubelet[2778]: I0913 00:07:32.682599 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4616f00c-3f95-4816-a4a1-a55a39295e23-cilium-ipsec-secrets\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.682811 kubelet[2778]: I0913 00:07:32.682784 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4616f00c-3f95-4816-a4a1-a55a39295e23-bpf-maps\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.683021 kubelet[2778]: I0913 00:07:32.682993 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4616f00c-3f95-4816-a4a1-a55a39295e23-cilium-cgroup\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.683175 kubelet[2778]: I0913 00:07:32.683149 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/4616f00c-3f95-4816-a4a1-a55a39295e23-host-proc-sys-net\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.683331 kubelet[2778]: I0913 00:07:32.683305 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4616f00c-3f95-4816-a4a1-a55a39295e23-etc-cni-netd\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.683487 kubelet[2778]: I0913 00:07:32.683459 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4616f00c-3f95-4816-a4a1-a55a39295e23-xtables-lock\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.683734 kubelet[2778]: I0913 00:07:32.683706 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4616f00c-3f95-4816-a4a1-a55a39295e23-host-proc-sys-kernel\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.683927 kubelet[2778]: I0913 00:07:32.683894 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4616f00c-3f95-4816-a4a1-a55a39295e23-cilium-config-path\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.684131 kubelet[2778]: I0913 00:07:32.684100 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4616f00c-3f95-4816-a4a1-a55a39295e23-hubble-tls\") pod \"cilium-24d6k\" (UID: \"4616f00c-3f95-4816-a4a1-a55a39295e23\") " pod="kube-system/cilium-24d6k" Sep 13 00:07:32.875727 kubelet[2778]: I0913 00:07:32.875665 2778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebea98d7-8a09-4593-bde4-f82132aa4e21" path="/var/lib/kubelet/pods/ebea98d7-8a09-4593-bde4-f82132aa4e21/volumes" Sep 13 00:07:32.950188 env[1810]: time="2025-09-13T00:07:32.950017374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24d6k,Uid:4616f00c-3f95-4816-a4a1-a55a39295e23,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:32.979433 env[1810]: time="2025-09-13T00:07:32.979041227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:32.979433 env[1810]: time="2025-09-13T00:07:32.979129415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:32.979433 env[1810]: time="2025-09-13T00:07:32.979155911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:32.981047 env[1810]: time="2025-09-13T00:07:32.979680019Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ed2778105d31064e0da265c56aa728ca4fb1758a03cd8900de7a02acd78a039 pid=4941 runtime=io.containerd.runc.v2 Sep 13 00:07:33.017083 systemd[1]: Started cri-containerd-5ed2778105d31064e0da265c56aa728ca4fb1758a03cd8900de7a02acd78a039.scope. Sep 13 00:07:33.087743 env[1810]: time="2025-09-13T00:07:33.087677913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24d6k,Uid:4616f00c-3f95-4816-a4a1-a55a39295e23,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ed2778105d31064e0da265c56aa728ca4fb1758a03cd8900de7a02acd78a039\"" Sep 13 00:07:33.102457 env[1810]: time="2025-09-13T00:07:33.102393259Z" level=info msg="CreateContainer within sandbox \"5ed2778105d31064e0da265c56aa728ca4fb1758a03cd8900de7a02acd78a039\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:07:33.125731 env[1810]: time="2025-09-13T00:07:33.125667389Z" level=info msg="CreateContainer within sandbox \"5ed2778105d31064e0da265c56aa728ca4fb1758a03cd8900de7a02acd78a039\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ce4340b151dd1f690959b2e102c8bfd0b6652de42ddcabc9f2260f925fd515b6\"" Sep 13 00:07:33.128526 env[1810]: time="2025-09-13T00:07:33.128429857Z" level=info msg="StartContainer for \"ce4340b151dd1f690959b2e102c8bfd0b6652de42ddcabc9f2260f925fd515b6\"" Sep 13 00:07:33.161371 systemd[1]: Started cri-containerd-ce4340b151dd1f690959b2e102c8bfd0b6652de42ddcabc9f2260f925fd515b6.scope. Sep 13 00:07:33.233218 env[1810]: time="2025-09-13T00:07:33.233068746Z" level=info msg="StartContainer for \"ce4340b151dd1f690959b2e102c8bfd0b6652de42ddcabc9f2260f925fd515b6\" returns successfully" Sep 13 00:07:33.260610 systemd[1]: cri-containerd-ce4340b151dd1f690959b2e102c8bfd0b6652de42ddcabc9f2260f925fd515b6.scope: Deactivated successfully. 
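[Annotation] Each short-lived init container above surfaces as a transient cri-containerd-&lt;container-id&gt;.scope unit that systemd starts and then reports deactivated once the process exits. A small sketch for querying such a unit's state after the fact, assuming systemctl is on PATH; the unit name is built from the mount-cgroup container ID seen in the log:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Scope name is "cri-containerd-" + container ID + ".scope".
	unit := "cri-containerd-ce4340b151dd1f690959b2e102c8bfd0b6652de42ddcabc9f2260f925fd515b6.scope"

	out, err := exec.Command("systemctl", "show", "-p", "ActiveState,SubState", unit).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // e.g. ActiveState=inactive / SubState=dead
}
```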
Sep 13 00:07:33.311797 env[1810]: time="2025-09-13T00:07:33.311720863Z" level=info msg="shim disconnected" id=ce4340b151dd1f690959b2e102c8bfd0b6652de42ddcabc9f2260f925fd515b6 Sep 13 00:07:33.312133 env[1810]: time="2025-09-13T00:07:33.311894994Z" level=warning msg="cleaning up after shim disconnected" id=ce4340b151dd1f690959b2e102c8bfd0b6652de42ddcabc9f2260f925fd515b6 namespace=k8s.io Sep 13 00:07:33.312133 env[1810]: time="2025-09-13T00:07:33.311925174Z" level=info msg="cleaning up dead shim" Sep 13 00:07:33.342181 env[1810]: time="2025-09-13T00:07:33.341118460Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5024 runtime=io.containerd.runc.v2\n" Sep 13 00:07:33.575005 env[1810]: time="2025-09-13T00:07:33.574803655Z" level=info msg="CreateContainer within sandbox \"5ed2778105d31064e0da265c56aa728ca4fb1758a03cd8900de7a02acd78a039\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:07:33.621584 env[1810]: time="2025-09-13T00:07:33.621519420Z" level=info msg="CreateContainer within sandbox \"5ed2778105d31064e0da265c56aa728ca4fb1758a03cd8900de7a02acd78a039\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"db57338cf0a4e4615926872cbbae74a578b85234107395e6653a964799fdf677\"" Sep 13 00:07:33.623025 env[1810]: time="2025-09-13T00:07:33.622959712Z" level=info msg="StartContainer for \"db57338cf0a4e4615926872cbbae74a578b85234107395e6653a964799fdf677\"" Sep 13 00:07:33.687221 systemd[1]: Started cri-containerd-db57338cf0a4e4615926872cbbae74a578b85234107395e6653a964799fdf677.scope. Sep 13 00:07:33.808250 env[1810]: time="2025-09-13T00:07:33.808174439Z" level=info msg="StartContainer for \"db57338cf0a4e4615926872cbbae74a578b85234107395e6653a964799fdf677\" returns successfully" Sep 13 00:07:33.826987 systemd[1]: cri-containerd-db57338cf0a4e4615926872cbbae74a578b85234107395e6653a964799fdf677.scope: Deactivated successfully. 
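[Annotation] The "cleanup warnings" records repeat after every init container exits and carry the PID of the shim's cleanup helper. Scanning a saved copy of this journal for those PIDs needs only the Go standard library; the file path below is illustrative:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
	"strings"
)

func main() {
	// containerd logs reaped shims as `msg="cleanup warnings ..." ... pid=NNNN`.
	pidRe := regexp.MustCompile(`pid=(\d+)`)

	f, err := os.Open("journal.log") // illustrative path to a saved journal
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "cleanup warnings") {
			continue
		}
		if m := pidRe.FindStringSubmatch(line); m != nil {
			fmt.Println("shim cleanup pid:", m[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```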
Sep 13 00:07:33.875716 env[1810]: time="2025-09-13T00:07:33.875641240Z" level=info msg="shim disconnected" id=db57338cf0a4e4615926872cbbae74a578b85234107395e6653a964799fdf677 Sep 13 00:07:33.875716 env[1810]: time="2025-09-13T00:07:33.875712471Z" level=warning msg="cleaning up after shim disconnected" id=db57338cf0a4e4615926872cbbae74a578b85234107395e6653a964799fdf677 namespace=k8s.io Sep 13 00:07:33.876189 env[1810]: time="2025-09-13T00:07:33.875734767Z" level=info msg="cleaning up dead shim" Sep 13 00:07:33.897266 env[1810]: time="2025-09-13T00:07:33.897190199Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5086 runtime=io.containerd.runc.v2\n" Sep 13 00:07:33.999864 kubelet[2778]: W0913 00:07:33.997369 2778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebea98d7_8a09_4593_bde4_f82132aa4e21.slice/cri-containerd-298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4.scope WatchSource:0}: container "298b5a67aa1691568b0a46f2b971405ea37123b816135b08f18bcb177ec80df4" in namespace "k8s.io": not found Sep 13 00:07:34.066450 kubelet[2778]: E0913 00:07:34.066378 2778 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:07:34.389981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db57338cf0a4e4615926872cbbae74a578b85234107395e6653a964799fdf677-rootfs.mount: Deactivated successfully. Sep 13 00:07:34.568012 env[1810]: time="2025-09-13T00:07:34.567938158Z" level=info msg="CreateContainer within sandbox \"5ed2778105d31064e0da265c56aa728ca4fb1758a03cd8900de7a02acd78a039\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:07:34.612378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642184587.mount: Deactivated successfully. Sep 13 00:07:34.633853 env[1810]: time="2025-09-13T00:07:34.633734308Z" level=info msg="CreateContainer within sandbox \"5ed2778105d31064e0da265c56aa728ca4fb1758a03cd8900de7a02acd78a039\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7d51827f8cfac6a1798005e3c584c257513569535a30d5d9d8dff845a02e7535\"" Sep 13 00:07:34.637136 env[1810]: time="2025-09-13T00:07:34.635783429Z" level=info msg="StartContainer for \"7d51827f8cfac6a1798005e3c584c257513569535a30d5d9d8dff845a02e7535\"" Sep 13 00:07:34.671785 systemd[1]: Started cri-containerd-7d51827f8cfac6a1798005e3c584c257513569535a30d5d9d8dff845a02e7535.scope. Sep 13 00:07:34.752628 systemd[1]: cri-containerd-7d51827f8cfac6a1798005e3c584c257513569535a30d5d9d8dff845a02e7535.scope: Deactivated successfully. 
Sep 13 00:07:34.754974 env[1810]: time="2025-09-13T00:07:34.754802475Z" level=info msg="StartContainer for \"7d51827f8cfac6a1798005e3c584c257513569535a30d5d9d8dff845a02e7535\" returns successfully" Sep 13 00:07:34.813556 env[1810]: time="2025-09-13T00:07:34.813492001Z" level=info msg="shim disconnected" id=7d51827f8cfac6a1798005e3c584c257513569535a30d5d9d8dff845a02e7535 Sep 13 00:07:34.814084 env[1810]: time="2025-09-13T00:07:34.814025422Z" level=warning msg="cleaning up after shim disconnected" id=7d51827f8cfac6a1798005e3c584c257513569535a30d5d9d8dff845a02e7535 namespace=k8s.io Sep 13 00:07:34.814243 env[1810]: time="2025-09-13T00:07:34.814213965Z" level=info msg="cleaning up dead shim" Sep 13 00:07:34.828034 env[1810]: time="2025-09-13T00:07:34.827974564Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5146 runtime=io.containerd.runc.v2\n" Sep 13 00:07:35.390047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d51827f8cfac6a1798005e3c584c257513569535a30d5d9d8dff845a02e7535-rootfs.mount: Deactivated successfully. Sep 13 00:07:35.588594 env[1810]: time="2025-09-13T00:07:35.588514032Z" level=info msg="CreateContainer within sandbox \"5ed2778105d31064e0da265c56aa728ca4fb1758a03cd8900de7a02acd78a039\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:07:35.617760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360646631.mount: Deactivated successfully. Sep 13 00:07:35.632100 env[1810]: time="2025-09-13T00:07:35.632035664Z" level=info msg="CreateContainer within sandbox \"5ed2778105d31064e0da265c56aa728ca4fb1758a03cd8900de7a02acd78a039\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e92deb0b96f6462ab34db03d80ec4a76bed9cd55170749d077a786b2358c5b47\"" Sep 13 00:07:35.634289 env[1810]: time="2025-09-13T00:07:35.634205768Z" level=info msg="StartContainer for \"e92deb0b96f6462ab34db03d80ec4a76bed9cd55170749d077a786b2358c5b47\"" Sep 13 00:07:35.678602 systemd[1]: Started cri-containerd-e92deb0b96f6462ab34db03d80ec4a76bed9cd55170749d077a786b2358c5b47.scope. Sep 13 00:07:35.747390 systemd[1]: cri-containerd-e92deb0b96f6462ab34db03d80ec4a76bed9cd55170749d077a786b2358c5b47.scope: Deactivated successfully. Sep 13 00:07:35.750030 env[1810]: time="2025-09-13T00:07:35.749970543Z" level=info msg="StartContainer for \"e92deb0b96f6462ab34db03d80ec4a76bed9cd55170749d077a786b2358c5b47\" returns successfully" Sep 13 00:07:35.796983 env[1810]: time="2025-09-13T00:07:35.796906313Z" level=info msg="shim disconnected" id=e92deb0b96f6462ab34db03d80ec4a76bed9cd55170749d077a786b2358c5b47 Sep 13 00:07:35.796983 env[1810]: time="2025-09-13T00:07:35.796981529Z" level=warning msg="cleaning up after shim disconnected" id=e92deb0b96f6462ab34db03d80ec4a76bed9cd55170749d077a786b2358c5b47 namespace=k8s.io Sep 13 00:07:35.797373 env[1810]: time="2025-09-13T00:07:35.797004293Z" level=info msg="cleaning up dead shim" Sep 13 00:07:35.812154 env[1810]: time="2025-09-13T00:07:35.812084864Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5201 runtime=io.containerd.runc.v2\n" Sep 13 00:07:36.390149 systemd[1]: run-containerd-runc-k8s.io-e92deb0b96f6462ab34db03d80ec4a76bed9cd55170749d077a786b2358c5b47-runc.e4DcpV.mount: Deactivated successfully. 
Sep 13 00:07:36.390949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e92deb0b96f6462ab34db03d80ec4a76bed9cd55170749d077a786b2358c5b47-rootfs.mount: Deactivated successfully. Sep 13 00:07:36.583628 env[1810]: time="2025-09-13T00:07:36.583565445Z" level=info msg="CreateContainer within sandbox \"5ed2778105d31064e0da265c56aa728ca4fb1758a03cd8900de7a02acd78a039\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:07:36.624891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73931958.mount: Deactivated successfully. Sep 13 00:07:36.635344 env[1810]: time="2025-09-13T00:07:36.635234806Z" level=info msg="CreateContainer within sandbox \"5ed2778105d31064e0da265c56aa728ca4fb1758a03cd8900de7a02acd78a039\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ef6be4f7c8dcfbb57be26f0db6bd3fb337075e1a06d4cae5041ce95aa5b57668\"" Sep 13 00:07:36.636644 env[1810]: time="2025-09-13T00:07:36.636575163Z" level=info msg="StartContainer for \"ef6be4f7c8dcfbb57be26f0db6bd3fb337075e1a06d4cae5041ce95aa5b57668\"" Sep 13 00:07:36.675064 systemd[1]: Started cri-containerd-ef6be4f7c8dcfbb57be26f0db6bd3fb337075e1a06d4cae5041ce95aa5b57668.scope. Sep 13 00:07:36.751159 env[1810]: time="2025-09-13T00:07:36.749088737Z" level=info msg="StartContainer for \"ef6be4f7c8dcfbb57be26f0db6bd3fb337075e1a06d4cae5041ce95aa5b57668\" returns successfully" Sep 13 00:07:37.116540 kubelet[2778]: W0913 00:07:37.116403 2778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4616f00c_3f95_4816_a4a1_a55a39295e23.slice/cri-containerd-ce4340b151dd1f690959b2e102c8bfd0b6652de42ddcabc9f2260f925fd515b6.scope WatchSource:0}: task ce4340b151dd1f690959b2e102c8bfd0b6652de42ddcabc9f2260f925fd515b6 not found Sep 13 00:07:37.620397 kubelet[2778]: I0913 00:07:37.620307 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-24d6k" podStartSLOduration=5.620284106 podStartE2EDuration="5.620284106s" podCreationTimestamp="2025-09-13 00:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:37.616141139 +0000 UTC m=+159.140500099" watchObservedRunningTime="2025-09-13 00:07:37.620284106 +0000 UTC m=+159.144643066" Sep 13 00:07:37.683732 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Sep 13 00:07:39.794507 systemd[1]: run-containerd-runc-k8s.io-ef6be4f7c8dcfbb57be26f0db6bd3fb337075e1a06d4cae5041ce95aa5b57668-runc.T3Tt4U.mount: Deactivated successfully. Sep 13 00:07:40.231359 kubelet[2778]: W0913 00:07:40.231286 2778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4616f00c_3f95_4816_a4a1_a55a39295e23.slice/cri-containerd-db57338cf0a4e4615926872cbbae74a578b85234107395e6653a964799fdf677.scope WatchSource:0}: task db57338cf0a4e4615926872cbbae74a578b85234107395e6653a964799fdf677 not found Sep 13 00:07:42.086692 systemd-networkd[1527]: lxc_health: Link UP Sep 13 00:07:42.096713 (udev-worker)[5764]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:07:42.136858 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:07:42.136248 systemd-networkd[1527]: lxc_health: Gained carrier Sep 13 00:07:42.154339 systemd[1]: run-containerd-runc-k8s.io-ef6be4f7c8dcfbb57be26f0db6bd3fb337075e1a06d4cae5041ce95aa5b57668-runc.mkTXcJ.mount: Deactivated successfully. Sep 13 00:07:43.347054 kubelet[2778]: W0913 00:07:43.346977 2778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4616f00c_3f95_4816_a4a1_a55a39295e23.slice/cri-containerd-7d51827f8cfac6a1798005e3c584c257513569535a30d5d9d8dff845a02e7535.scope WatchSource:0}: task 7d51827f8cfac6a1798005e3c584c257513569535a30d5d9d8dff845a02e7535 not found Sep 13 00:07:44.031159 systemd-networkd[1527]: lxc_health: Gained IPv6LL Sep 13 00:07:44.595939 systemd[1]: run-containerd-runc-k8s.io-ef6be4f7c8dcfbb57be26f0db6bd3fb337075e1a06d4cae5041ce95aa5b57668-runc.F8h7QD.mount: Deactivated successfully. Sep 13 00:07:46.458092 kubelet[2778]: W0913 00:07:46.458038 2778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4616f00c_3f95_4816_a4a1_a55a39295e23.slice/cri-containerd-e92deb0b96f6462ab34db03d80ec4a76bed9cd55170749d077a786b2358c5b47.scope WatchSource:0}: task e92deb0b96f6462ab34db03d80ec4a76bed9cd55170749d077a786b2358c5b47 not found Sep 13 00:07:46.989034 systemd[1]: run-containerd-runc-k8s.io-ef6be4f7c8dcfbb57be26f0db6bd3fb337075e1a06d4cae5041ce95aa5b57668-runc.bMlzYh.mount: Deactivated successfully. Sep 13 00:07:47.154569 sshd[4853]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:47.161742 systemd[1]: sshd@25-172.31.31.19:22-139.178.89.65:33298.service: Deactivated successfully. Sep 13 00:07:47.163194 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:07:47.165568 systemd-logind[1800]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:07:47.167927 systemd-logind[1800]: Removed session 26. 
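[Annotation] The sshd records above identify the accepted key only by its RSA SHA256 fingerprint. The same "SHA256:..." form can be computed in Go with golang.org/x/crypto/ssh; this self-contained sketch fingerprints a freshly generated throwaway key, and for a real authorized_keys entry one would parse it with ssh.ParseAuthorizedKey instead:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate a throwaway key so the example runs without external input.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	// Prints the same "SHA256:..." form sshd logs in the
	// "Accepted publickey" lines above.
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}
```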
Sep 13 00:07:58.725059 env[1810]: time="2025-09-13T00:07:58.724983132Z" level=info msg="StopPodSandbox for \"1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174\"" Sep 13 00:07:58.726412 env[1810]: time="2025-09-13T00:07:58.725990135Z" level=info msg="TearDown network for sandbox \"1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174\" successfully" Sep 13 00:07:58.726412 env[1810]: time="2025-09-13T00:07:58.726089687Z" level=info msg="StopPodSandbox for \"1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174\" returns successfully" Sep 13 00:07:58.727031 env[1810]: time="2025-09-13T00:07:58.726968302Z" level=info msg="RemovePodSandbox for \"1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174\"" Sep 13 00:07:58.727289 env[1810]: time="2025-09-13T00:07:58.727208458Z" level=info msg="Forcibly stopping sandbox \"1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174\"" Sep 13 00:07:58.727557 env[1810]: time="2025-09-13T00:07:58.727522414Z" level=info msg="TearDown network for sandbox \"1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174\" successfully" Sep 13 00:07:58.735009 env[1810]: time="2025-09-13T00:07:58.734908311Z" level=info msg="RemovePodSandbox \"1b36943b1f2f2336a5c3f2df1e1ead7efb8d44700cd906dc8ebccc11de0db174\" returns successfully" Sep 13 00:07:58.735984 env[1810]: time="2025-09-13T00:07:58.735928802Z" level=info msg="StopPodSandbox for \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\"" Sep 13 00:07:58.736146 env[1810]: time="2025-09-13T00:07:58.736081010Z" level=info msg="TearDown network for sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" successfully" Sep 13 00:07:58.736239 env[1810]: time="2025-09-13T00:07:58.736139738Z" level=info msg="StopPodSandbox for \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" returns successfully" Sep 13 00:07:58.736885 env[1810]: time="2025-09-13T00:07:58.736802005Z" level=info msg="RemovePodSandbox for \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\"" Sep 13 00:07:58.737125 env[1810]: time="2025-09-13T00:07:58.737064313Z" level=info msg="Forcibly stopping sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\"" Sep 13 00:07:58.737348 env[1810]: time="2025-09-13T00:07:58.737312629Z" level=info msg="TearDown network for sandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" successfully" Sep 13 00:07:58.743593 env[1810]: time="2025-09-13T00:07:58.743537635Z" level=info msg="RemovePodSandbox \"c9f7825e26d44c60046bbd6ae3aeb5f482ff71716467108387193bf244ab8923\" returns successfully" Sep 13 00:07:58.744555 env[1810]: time="2025-09-13T00:07:58.744415746Z" level=info msg="StopPodSandbox for \"934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92\"" Sep 13 00:07:58.744907 env[1810]: time="2025-09-13T00:07:58.744811434Z" level=info msg="TearDown network for sandbox \"934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92\" successfully" Sep 13 00:07:58.745040 env[1810]: time="2025-09-13T00:07:58.745007502Z" level=info msg="StopPodSandbox for \"934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92\" returns successfully" Sep 13 00:07:58.745648 env[1810]: time="2025-09-13T00:07:58.745598201Z" level=info msg="RemovePodSandbox for \"934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92\"" Sep 13 00:07:58.745793 env[1810]: time="2025-09-13T00:07:58.745655201Z" level=info msg="Forcibly stopping sandbox 
\"934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92\"" Sep 13 00:07:58.745919 env[1810]: time="2025-09-13T00:07:58.745783721Z" level=info msg="TearDown network for sandbox \"934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92\" successfully" Sep 13 00:07:58.752080 env[1810]: time="2025-09-13T00:07:58.752005547Z" level=info msg="RemovePodSandbox \"934826a8d900b9d9c3b048e5d8eeecc8f3386f1bb136a51f9e83223fea941f92\" returns successfully" Sep 13 00:08:23.044006 kubelet[2778]: E0913 00:08:23.043943 2778 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-19?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 13 00:08:23.968590 systemd[1]: cri-containerd-5454f0b2fd98410bd4bbe99d5e3eb21ea7d2916171358b3596651b394b4fd14d.scope: Deactivated successfully. Sep 13 00:08:23.969180 systemd[1]: cri-containerd-5454f0b2fd98410bd4bbe99d5e3eb21ea7d2916171358b3596651b394b4fd14d.scope: Consumed 5.793s CPU time. Sep 13 00:08:24.010115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5454f0b2fd98410bd4bbe99d5e3eb21ea7d2916171358b3596651b394b4fd14d-rootfs.mount: Deactivated successfully. Sep 13 00:08:24.025267 env[1810]: time="2025-09-13T00:08:24.025149111Z" level=info msg="shim disconnected" id=5454f0b2fd98410bd4bbe99d5e3eb21ea7d2916171358b3596651b394b4fd14d Sep 13 00:08:24.026005 env[1810]: time="2025-09-13T00:08:24.025285755Z" level=warning msg="cleaning up after shim disconnected" id=5454f0b2fd98410bd4bbe99d5e3eb21ea7d2916171358b3596651b394b4fd14d namespace=k8s.io Sep 13 00:08:24.026005 env[1810]: time="2025-09-13T00:08:24.025314399Z" level=info msg="cleaning up dead shim" Sep 13 00:08:24.040653 env[1810]: time="2025-09-13T00:08:24.040568213Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5871 runtime=io.containerd.runc.v2\n" Sep 13 00:08:24.703755 kubelet[2778]: I0913 00:08:24.703419 2778 scope.go:117] "RemoveContainer" containerID="5454f0b2fd98410bd4bbe99d5e3eb21ea7d2916171358b3596651b394b4fd14d" Sep 13 00:08:24.707276 env[1810]: time="2025-09-13T00:08:24.707222652Z" level=info msg="CreateContainer within sandbox \"cbf3462b3d17cd303fb151fdefce0ea6fce9f89c93c5e7cee096d77938c78fe1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 13 00:08:24.733409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3854467299.mount: Deactivated successfully. Sep 13 00:08:24.747359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1268030090.mount: Deactivated successfully. Sep 13 00:08:24.750575 env[1810]: time="2025-09-13T00:08:24.750516119Z" level=info msg="CreateContainer within sandbox \"cbf3462b3d17cd303fb151fdefce0ea6fce9f89c93c5e7cee096d77938c78fe1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2e68ab5fae77e6e0647d51b7035cf3c21bbeba696aefc45ac61f4b63e19e3a89\"" Sep 13 00:08:24.751519 env[1810]: time="2025-09-13T00:08:24.751466773Z" level=info msg="StartContainer for \"2e68ab5fae77e6e0647d51b7035cf3c21bbeba696aefc45ac61f4b63e19e3a89\"" Sep 13 00:08:24.782122 systemd[1]: Started cri-containerd-2e68ab5fae77e6e0647d51b7035cf3c21bbeba696aefc45ac61f4b63e19e3a89.scope. 
Sep 13 00:08:24.875092 env[1810]: time="2025-09-13T00:08:24.874977667Z" level=info msg="StartContainer for \"2e68ab5fae77e6e0647d51b7035cf3c21bbeba696aefc45ac61f4b63e19e3a89\" returns successfully" Sep 13 00:08:28.930233 systemd[1]: cri-containerd-a67b8a528b8e296684f394b8dab90b67dc29eaaf4457d910133fdf74a98c3327.scope: Deactivated successfully. Sep 13 00:08:28.930793 systemd[1]: cri-containerd-a67b8a528b8e296684f394b8dab90b67dc29eaaf4457d910133fdf74a98c3327.scope: Consumed 7.416s CPU time. Sep 13 00:08:28.968993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a67b8a528b8e296684f394b8dab90b67dc29eaaf4457d910133fdf74a98c3327-rootfs.mount: Deactivated successfully. Sep 13 00:08:28.989131 env[1810]: time="2025-09-13T00:08:28.989055056Z" level=info msg="shim disconnected" id=a67b8a528b8e296684f394b8dab90b67dc29eaaf4457d910133fdf74a98c3327 Sep 13 00:08:28.989131 env[1810]: time="2025-09-13T00:08:28.989128808Z" level=warning msg="cleaning up after shim disconnected" id=a67b8a528b8e296684f394b8dab90b67dc29eaaf4457d910133fdf74a98c3327 namespace=k8s.io Sep 13 00:08:28.989908 env[1810]: time="2025-09-13T00:08:28.989151392Z" level=info msg="cleaning up dead shim" Sep 13 00:08:29.010536 env[1810]: time="2025-09-13T00:08:29.010361359Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5934 runtime=io.containerd.runc.v2\n" Sep 13 00:08:29.718506 kubelet[2778]: I0913 00:08:29.718469 2778 scope.go:117] "RemoveContainer" containerID="a67b8a528b8e296684f394b8dab90b67dc29eaaf4457d910133fdf74a98c3327" Sep 13 00:08:29.722347 env[1810]: time="2025-09-13T00:08:29.722279335Z" level=info msg="CreateContainer within sandbox \"d90c4aa3fcbcd7935d4b3d57f9d9d2d4083d98ae5040abdcc78cd05b0fd8bcc2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 13 00:08:29.745226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3729385263.mount: Deactivated successfully. Sep 13 00:08:29.761783 env[1810]: time="2025-09-13T00:08:29.761716085Z" level=info msg="CreateContainer within sandbox \"d90c4aa3fcbcd7935d4b3d57f9d9d2d4083d98ae5040abdcc78cd05b0fd8bcc2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5acb3139573958099556c68195ec4e0f33ee13b5e616acce3bb782e250bc21ce\"" Sep 13 00:08:29.762830 env[1810]: time="2025-09-13T00:08:29.762770864Z" level=info msg="StartContainer for \"5acb3139573958099556c68195ec4e0f33ee13b5e616acce3bb782e250bc21ce\"" Sep 13 00:08:29.767758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94659611.mount: Deactivated successfully. Sep 13 00:08:29.805417 systemd[1]: Started cri-containerd-5acb3139573958099556c68195ec4e0f33ee13b5e616acce3bb782e250bc21ce.scope. 
Sep 13 00:08:29.898566 env[1810]: time="2025-09-13T00:08:29.898490465Z" level=info msg="StartContainer for \"5acb3139573958099556c68195ec4e0f33ee13b5e616acce3bb782e250bc21ce\" returns successfully" Sep 13 00:08:33.046040 kubelet[2778]: E0913 00:08:33.045943 2778 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-19?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 13 00:08:43.047060 kubelet[2778]: E0913 00:08:43.046233 2778 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-19?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"