Sep 13 00:04:51.073100 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 13 00:04:51.073136 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 12 23:05:37 -00 2025
Sep 13 00:04:51.073159 kernel: efi: EFI v2.70 by EDK II
Sep 13 00:04:51.073174 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x716fcf98
Sep 13 00:04:51.073187 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:04:51.073201 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 13 00:04:51.073216 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 13 00:04:51.073231 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 13 00:04:51.073244 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 13 00:04:51.075916 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 13 00:04:51.075949 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 13 00:04:51.075964 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 13 00:04:51.075978 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 13 00:04:51.075992 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 13 00:04:51.076009 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 13 00:04:51.076028 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 13 00:04:51.076042 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 13 00:04:51.076057 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 13 00:04:51.076071 kernel: printk: bootconsole [uart0] enabled
Sep 13 00:04:51.076086 kernel: NUMA: Failed to initialise from firmware
Sep 13 00:04:51.076100 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 13 00:04:51.076115 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Sep 13 00:04:51.076129 kernel: Zone ranges:
Sep 13 00:04:51.076144 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Sep 13 00:04:51.076158 kernel:   DMA32    empty
Sep 13 00:04:51.076172 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 13 00:04:51.076191 kernel: Movable zone start for each node
Sep 13 00:04:51.076205 kernel: Early memory node ranges
Sep 13 00:04:51.076220 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 13 00:04:51.076234 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 13 00:04:51.076248 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Sep 13 00:04:51.080328 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 13 00:04:51.080355 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 13 00:04:51.080370 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 13 00:04:51.080386 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 13 00:04:51.080401 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 13 00:04:51.080415 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 13 00:04:51.080430 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 13 00:04:51.080454 kernel: psci: probing for conduit method from ACPI.
Sep 13 00:04:51.080469 kernel: psci: PSCIv1.0 detected in firmware.
Sep 13 00:04:51.080490 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 00:04:51.080505 kernel: psci: Trusted OS migration not required
Sep 13 00:04:51.080520 kernel: psci: SMC Calling Convention v1.1
Sep 13 00:04:51.080540 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 13 00:04:51.080555 kernel: ACPI: SRAT not present
Sep 13 00:04:51.080572 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Sep 13 00:04:51.080587 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Sep 13 00:04:51.080603 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 13 00:04:51.080619 kernel: Detected PIPT I-cache on CPU0
Sep 13 00:04:51.080634 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 00:04:51.080649 kernel: CPU features: detected: Spectre-v2
Sep 13 00:04:51.080664 kernel: CPU features: detected: Spectre-v3a
Sep 13 00:04:51.080679 kernel: CPU features: detected: Spectre-BHB
Sep 13 00:04:51.080694 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 00:04:51.080713 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 00:04:51.080728 kernel: CPU features: detected: ARM erratum 1742098
Sep 13 00:04:51.080743 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 13 00:04:51.080758 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Sep 13 00:04:51.080773 kernel: Policy zone: Normal
Sep 13 00:04:51.080791 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 00:04:51.080808 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:04:51.080823 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:04:51.080839 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:04:51.080854 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:04:51.080873 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 13 00:04:51.080890 kernel: Memory: 3824460K/4030464K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 206004K reserved, 0K cma-reserved)
Sep 13 00:04:51.080906 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:04:51.080921 kernel: trace event string verifier disabled
Sep 13 00:04:51.080936 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:04:51.080952 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:04:51.080968 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:04:51.080983 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:04:51.080999 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:04:51.081014 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:04:51.081029 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:04:51.081044 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 00:04:51.081063 kernel: GICv3: 96 SPIs implemented
Sep 13 00:04:51.081078 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 00:04:51.081093 kernel: GICv3: Distributor has no Range Selector support
Sep 13 00:04:51.081108 kernel: Root IRQ handler: gic_handle_irq
Sep 13 00:04:51.081123 kernel: GICv3: 16 PPIs implemented
Sep 13 00:04:51.081138 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 13 00:04:51.081153 kernel: ACPI: SRAT not present
Sep 13 00:04:51.081167 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 13 00:04:51.081182 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Sep 13 00:04:51.081197 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Sep 13 00:04:51.081213 kernel: GICv3: using LPI property table @0x00000004000b0000
Sep 13 00:04:51.081231 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 13 00:04:51.081246 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Sep 13 00:04:51.081282 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 13 00:04:51.081300 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 13 00:04:51.081316 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 13 00:04:51.081332 kernel: Console: colour dummy device 80x25
Sep 13 00:04:51.081348 kernel: printk: console [tty1] enabled
Sep 13 00:04:51.081363 kernel: ACPI: Core revision 20210730
Sep 13 00:04:51.081379 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 13 00:04:51.081395 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:04:51.081416 kernel: LSM: Security Framework initializing
Sep 13 00:04:51.081432 kernel: SELinux:  Initializing.
Sep 13 00:04:51.081447 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:04:51.081463 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:04:51.081479 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:04:51.081494 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 13 00:04:51.081510 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 13 00:04:51.081525 kernel: Remapping and enabling EFI services.
Sep 13 00:04:51.081541 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:04:51.081556 kernel: Detected PIPT I-cache on CPU1
Sep 13 00:04:51.081576 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 13 00:04:51.081591 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Sep 13 00:04:51.081607 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 13 00:04:51.081622 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:04:51.081638 kernel: SMP: Total of 2 processors activated.
Sep 13 00:04:51.081653 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 00:04:51.081668 kernel: CPU features: detected: 32-bit EL1 Support
Sep 13 00:04:51.081684 kernel: CPU features: detected: CRC32 instructions
Sep 13 00:04:51.081699 kernel: CPU: All CPU(s) started at EL1
Sep 13 00:04:51.081718 kernel: alternatives: patching kernel code
Sep 13 00:04:51.081734 kernel: devtmpfs: initialized
Sep 13 00:04:51.081759 kernel: KASLR disabled due to lack of seed
Sep 13 00:04:51.081779 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:04:51.081796 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:04:51.081812 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:04:51.081827 kernel: SMBIOS 3.0.0 present.
Sep 13 00:04:51.081843 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 13 00:04:51.081859 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:04:51.081875 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 00:04:51.081892 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 00:04:51.081912 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 00:04:51.081928 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:04:51.081944 kernel: audit: type=2000 audit(0.292:1): state=initialized audit_enabled=0 res=1
Sep 13 00:04:51.081960 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:04:51.081976 kernel: cpuidle: using governor menu
Sep 13 00:04:51.081995 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 00:04:51.082012 kernel: ASID allocator initialised with 32768 entries
Sep 13 00:04:51.082028 kernel: ACPI: bus type PCI registered
Sep 13 00:04:51.082044 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:04:51.082059 kernel: Serial: AMBA PL011 UART driver
Sep 13 00:04:51.082076 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:04:51.082092 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 00:04:51.082108 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:04:51.082124 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 00:04:51.082143 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:04:51.082160 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 00:04:51.082176 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:04:51.082192 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:04:51.082208 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:04:51.082224 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:04:51.082240 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:04:51.082271 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:04:51.082293 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:04:51.082309 kernel: ACPI: Interpreter enabled
Sep 13 00:04:51.082331 kernel: ACPI: Using GIC for interrupt routing
Sep 13 00:04:51.082347 kernel: ACPI: MCFG table detected, 1 entries
Sep 13 00:04:51.082363 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 13 00:04:51.082668 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:04:51.082868 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 00:04:51.083061 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 00:04:51.083251 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 13 00:04:51.083474 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 13 00:04:51.083497 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 13 00:04:51.083514 kernel: acpiphp: Slot [1] registered
Sep 13 00:04:51.083530 kernel: acpiphp: Slot [2] registered
Sep 13 00:04:51.083546 kernel: acpiphp: Slot [3] registered
Sep 13 00:04:51.083562 kernel: acpiphp: Slot [4] registered
Sep 13 00:04:51.083578 kernel: acpiphp: Slot [5] registered
Sep 13 00:04:51.083594 kernel: acpiphp: Slot [6] registered
Sep 13 00:04:51.083610 kernel: acpiphp: Slot [7] registered
Sep 13 00:04:51.083631 kernel: acpiphp: Slot [8] registered
Sep 13 00:04:51.083665 kernel: acpiphp: Slot [9] registered
Sep 13 00:04:51.083682 kernel: acpiphp: Slot [10] registered
Sep 13 00:04:51.083698 kernel: acpiphp: Slot [11] registered
Sep 13 00:04:51.083714 kernel: acpiphp: Slot [12] registered
Sep 13 00:04:51.083730 kernel: acpiphp: Slot [13] registered
Sep 13 00:04:51.083746 kernel: acpiphp: Slot [14] registered
Sep 13 00:04:51.083762 kernel: acpiphp: Slot [15] registered
Sep 13 00:04:51.083778 kernel: acpiphp: Slot [16] registered
Sep 13 00:04:51.083798 kernel: acpiphp: Slot [17] registered
Sep 13 00:04:51.083815 kernel: acpiphp: Slot [18] registered
Sep 13 00:04:51.083831 kernel: acpiphp: Slot [19] registered
Sep 13 00:04:51.083846 kernel: acpiphp: Slot [20] registered
Sep 13 00:04:51.083862 kernel: acpiphp: Slot [21] registered
Sep 13 00:04:51.083878 kernel: acpiphp: Slot [22] registered
Sep 13 00:04:51.083894 kernel: acpiphp: Slot [23] registered
Sep 13 00:04:51.083909 kernel: acpiphp: Slot [24] registered
Sep 13 00:04:51.083925 kernel: acpiphp: Slot [25] registered
Sep 13 00:04:51.083941 kernel: acpiphp: Slot [26] registered
Sep 13 00:04:51.083961 kernel: acpiphp: Slot [27] registered
Sep 13 00:04:51.083977 kernel: acpiphp: Slot [28] registered
Sep 13 00:04:51.083992 kernel: acpiphp: Slot [29] registered
Sep 13 00:04:51.084008 kernel: acpiphp: Slot [30] registered
Sep 13 00:04:51.084024 kernel: acpiphp: Slot [31] registered
Sep 13 00:04:51.084040 kernel: PCI host bridge to bus 0000:00
Sep 13 00:04:51.084237 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 13 00:04:51.098192 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 13 00:04:51.100532 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 13 00:04:51.101528 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 13 00:04:51.101771 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 13 00:04:51.101991 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 13 00:04:51.102226 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 13 00:04:51.105345 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 13 00:04:51.105589 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 13 00:04:51.105817 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 13 00:04:51.106036 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 13 00:04:51.106234 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 13 00:04:51.106455 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 13 00:04:51.106652 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 13 00:04:51.106848 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 13 00:04:51.107048 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 13 00:04:51.107245 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 13 00:04:51.110590 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 13 00:04:51.110804 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 13 00:04:51.111011 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 13 00:04:51.111195 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 13 00:04:51.114033 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 13 00:04:51.114251 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 13 00:04:51.114319 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 13 00:04:51.114337 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 13 00:04:51.114355 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 13 00:04:51.114371 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 13 00:04:51.114388 kernel: iommu: Default domain type: Translated
Sep 13 00:04:51.114404 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 00:04:51.114421 kernel: vgaarb: loaded
Sep 13 00:04:51.114437 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:04:51.114459 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:04:51.114476 kernel: PTP clock support registered
Sep 13 00:04:51.114492 kernel: Registered efivars operations
Sep 13 00:04:51.114508 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 00:04:51.114524 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:04:51.114541 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:04:51.114557 kernel: pnp: PnP ACPI init
Sep 13 00:04:51.114776 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 13 00:04:51.114806 kernel: pnp: PnP ACPI: found 1 devices
Sep 13 00:04:51.114823 kernel: NET: Registered PF_INET protocol family
Sep 13 00:04:51.114840 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:04:51.114856 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:04:51.114873 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:04:51.114889 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:04:51.114906 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 00:04:51.114923 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:04:51.114939 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:04:51.114959 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:04:51.114976 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:04:51.114992 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:04:51.115008 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 13 00:04:51.115024 kernel: kvm [1]: HYP mode not available
Sep 13 00:04:51.115040 kernel: Initialise system trusted keyrings
Sep 13 00:04:51.115057 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:04:51.115073 kernel: Key type asymmetric registered
Sep 13 00:04:51.115088 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:04:51.115109 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:04:51.115125 kernel: io scheduler mq-deadline registered
Sep 13 00:04:51.115141 kernel: io scheduler kyber registered
Sep 13 00:04:51.115157 kernel: io scheduler bfq registered
Sep 13 00:04:51.116479 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 13 00:04:51.116517 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 13 00:04:51.116534 kernel: ACPI: button: Power Button [PWRB]
Sep 13 00:04:51.116550 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 13 00:04:51.116567 kernel: ACPI: button: Sleep Button [SLPB]
Sep 13 00:04:51.116590 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:04:51.116607 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 13 00:04:51.116809 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 13 00:04:51.116833 kernel: printk: console [ttyS0] disabled
Sep 13 00:04:51.116850 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 13 00:04:51.116867 kernel: printk: console [ttyS0] enabled
Sep 13 00:04:51.116883 kernel: printk: bootconsole [uart0] disabled
Sep 13 00:04:51.116899 kernel: thunder_xcv, ver 1.0
Sep 13 00:04:51.116915 kernel: thunder_bgx, ver 1.0
Sep 13 00:04:51.116936 kernel: nicpf, ver 1.0
Sep 13 00:04:51.116952 kernel: nicvf, ver 1.0
Sep 13 00:04:51.117152 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 00:04:51.121041 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:04:50 UTC (1757721890)
Sep 13 00:04:51.121087 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:04:51.121105 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:04:51.121122 kernel: Segment Routing with IPv6
Sep 13 00:04:51.121140 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:04:51.121166 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:04:51.121183 kernel: Key type dns_resolver registered
Sep 13 00:04:51.121200 kernel: registered taskstats version 1
Sep 13 00:04:51.121216 kernel: Loading compiled-in X.509 certificates
Sep 13 00:04:51.121233 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 47ac98e9306f36eebe4291d409359a5a5d0c2b9c'
Sep 13 00:04:51.121249 kernel: Key type .fscrypt registered
Sep 13 00:04:51.121336 kernel: Key type fscrypt-provisioning registered
Sep 13 00:04:51.121354 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:04:51.121371 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:04:51.121393 kernel: ima: No architecture policies found
Sep 13 00:04:51.121409 kernel: clk: Disabling unused clocks
Sep 13 00:04:51.121425 kernel: Freeing unused kernel memory: 36416K
Sep 13 00:04:51.121441 kernel: Run /init as init process
Sep 13 00:04:51.121457 kernel:   with arguments:
Sep 13 00:04:51.121473 kernel:     /init
Sep 13 00:04:51.121489 kernel:   with environment:
Sep 13 00:04:51.121505 kernel:     HOME=/
Sep 13 00:04:51.121521 kernel:     TERM=linux
Sep 13 00:04:51.121541 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:04:51.121563 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:04:51.121584 systemd[1]: Detected virtualization amazon.
Sep 13 00:04:51.121602 systemd[1]: Detected architecture arm64.
Sep 13 00:04:51.121620 systemd[1]: Running in initrd.
Sep 13 00:04:51.121637 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:04:51.121654 systemd[1]: Hostname set to .
Sep 13 00:04:51.121677 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:04:51.121695 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:04:51.121713 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:04:51.121730 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:04:51.121747 systemd[1]: Reached target paths.target.
Sep 13 00:04:51.121765 systemd[1]: Reached target slices.target.
Sep 13 00:04:51.121782 systemd[1]: Reached target swap.target.
Sep 13 00:04:51.121799 systemd[1]: Reached target timers.target.
Sep 13 00:04:51.121822 systemd[1]: Listening on iscsid.socket.
Sep 13 00:04:51.121840 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:04:51.121857 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:04:51.121874 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:04:51.121892 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:04:51.121909 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:04:51.121928 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:04:51.121946 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:04:51.121968 systemd[1]: Reached target sockets.target.
Sep 13 00:04:51.121985 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:04:51.122003 systemd[1]: Finished network-cleanup.service.
Sep 13 00:04:51.122021 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:04:51.122038 systemd[1]: Starting systemd-journald.service...
Sep 13 00:04:51.122056 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:04:51.122073 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:04:51.122091 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:04:51.122108 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:04:51.122129 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:04:51.122147 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:04:51.122164 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:04:51.122181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:04:51.122198 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:04:51.122217 kernel: audit: type=1130 audit(1757721891.060:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.122235 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:04:51.122252 kernel: audit: type=1130 audit(1757721891.083:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.123012 systemd[1]: Starting dracut-cmdline.service...
Sep 13 00:04:51.123039 systemd-journald[310]: Journal started
Sep 13 00:04:51.123140 systemd-journald[310]: Runtime Journal (/run/log/journal/ec223b99394181074a689222df1fb52e) is 8.0M, max 75.4M, 67.4M free.
Sep 13 00:04:51.127403 systemd[1]: Started systemd-journald.service.
Sep 13 00:04:51.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.012179 systemd-modules-load[311]: Inserted module 'overlay'
Sep 13 00:04:51.062219 systemd-resolved[312]: Positive Trust Anchors:
Sep 13 00:04:51.143945 kernel: audit: type=1130 audit(1757721891.124:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.062233 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:04:51.062364 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:04:51.189249 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:04:51.189777 dracut-cmdline[327]: dracut-dracut-053
Sep 13 00:04:51.191993 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 00:04:51.210055 systemd-modules-load[311]: Inserted module 'br_netfilter'
Sep 13 00:04:51.212692 kernel: Bridge firewalling registered
Sep 13 00:04:51.234315 kernel: SCSI subsystem initialized
Sep 13 00:04:51.251205 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:04:51.251307 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:04:51.252392 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 00:04:51.263440 systemd-modules-load[311]: Inserted module 'dm_multipath'
Sep 13 00:04:51.267822 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:04:51.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.282274 kernel: audit: type=1130 audit(1757721891.269:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.284889 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:04:51.307520 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:04:51.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.322931 kernel: audit: type=1130 audit(1757721891.312:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.405284 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:04:51.425302 kernel: iscsi: registered transport (tcp)
Sep 13 00:04:51.453019 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:04:51.453102 kernel: QLogic iSCSI HBA Driver
Sep 13 00:04:51.593295 kernel: random: crng init done
Sep 13 00:04:51.593447 systemd-resolved[312]: Defaulting to hostname 'linux'.
Sep 13 00:04:51.597633 systemd[1]: Started systemd-resolved.service.
Sep 13 00:04:51.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.597925 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:04:51.613692 kernel: audit: type=1130 audit(1757721891.596:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.622748 systemd[1]: Finished dracut-cmdline.service.
Sep 13 00:04:51.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.628302 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 00:04:51.640299 kernel: audit: type=1130 audit(1757721891.625:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:51.697327 kernel: raid6: neonx8 gen() 6407 MB/s
Sep 13 00:04:51.715303 kernel: raid6: neonx8 xor() 4601 MB/s
Sep 13 00:04:51.733295 kernel: raid6: neonx4 gen() 6606 MB/s
Sep 13 00:04:51.751290 kernel: raid6: neonx4 xor() 4773 MB/s
Sep 13 00:04:51.769288 kernel: raid6: neonx2 gen() 5835 MB/s
Sep 13 00:04:51.787289 kernel: raid6: neonx2 xor() 4400 MB/s
Sep 13 00:04:51.805289 kernel: raid6: neonx1 gen() 4510 MB/s
Sep 13 00:04:51.823289 kernel: raid6: neonx1 xor() 3596 MB/s
Sep 13 00:04:51.841289 kernel: raid6: int64x8 gen() 3446 MB/s
Sep 13 00:04:51.859289 kernel: raid6: int64x8 xor() 2059 MB/s
Sep 13 00:04:51.877289 kernel: raid6: int64x4 gen() 3864 MB/s
Sep 13 00:04:51.895288 kernel: raid6: int64x4 xor() 2165 MB/s
Sep 13 00:04:51.913289 kernel: raid6: int64x2 gen() 3621 MB/s
Sep 13 00:04:51.931289 kernel: raid6: int64x2 xor() 1924 MB/s
Sep 13 00:04:51.949289 kernel: raid6: int64x1 gen() 2765 MB/s
Sep 13 00:04:51.968691 kernel: raid6: int64x1 xor() 1437 MB/s
Sep 13 00:04:51.968731 kernel: raid6: using algorithm neonx4 gen() 6606 MB/s
Sep 13 00:04:51.968756 kernel: raid6: .... xor() 4773 MB/s, rmw enabled
Sep 13 00:04:51.970460 kernel: raid6: using neon recovery algorithm
Sep 13 00:04:51.990705 kernel: xor: measuring software checksum speed
Sep 13 00:04:51.990767 kernel: 8regs : 9298 MB/sec
Sep 13 00:04:51.992580 kernel: 32regs : 11106 MB/sec
Sep 13 00:04:51.994505 kernel: arm64_neon : 9561 MB/sec
Sep 13 00:04:51.994535 kernel: xor: using function: 32regs (11106 MB/sec)
Sep 13 00:04:52.092308 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 13 00:04:52.109476 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 00:04:52.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:52.126000 audit: BPF prog-id=7 op=LOAD
Sep 13 00:04:52.129948 kernel: audit: type=1130 audit(1757721892.118:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:52.130007 kernel: audit: type=1334 audit(1757721892.126:10): prog-id=7 op=LOAD
Sep 13 00:04:52.128176 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:04:52.126000 audit: BPF prog-id=8 op=LOAD
Sep 13 00:04:52.161138 systemd-udevd[509]: Using default interface naming scheme 'v252'.
Sep 13 00:04:52.172590 systemd[1]: Started systemd-udevd.service.
Sep 13 00:04:52.176592 systemd[1]: Starting dracut-pre-trigger.service...
Sep 13 00:04:52.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:52.209231 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
Sep 13 00:04:52.268782 systemd[1]: Finished dracut-pre-trigger.service.
Sep 13 00:04:52.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:52.278397 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:04:52.382546 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:04:52.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:52.499295 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 13 00:04:52.504331 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 13 00:04:52.515416 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 13 00:04:52.515671 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 13 00:04:52.515886 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:87:fa:bd:ff:1f
Sep 13 00:04:52.516090 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 13 00:04:52.520289 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 13 00:04:52.523097 (udev-worker)[562]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:04:52.532290 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 13 00:04:52.540843 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:04:52.540905 kernel: GPT:9289727 != 16777215
Sep 13 00:04:52.543162 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:04:52.546978 kernel: GPT:9289727 != 16777215
Sep 13 00:04:52.549734 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:04:52.551285 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:04:52.617302 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (555)
Sep 13 00:04:52.680117 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 13 00:04:52.712079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:04:52.739410 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 13 00:04:52.744492 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 13 00:04:52.760325 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 13 00:04:52.774920 systemd[1]: Starting disk-uuid.service...
Sep 13 00:04:52.786984 disk-uuid[661]: Primary Header is updated.
Sep 13 00:04:52.786984 disk-uuid[661]: Secondary Entries is updated.
Sep 13 00:04:52.786984 disk-uuid[661]: Secondary Header is updated.
Sep 13 00:04:52.797289 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:04:52.805291 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:04:52.812292 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:04:53.822828 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:04:53.822900 disk-uuid[662]: The operation has completed successfully.
Sep 13 00:04:53.984824 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:04:53.985037 systemd[1]: Finished disk-uuid.service.
Sep 13 00:04:53.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:53.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:54.012719 systemd[1]: Starting verity-setup.service...
Sep 13 00:04:54.047303 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 13 00:04:54.140516 systemd[1]: Found device dev-mapper-usr.device.
Sep 13 00:04:54.150458 systemd[1]: Mounting sysusr-usr.mount...
Sep 13 00:04:54.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:54.157890 systemd[1]: Finished verity-setup.service.
Sep 13 00:04:54.249306 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 13 00:04:54.250146 systemd[1]: Mounted sysusr-usr.mount.
Sep 13 00:04:54.253294 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 13 00:04:54.254557 systemd[1]: Starting ignition-setup.service...
Sep 13 00:04:54.271956 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 13 00:04:54.292315 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:04:54.292382 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:04:54.292406 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Sep 13 00:04:54.321290 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:04:54.340818 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:04:54.371907 systemd[1]: Finished ignition-setup.service.
Sep 13 00:04:54.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:54.381514 systemd[1]: Starting ignition-fetch-offline.service...
Sep 13 00:04:54.437321 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 13 00:04:54.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:54.441000 audit: BPF prog-id=9 op=LOAD
Sep 13 00:04:54.443771 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:04:54.495378 systemd-networkd[1185]: lo: Link UP
Sep 13 00:04:54.495401 systemd-networkd[1185]: lo: Gained carrier
Sep 13 00:04:54.496982 systemd-networkd[1185]: Enumeration completed
Sep 13 00:04:54.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:54.497892 systemd-networkd[1185]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:04:54.498686 systemd[1]: Started systemd-networkd.service.
Sep 13 00:04:54.505162 systemd-networkd[1185]: eth0: Link UP
Sep 13 00:04:54.505209 systemd-networkd[1185]: eth0: Gained carrier
Sep 13 00:04:54.508412 systemd[1]: Reached target network.target.
Sep 13 00:04:54.517616 systemd[1]: Starting iscsiuio.service...
Sep 13 00:04:54.545437 systemd-networkd[1185]: eth0: DHCPv4 address 172.31.24.134/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 13 00:04:54.551702 systemd[1]: Started iscsiuio.service.
Sep 13 00:04:54.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:54.560176 systemd[1]: Starting iscsid.service...
Sep 13 00:04:54.571627 iscsid[1190]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 00:04:54.571627 iscsid[1190]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Sep 13 00:04:54.571627 iscsid[1190]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 13 00:04:54.571627 iscsid[1190]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 13 00:04:54.571627 iscsid[1190]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 00:04:54.598245 iscsid[1190]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 13 00:04:54.607853 systemd[1]: Started iscsid.service.
Sep 13 00:04:54.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:54.615975 systemd[1]: Starting dracut-initqueue.service...
Sep 13 00:04:54.641640 systemd[1]: Finished dracut-initqueue.service.
Sep 13 00:04:54.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:54.647217 systemd[1]: Reached target remote-fs-pre.target.
Sep 13 00:04:54.651490 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:04:54.661982 systemd[1]: Reached target remote-fs.target.
Sep 13 00:04:54.665508 systemd[1]: Starting dracut-pre-mount.service...
Sep 13 00:04:54.687362 systemd[1]: Finished dracut-pre-mount.service.
Sep 13 00:04:54.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:55.159650 ignition[1147]: Ignition 2.14.0
Sep 13 00:04:55.159680 ignition[1147]: Stage: fetch-offline
Sep 13 00:04:55.163499 ignition[1147]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:04:55.163625 ignition[1147]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:04:55.190211 ignition[1147]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:04:55.191438 ignition[1147]: Ignition finished successfully
Sep 13 00:04:55.197429 systemd[1]: Finished ignition-fetch-offline.service.
Sep 13 00:04:55.212538 kernel: kauditd_printk_skb: 15 callbacks suppressed
Sep 13 00:04:55.212576 kernel: audit: type=1130 audit(1757721895.198:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:55.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:55.212651 systemd[1]: Starting ignition-fetch.service...
Sep 13 00:04:55.229849 ignition[1209]: Ignition 2.14.0
Sep 13 00:04:55.229876 ignition[1209]: Stage: fetch
Sep 13 00:04:55.230174 ignition[1209]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:04:55.230232 ignition[1209]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:04:55.245896 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:04:55.249352 ignition[1209]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:04:55.260571 ignition[1209]: INFO : PUT result: OK
Sep 13 00:04:55.264616 ignition[1209]: DEBUG : parsed url from cmdline: ""
Sep 13 00:04:55.271781 ignition[1209]: INFO : no config URL provided
Sep 13 00:04:55.271781 ignition[1209]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:04:55.271781 ignition[1209]: INFO : no config at "/usr/lib/ignition/user.ign"
Sep 13 00:04:55.271781 ignition[1209]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:04:55.283750 ignition[1209]: INFO : PUT result: OK
Sep 13 00:04:55.283750 ignition[1209]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 13 00:04:55.288852 ignition[1209]: INFO : GET result: OK
Sep 13 00:04:55.290819 ignition[1209]: DEBUG : parsing config with SHA512: 09675743fc922465913ba5e3175b305256bb79b406787c6967dc61230aa245cc1d4b2a07377f9961a19d9998f58f82385394998e486d54c67bc795ebd18ca472
Sep 13 00:04:55.300342 unknown[1209]: fetched base config from "system"
Sep 13 00:04:55.300977 unknown[1209]: fetched base config from "system"
Sep 13 00:04:55.301147 unknown[1209]: fetched user config from "aws"
Sep 13 00:04:55.308548 ignition[1209]: fetch: fetch complete
Sep 13 00:04:55.308716 ignition[1209]: fetch: fetch passed
Sep 13 00:04:55.308812 ignition[1209]: Ignition finished successfully
Sep 13 00:04:55.316228 systemd[1]: Finished ignition-fetch.service.
Sep 13 00:04:55.327195 kernel: audit: type=1130 audit(1757721895.315:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:55.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:55.328619 systemd[1]: Starting ignition-kargs.service...
Sep 13 00:04:55.347013 ignition[1215]: Ignition 2.14.0
Sep 13 00:04:55.347040 ignition[1215]: Stage: kargs
Sep 13 00:04:55.347378 ignition[1215]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:04:55.347434 ignition[1215]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:04:55.362962 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:04:55.366160 ignition[1215]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:04:55.369080 ignition[1215]: INFO : PUT result: OK
Sep 13 00:04:55.379313 ignition[1215]: kargs: kargs passed
Sep 13 00:04:55.379422 ignition[1215]: Ignition finished successfully
Sep 13 00:04:55.384503 systemd[1]: Finished ignition-kargs.service.
Sep 13 00:04:55.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:55.389798 systemd[1]: Starting ignition-disks.service...
Sep 13 00:04:55.399495 kernel: audit: type=1130 audit(1757721895.387:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:55.407054 ignition[1221]: Ignition 2.14.0
Sep 13 00:04:55.409029 ignition[1221]: Stage: disks
Sep 13 00:04:55.410806 ignition[1221]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:04:55.413699 ignition[1221]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:04:55.427465 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:04:55.430655 ignition[1221]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:04:55.433865 ignition[1221]: INFO : PUT result: OK
Sep 13 00:04:55.443625 ignition[1221]: disks: disks passed
Sep 13 00:04:55.443746 ignition[1221]: Ignition finished successfully
Sep 13 00:04:55.448985 systemd[1]: Finished ignition-disks.service.
Sep 13 00:04:55.452881 systemd[1]: Reached target initrd-root-device.target.
Sep 13 00:04:55.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:55.474321 kernel: audit: type=1130 audit(1757721895.451:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:55.453055 systemd[1]: Reached target local-fs-pre.target.
Sep 13 00:04:55.453799 systemd[1]: Reached target local-fs.target.
Sep 13 00:04:55.454188 systemd[1]: Reached target sysinit.target.
Sep 13 00:04:55.454586 systemd[1]: Reached target basic.target.
Sep 13 00:04:55.466899 systemd[1]: Starting systemd-fsck-root.service...
Sep 13 00:04:55.524276 systemd-fsck[1229]: ROOT: clean, 629/553520 files, 56027/553472 blocks
Sep 13 00:04:55.530110 systemd[1]: Finished systemd-fsck-root.service.
Sep 13 00:04:55.548217 kernel: audit: type=1130 audit(1757721895.529:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:55.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:55.533859 systemd[1]: Mounting sysroot.mount...
Sep 13 00:04:55.569281 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 00:04:55.570970 systemd[1]: Mounted sysroot.mount.
Sep 13 00:04:55.575913 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 00:04:55.585183 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 00:04:55.588493 systemd-networkd[1185]: eth0: Gained IPv6LL
Sep 13 00:04:55.589959 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Sep 13 00:04:55.590082 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:04:55.590151 systemd[1]: Reached target ignition-diskful.target.
Sep 13 00:04:55.615732 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 00:04:55.642089 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 00:04:55.648052 systemd[1]: Starting initrd-setup-root.service...
Sep 13 00:04:55.666799 initrd-setup-root[1251]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:04:55.683304 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1246)
Sep 13 00:04:55.687299 initrd-setup-root[1259]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:04:55.695385 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:04:55.695425 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:04:55.695448 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Sep 13 00:04:55.703791 initrd-setup-root[1283]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:04:55.714528 initrd-setup-root[1291]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:04:55.724326 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:04:55.736227 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:04:55.959786 systemd[1]: Finished initrd-setup-root.service.
Sep 13 00:04:55.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:55.961524 systemd[1]: Starting ignition-mount.service...
Sep 13 00:04:55.982618 systemd[1]: Starting sysroot-boot.service...
Sep 13 00:04:55.987301 kernel: audit: type=1130 audit(1757721895.958:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:55.992936 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Sep 13 00:04:55.993409 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Sep 13 00:04:56.030807 ignition[1312]: INFO : Ignition 2.14.0
Sep 13 00:04:56.030807 ignition[1312]: INFO : Stage: mount
Sep 13 00:04:56.037546 ignition[1312]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:04:56.043756 ignition[1312]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:04:56.058630 kernel: audit: type=1130 audit(1757721896.042:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:56.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:56.045633 systemd[1]: Finished sysroot-boot.service.
Sep 13 00:04:56.068306 ignition[1312]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:04:56.071917 ignition[1312]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:04:56.076050 ignition[1312]: INFO : PUT result: OK
Sep 13 00:04:56.082380 ignition[1312]: INFO : mount: mount passed
Sep 13 00:04:56.082380 ignition[1312]: INFO : Ignition finished successfully
Sep 13 00:04:56.087058 systemd[1]: Finished ignition-mount.service.
Sep 13 00:04:56.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:56.098402 systemd[1]: Starting ignition-files.service...
Sep 13 00:04:56.101022 kernel: audit: type=1130 audit(1757721896.089:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:56.117111 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 00:04:56.144295 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1322)
Sep 13 00:04:56.149952 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:04:56.150004 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:04:56.150029 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Sep 13 00:04:56.165297 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:04:56.170292 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:04:56.190156 ignition[1341]: INFO : Ignition 2.14.0
Sep 13 00:04:56.190156 ignition[1341]: INFO : Stage: files
Sep 13 00:04:56.194456 ignition[1341]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:04:56.194456 ignition[1341]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 13 00:04:56.213874 ignition[1341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:04:56.216981 ignition[1341]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:04:56.220721 ignition[1341]: INFO : PUT result: OK
Sep 13 00:04:56.228051 ignition[1341]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:04:56.234559 ignition[1341]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:04:56.238208 ignition[1341]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:04:56.272132 ignition[1341]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:04:56.275730 ignition[1341]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:04:56.280790 unknown[1341]: wrote ssh authorized keys file for user: core
Sep 13 00:04:56.283517 ignition[1341]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:04:56.288084 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:04:56.292716 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:04:56.297345 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:04:56.302220 ignition[1341]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 13 00:04:56.361147 ignition[1341]: INFO : GET result: OK
Sep 13 00:04:56.563663 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:04:56.568494 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:04:56.573056 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:04:56.577639 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:04:56.584665 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:04:56.591396 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 13 00:04:56.596371 ignition[1341]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:04:56.611483 ignition[1341]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem206597102"
Sep 13 00:04:56.619734 ignition[1341]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem206597102": device or resource busy
Sep 13 00:04:56.619734 ignition[1341]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem206597102", trying btrfs: device or resource busy
Sep 13 00:04:56.619734 ignition[1341]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem206597102"
Sep 13 00:04:56.619734 ignition[1341]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem206597102"
Sep 13 00:04:56.646548 ignition[1341]: INFO : op(3): [started] unmounting "/mnt/oem206597102"
Sep 13 00:04:56.649767 ignition[1341]: INFO : op(3): [finished] unmounting "/mnt/oem206597102"
Sep 13 00:04:56.649163 systemd[1]: mnt-oem206597102.mount: Deactivated successfully.
Sep 13 00:04:56.655851 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 13 00:04:56.655851 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:04:56.665814 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:04:56.665814 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:04:56.675016 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:04:56.679915 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:04:56.685000 ignition[1341]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 13 00:04:56.905830 ignition[1341]: INFO : GET result: OK
Sep 13 00:04:57.080223 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:04:57.088401 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:04:57.099787 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:04:57.107898 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:04:57.113788 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:04:57.118242 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 00:04:57.122981 ignition[1341]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:04:57.141419 ignition[1341]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem740755486"
Sep 13 00:04:57.144871 ignition[1341]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem740755486": device or resource busy
Sep 13 00:04:57.144871 ignition[1341]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem740755486", trying btrfs: device or resource busy
Sep 13 00:04:57.144871 ignition[1341]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem740755486"
Sep 13 00:04:57.157247 ignition[1341]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem740755486"
Sep 13 00:04:57.157247 ignition[1341]: INFO : op(6): [started] unmounting "/mnt/oem740755486"
Sep 13 00:04:57.164914 ignition[1341]: INFO : op(6): [finished] unmounting "/mnt/oem740755486"
Sep 13 00:04:57.164914 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 00:04:57.164914 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:04:57.164914 ignition[1341]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 13 00:04:57.556655 ignition[1341]: INFO : GET result: OK
Sep 13 00:04:58.166326 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:04:58.172212 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 13 00:04:58.172212 ignition[1341]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:04:58.198729 ignition[1341]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem165855941"
Sep 13 00:04:58.202175 ignition[1341]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem165855941": device or resource busy
Sep 13 00:04:58.202175 ignition[1341]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem165855941", trying btrfs: device or resource busy
Sep 13 00:04:58.202175 ignition[1341]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem165855941"
Sep 13 00:04:58.220240 ignition[1341]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem165855941"
Sep 13 00:04:58.220240 ignition[1341]: INFO : op(9): [started] unmounting "/mnt/oem165855941"
Sep 13 00:04:58.228867 ignition[1341]: INFO : op(9): [finished] unmounting "/mnt/oem165855941"
Sep 13 00:04:58.228867 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 13 00:04:58.228867 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 13 00:04:58.228867 ignition[1341]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:04:58.263717 ignition[1341]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem842155198"
Sep 13 00:04:58.267750 ignition[1341]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem842155198": device or resource busy
Sep 13 00:04:58.267750 ignition[1341]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem842155198", trying btrfs: device or resource busy
Sep 13 00:04:58.267750 ignition[1341]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem842155198"
Sep 13 00:04:58.284220 ignition[1341]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem842155198"
Sep 13 00:04:58.284220 ignition[1341]: INFO : op(c): [started] unmounting "/mnt/oem842155198"
Sep 13 00:04:58.284220 ignition[1341]: INFO : op(c): [finished] unmounting "/mnt/oem842155198"
Sep 13 00:04:58.284220 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 13 00:04:58.284220 ignition[1341]: INFO : files: op(11): [started] processing unit "nvidia.service"
Sep 13 00:04:58.284220 ignition[1341]: INFO : files: op(11): [finished] processing unit "nvidia.service"
Sep 13 00:04:58.284220 ignition[1341]: INFO : files: op(12): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:04:58.284220 ignition[1341]: INFO : files: op(12): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:04:58.284220 ignition[1341]: INFO : files: op(13): [started] processing unit "amazon-ssm-agent.service"
Sep 13 00:04:58.284220 ignition[1341]: INFO : files: op(13): op(14): [started] writing unit "amazon-ssm-agent.service" at
"/sysroot/etc/systemd/system/amazon-ssm-agent.service" Sep 13 00:04:58.284220 ignition[1341]: INFO : files: op(13): op(14): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Sep 13 00:04:58.284220 ignition[1341]: INFO : files: op(13): [finished] processing unit "amazon-ssm-agent.service" Sep 13 00:04:58.284220 ignition[1341]: INFO : files: op(15): [started] processing unit "containerd.service" Sep 13 00:04:58.302320 systemd[1]: mnt-oem842155198.mount: Deactivated successfully. Sep 13 00:04:58.348111 ignition[1341]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:04:58.355912 ignition[1341]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:04:58.355912 ignition[1341]: INFO : files: op(15): [finished] processing unit "containerd.service" Sep 13 00:04:58.355912 ignition[1341]: INFO : files: op(17): [started] processing unit "prepare-helm.service" Sep 13 00:04:58.368620 ignition[1341]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:04:58.368620 ignition[1341]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:04:58.368620 ignition[1341]: INFO : files: op(17): [finished] processing unit "prepare-helm.service" Sep 13 00:04:58.368620 ignition[1341]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service" Sep 13 00:04:58.385640 ignition[1341]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service" Sep 13 00:04:58.385640 ignition[1341]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Sep 
13 00:04:58.385640 ignition[1341]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:04:58.385640 ignition[1341]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service" Sep 13 00:04:58.385640 ignition[1341]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service" Sep 13 00:04:58.385640 ignition[1341]: INFO : files: op(1c): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:04:58.385640 ignition[1341]: INFO : files: op(1c): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:04:58.417046 ignition[1341]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:04:58.424450 ignition[1341]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:04:58.424450 ignition[1341]: INFO : files: files passed Sep 13 00:04:58.424450 ignition[1341]: INFO : Ignition finished successfully Sep 13 00:04:58.431605 systemd[1]: Finished ignition-files.service. Sep 13 00:04:58.464238 kernel: audit: type=1130 audit(1757721898.434:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.440679 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:04:58.457369 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Sep 13 00:04:58.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.458984 systemd[1]: Starting ignition-quench.service... Sep 13 00:04:58.509732 initrd-setup-root-after-ignition[1366]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:04:58.529069 kernel: audit: type=1130 audit(1757721898.469:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.472708 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:04:58.473121 systemd[1]: Reached target ignition-complete.target. Sep 13 00:04:58.475323 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:04:58.513729 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:04:58.513969 systemd[1]: Finished initrd-parse-etc.service. 
Sep 13 00:04:58.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.520887 systemd[1]: Reached target initrd-fs.target. Sep 13 00:04:58.523440 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:04:58.523971 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:04:58.524186 systemd[1]: Finished ignition-quench.service. Sep 13 00:04:58.526811 systemd[1]: Reached target initrd.target. Sep 13 00:04:58.531078 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:04:58.559850 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:04:58.581326 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:04:58.604222 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:04:58.608636 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:04:58.617181 systemd[1]: Stopped target timers.target. Sep 13 00:04:58.621146 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:04:58.624116 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:04:58.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.628793 systemd[1]: Stopped target initrd.target. Sep 13 00:04:58.632900 systemd[1]: Stopped target basic.target. Sep 13 00:04:58.636935 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:04:58.642047 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:04:58.646910 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:04:58.651804 systemd[1]: Stopped target remote-fs.target. Sep 13 00:04:58.656162 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:04:58.660772 systemd[1]: Stopped target sysinit.target. 
Sep 13 00:04:58.664848 systemd[1]: Stopped target local-fs.target. Sep 13 00:04:58.669063 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:04:58.673608 systemd[1]: Stopped target swap.target. Sep 13 00:04:58.677571 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:04:58.680358 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:04:58.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.684818 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:04:58.688917 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:04:58.691789 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:04:58.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.696140 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:04:58.699294 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:04:58.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.704381 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:04:58.706861 systemd[1]: Stopped ignition-files.service. Sep 13 00:04:58.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.713196 systemd[1]: Stopping ignition-mount.service... 
Sep 13 00:04:58.732333 ignition[1380]: INFO : Ignition 2.14.0 Sep 13 00:04:58.732333 ignition[1380]: INFO : Stage: umount Sep 13 00:04:58.732333 ignition[1380]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:04:58.732333 ignition[1380]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:04:58.750038 iscsid[1190]: iscsid shutting down. Sep 13 00:04:58.743711 systemd[1]: Stopping iscsid.service... Sep 13 00:04:58.760822 ignition[1380]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:04:58.764184 ignition[1380]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:04:58.768577 systemd[1]: Stopping sysroot-boot.service... Sep 13 00:04:58.780670 ignition[1380]: INFO : PUT result: OK Sep 13 00:04:58.786430 ignition[1380]: INFO : umount: umount passed Sep 13 00:04:58.788862 ignition[1380]: INFO : Ignition finished successfully Sep 13 00:04:58.791446 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:04:58.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.791824 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:04:58.794514 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:04:58.794779 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:04:58.802887 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 00:04:58.805620 systemd[1]: Stopped iscsid.service. 
Sep 13 00:04:58.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.819413 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:04:58.820387 systemd[1]: Stopped ignition-mount.service. Sep 13 00:04:58.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.830379 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:04:58.830616 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:04:58.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.846363 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:04:58.852171 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:04:58.852356 systemd[1]: Stopped ignition-disks.service. Sep 13 00:04:58.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.859487 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:04:58.859617 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:04:58.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:04:58.867953 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:04:58.868065 systemd[1]: Stopped ignition-fetch.service. Sep 13 00:04:58.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.878847 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:04:58.878961 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:04:58.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.886227 systemd[1]: Stopped target paths.target. Sep 13 00:04:58.890086 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:04:58.894405 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:04:58.899041 systemd[1]: Stopped target slices.target. Sep 13 00:04:58.902887 systemd[1]: Stopped target sockets.target. Sep 13 00:04:58.906906 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:04:58.907029 systemd[1]: Closed iscsid.socket. Sep 13 00:04:58.912701 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:04:58.912860 systemd[1]: Stopped ignition-setup.service. Sep 13 00:04:58.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.920909 systemd[1]: Stopping iscsiuio.service... Sep 13 00:04:58.926871 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 00:04:58.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:04:58.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.927101 systemd[1]: Stopped iscsiuio.service. Sep 13 00:04:58.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.931497 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:04:58.931711 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:04:58.934028 systemd[1]: Stopped target network.target. Sep 13 00:04:58.937195 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:04:58.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.938237 systemd[1]: Closed iscsiuio.socket. Sep 13 00:04:58.940522 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:04:58.940634 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:04:58.944746 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:04:58.946991 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:04:58.954394 systemd-networkd[1185]: eth0: DHCPv6 lease lost Sep 13 00:04:58.972000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:04:58.956409 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:04:58.956635 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:04:58.961978 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:04:58.962061 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:04:58.988450 systemd[1]: Stopping network-cleanup.service... Sep 13 00:04:58.992333 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Sep 13 00:04:58.992462 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:04:58.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.999183 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:04:58.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:58.999298 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:04:59.003470 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:04:59.005303 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:04:59.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:59.015806 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:04:59.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:59.020523 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:04:59.021451 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:04:59.032000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:04:59.021652 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:04:59.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:04:59.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:59.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:59.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:59.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:59.034322 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:04:59.034659 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:04:59.038416 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:04:59.038625 systemd[1]: Stopped network-cleanup.service. Sep 13 00:04:59.042520 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:04:59.042601 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:04:59.044794 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:04:59.044862 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:04:59.046995 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:04:59.047076 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:04:59.049218 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:04:59.049316 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:04:59.051371 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:04:59.051447 systemd[1]: Stopped dracut-cmdline-ask.service. 
Sep 13 00:04:59.054830 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:04:59.076631 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:04:59.076756 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 00:04:59.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:59.101917 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:04:59.102024 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:04:59.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:59.108166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:04:59.108314 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:04:59.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:59.115750 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:04:59.118485 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:04:59.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:59.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:59.123076 systemd[1]: Reached target initrd-switch-root.target. 
Sep 13 00:04:59.128629 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:04:59.164570 systemd[1]: Switching root. Sep 13 00:04:59.166000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:04:59.166000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:04:59.166000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:04:59.168000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:04:59.168000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:04:59.190476 systemd-journald[310]: Journal stopped Sep 13 00:05:05.819817 systemd-journald[310]: Received SIGTERM from PID 1 (systemd). Sep 13 00:05:05.819943 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:05:05.819987 kernel: SELinux: Class anon_inode not defined in policy. Sep 13 00:05:05.820020 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:05:05.820055 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:05:05.820085 kernel: SELinux: policy capability open_perms=1 Sep 13 00:05:05.820117 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:05:05.820149 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:05:05.820178 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:05:05.820209 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:05:05.820237 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:05:05.820293 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:05:05.820333 kernel: kauditd_printk_skb: 47 callbacks suppressed Sep 13 00:05:05.820365 kernel: audit: type=1403 audit(1757721900.399:83): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:05:05.820401 systemd[1]: Successfully loaded SELinux policy in 134.677ms. Sep 13 00:05:05.820452 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.217ms. 
Sep 13 00:05:05.820494 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:05:05.820529 systemd[1]: Detected virtualization amazon. Sep 13 00:05:05.820562 systemd[1]: Detected architecture arm64. Sep 13 00:05:05.820595 systemd[1]: Detected first boot. Sep 13 00:05:05.820631 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:05:05.820665 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:05:05.820697 kernel: audit: type=1400 audit(1757721900.857:84): avc: denied { associate } for pid=1430 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 00:05:05.820730 kernel: audit: type=1300 audit(1757721900.857:84): arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022174 a1=4000028180 a2=4000026340 a3=32 items=0 ppid=1413 pid=1430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:05.820771 kernel: audit: type=1327 audit(1757721900.857:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:05:05.820804 kernel: audit: type=1400 audit(1757721900.861:85): avc: denied { associate } for pid=1430 comm="torcx-generator" 
name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 13 00:05:05.820839 kernel: audit: type=1300 audit(1757721900.861:85): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022249 a2=1ed a3=0 items=2 ppid=1413 pid=1430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:05.820870 kernel: audit: type=1307 audit(1757721900.861:85): cwd="/" Sep 13 00:05:05.820902 kernel: audit: type=1302 audit(1757721900.861:85): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:05:05.820940 kernel: audit: type=1302 audit(1757721900.861:85): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:05:05.820972 kernel: audit: type=1327 audit(1757721900.861:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:05:05.821011 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:05:05.821050 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:05:05.821083 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Sep 13 00:05:05.821120 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:05:05.821150 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:05:05.821182 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Sep 13 00:05:05.821216 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:05:05.821247 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:05:05.821300 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 13 00:05:05.821340 systemd[1]: Created slice system-getty.slice. Sep 13 00:05:05.821374 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:05:05.821405 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:05:05.821438 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:05:05.821472 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:05:05.821504 systemd[1]: Created slice user.slice. Sep 13 00:05:05.821534 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:05:05.821566 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:05:05.821599 systemd[1]: Set up automount boot.automount. Sep 13 00:05:05.821631 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:05:05.821663 systemd[1]: Reached target integritysetup.target. Sep 13 00:05:05.821693 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:05:05.821724 systemd[1]: Reached target remote-fs.target. Sep 13 00:05:05.821757 systemd[1]: Reached target slices.target. Sep 13 00:05:05.821789 systemd[1]: Reached target swap.target. Sep 13 00:05:05.821820 systemd[1]: Reached target torcx.target. Sep 13 00:05:05.821852 systemd[1]: Reached target veritysetup.target. Sep 13 00:05:05.821888 systemd[1]: Listening on systemd-coredump.socket. 
Sep 13 00:05:05.821924 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:05:05.821959 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:05:05.821993 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:05:05.822026 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:05:05.822058 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:05:05.822090 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:05:05.822121 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:05:05.822152 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:05:05.822185 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:05:05.822220 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:05:05.822251 systemd[1]: Mounting media.mount... Sep 13 00:05:05.822330 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:05:05.822362 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:05:05.822391 systemd[1]: Mounting tmp.mount... Sep 13 00:05:05.822421 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:05:05.822453 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:05:05.822483 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:05:05.822515 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:05:05.822550 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:05:05.822580 systemd[1]: Starting modprobe@drm.service... Sep 13 00:05:05.822609 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:05:05.822653 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:05:05.822684 systemd[1]: Starting modprobe@loop.service... Sep 13 00:05:05.822716 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:05:05.822747 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Sep 13 00:05:05.822777 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 13 00:05:05.822811 systemd[1]: Starting systemd-journald.service... Sep 13 00:05:05.822841 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:05:05.822870 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:05:05.822899 kernel: fuse: init (API version 7.34) Sep 13 00:05:05.822931 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:05:05.822960 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:05:05.822992 systemd[1]: Mounted dev-hugepages.mount. Sep 13 00:05:05.823021 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:05:05.823050 systemd[1]: Mounted media.mount. Sep 13 00:05:05.823080 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 00:05:05.823113 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:05:05.823144 systemd[1]: Mounted tmp.mount. Sep 13 00:05:05.823174 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:05:05.823203 kernel: kauditd_printk_skb: 2 callbacks suppressed Sep 13 00:05:05.823232 kernel: audit: type=1130 audit(1757721905.685:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.823280 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:05:05.823318 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:05:05.823350 kernel: loop: module loaded Sep 13 00:05:05.823385 kernel: audit: type=1130 audit(1757721905.707:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.823419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 13 00:05:05.823453 kernel: audit: type=1131 audit(1757721905.707:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.823483 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:05:05.823514 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:05:05.823559 kernel: audit: type=1130 audit(1757721905.733:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.823593 systemd[1]: Finished modprobe@drm.service. Sep 13 00:05:05.823625 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:05:05.823661 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:05:05.823691 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:05:05.823721 kernel: audit: type=1131 audit(1757721905.733:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.823753 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:05:05.823784 kernel: audit: type=1130 audit(1757721905.751:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.823818 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:05:05.823850 systemd[1]: Finished modprobe@loop.service. Sep 13 00:05:05.823880 kernel: audit: type=1131 audit(1757721905.751:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:05:05.823913 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:05:05.823949 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:05:05.823980 kernel: audit: type=1130 audit(1757721905.760:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.824016 systemd-journald[1526]: Journal started Sep 13 00:05:05.824122 systemd-journald[1526]: Runtime Journal (/run/log/journal/ec223b99394181074a689222df1fb52e) is 8.0M, max 75.4M, 67.4M free. Sep 13 00:05:05.364000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:05:05.364000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 00:05:05.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 13 00:05:05.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.836152 systemd[1]: Started systemd-journald.service. Sep 13 00:05:05.837768 kernel: audit: type=1131 audit(1757721905.760:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.834241 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:05:05.837698 systemd[1]: Reached target network-pre.target. Sep 13 00:05:05.871806 kernel: audit: type=1130 audit(1757721905.781:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:05:05.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.850622 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:05:05.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:05:05.815000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:05:05.815000 audit[1526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffedf35190 a2=4000 a3=1 items=0 ppid=1 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:05.815000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:05:05.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.873066 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:05:05.877504 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:05:05.887449 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:05:05.892769 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:05:05.895079 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:05:05.897705 systemd[1]: Starting systemd-random-seed.service... 
Sep 13 00:05:05.899898 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:05:05.907222 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:05:05.919146 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:05:05.925445 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:05:05.942750 systemd-journald[1526]: Time spent on flushing to /var/log/journal/ec223b99394181074a689222df1fb52e is 94.417ms for 1090 entries. Sep 13 00:05:05.942750 systemd-journald[1526]: System Journal (/var/log/journal/ec223b99394181074a689222df1fb52e) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:05:06.060338 systemd-journald[1526]: Received client request to flush runtime journal. Sep 13 00:05:05.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:06.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:05.970527 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:05:05.973681 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:05:05.976039 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:05:05.981163 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:05:06.022006 systemd[1]: Finished systemd-sysctl.service. 
Sep 13 00:05:06.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:06.063809 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:05:06.099411 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:05:06.105953 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:05:06.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:06.123867 udevadm[1579]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:05:06.165856 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:05:06.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:06.172691 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:05:06.315468 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:05:06.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:06.884313 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:05:06.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:06.888951 systemd[1]: Starting systemd-udevd.service... 
Sep 13 00:05:06.930078 systemd-udevd[1585]: Using default interface naming scheme 'v252'. Sep 13 00:05:06.986925 systemd[1]: Started systemd-udevd.service. Sep 13 00:05:06.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:06.993997 systemd[1]: Starting systemd-networkd.service... Sep 13 00:05:07.008694 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:05:07.083339 systemd[1]: Found device dev-ttyS0.device. Sep 13 00:05:07.109652 (udev-worker)[1592]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:05:07.128228 systemd[1]: Started systemd-userdbd.service. Sep 13 00:05:07.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:07.336598 systemd-networkd[1591]: lo: Link UP Sep 13 00:05:07.337199 systemd-networkd[1591]: lo: Gained carrier Sep 13 00:05:07.338375 systemd-networkd[1591]: Enumeration completed Sep 13 00:05:07.338609 systemd-networkd[1591]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:05:07.338611 systemd[1]: Started systemd-networkd.service. Sep 13 00:05:07.344912 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:05:07.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:05:07.354304 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:05:07.354695 systemd-networkd[1591]: eth0: Link UP Sep 13 00:05:07.355001 systemd-networkd[1591]: eth0: Gained carrier Sep 13 00:05:07.368506 systemd-networkd[1591]: eth0: DHCPv4 address 172.31.24.134/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:05:07.538947 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:05:07.542627 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:05:07.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:07.547876 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:05:07.617042 lvm[1705]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:05:07.656172 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:05:07.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:07.661862 systemd[1]: Reached target cryptsetup.target. Sep 13 00:05:07.666865 systemd[1]: Starting lvm2-activation.service... Sep 13 00:05:07.675122 lvm[1707]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:05:07.714324 systemd[1]: Finished lvm2-activation.service. Sep 13 00:05:07.718808 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:05:07.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:05:07.721975 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:05:07.722043 systemd[1]: Reached target local-fs.target. Sep 13 00:05:07.724234 systemd[1]: Reached target machines.target. Sep 13 00:05:07.729146 systemd[1]: Starting ldconfig.service... Sep 13 00:05:07.732524 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:05:07.732676 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:05:07.735208 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:05:07.739747 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:05:07.753397 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:05:07.760140 systemd[1]: Starting systemd-sysext.service... Sep 13 00:05:07.764910 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1710 (bootctl) Sep 13 00:05:07.768734 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:05:07.813280 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:05:07.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:07.817186 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:05:07.832496 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:05:07.833441 systemd[1]: Unmounted usr-share-oem.mount. 
Sep 13 00:05:07.863289 kernel: loop0: detected capacity change from 0 to 203944 Sep 13 00:05:07.926832 systemd-fsck[1721]: fsck.fat 4.2 (2021-01-31) Sep 13 00:05:07.926832 systemd-fsck[1721]: /dev/nvme0n1p1: 236 files, 117310/258078 clusters Sep 13 00:05:07.933185 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:05:07.940984 systemd[1]: Mounting boot.mount... Sep 13 00:05:07.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:07.977990 systemd[1]: Mounted boot.mount. Sep 13 00:05:07.985853 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:05:07.993913 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:05:07.997494 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:05:08.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.021546 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:05:08.022559 kernel: loop1: detected capacity change from 0 to 203944 Sep 13 00:05:08.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.047615 (sd-sysext)[1742]: Using extensions 'kubernetes'. Sep 13 00:05:08.048489 (sd-sysext)[1742]: Merged extensions into '/usr'. Sep 13 00:05:08.088425 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:05:08.091392 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Sep 13 00:05:08.096945 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:05:08.102491 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:05:08.110748 systemd[1]: Starting modprobe@loop.service... Sep 13 00:05:08.115661 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:05:08.115963 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:05:08.130397 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:05:08.134978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:05:08.135418 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:05:08.139997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:05:08.140390 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:05:08.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.145848 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 13 00:05:08.146967 systemd[1]: Finished modprobe@loop.service. Sep 13 00:05:08.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.152197 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:05:08.152524 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:05:08.154819 systemd[1]: Finished systemd-sysext.service. Sep 13 00:05:08.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.162340 systemd[1]: Starting ensure-sysext.service... Sep 13 00:05:08.169571 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:05:08.191406 systemd[1]: Reloading. Sep 13 00:05:08.200561 systemd-tmpfiles[1757]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:05:08.212885 systemd-tmpfiles[1757]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:05:08.225732 systemd-tmpfiles[1757]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 13 00:05:08.310742 /usr/lib/systemd/system-generators/torcx-generator[1776]: time="2025-09-13T00:05:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:05:08.311465 /usr/lib/systemd/system-generators/torcx-generator[1776]: time="2025-09-13T00:05:08Z" level=info msg="torcx already run" Sep 13 00:05:08.452561 systemd-networkd[1591]: eth0: Gained IPv6LL Sep 13 00:05:08.562713 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:05:08.562974 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:05:08.610997 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:05:08.772197 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:05:08.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.811548 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:05:08.814739 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:05:08.823079 systemd[1]: Starting modprobe@drm.service... Sep 13 00:05:08.830477 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:05:08.836505 systemd[1]: Starting modprobe@loop.service... 
Sep 13 00:05:08.840028 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:05:08.840379 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:05:08.861811 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:05:08.867499 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:05:08.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.868585 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:05:08.872705 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:05:08.873320 systemd[1]: Finished modprobe@drm.service. Sep 13 00:05:08.876936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:05:08.877547 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:05:08.881277 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:05:08.881870 systemd[1]: Finished modprobe@loop.service. Sep 13 00:05:08.887005 systemd[1]: Finished ensure-sysext.service. Sep 13 00:05:08.893699 systemd[1]: Starting audit-rules.service... Sep 13 00:05:08.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:05:08.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.900929 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:05:08.918745 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:05:08.924592 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 13 00:05:08.924762 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:05:08.931732 systemd[1]: Starting systemd-resolved.service... Sep 13 00:05:08.942135 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:05:08.956195 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:05:08.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:08.963602 systemd[1]: Finished clean-ca-certificates.service. Sep 13 00:05:08.968752 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:05:08.994000 audit[1859]: SYSTEM_BOOT pid=1859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:05:09.000226 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:05:09.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:09.040780 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:05:09.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:05:09.130000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:05:09.130000 audit[1875]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffa0d9df0 a2=420 a3=0 items=0 ppid=1849 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:09.130000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:05:09.132024 augenrules[1875]: No rules Sep 13 00:05:09.134004 systemd[1]: Finished audit-rules.service. Sep 13 00:05:09.174354 systemd-resolved[1856]: Positive Trust Anchors: Sep 13 00:05:09.174935 systemd-resolved[1856]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:05:09.175113 systemd-resolved[1856]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:05:09.202524 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:05:09.205120 systemd[1]: Reached target time-set.target. Sep 13 00:05:09.240218 systemd-resolved[1856]: Defaulting to hostname 'linux'. Sep 13 00:05:09.244867 systemd[1]: Started systemd-resolved.service. Sep 13 00:05:09.247199 systemd[1]: Reached target network.target. Sep 13 00:05:09.249298 systemd[1]: Reached target network-online.target. Sep 13 00:05:09.251589 systemd[1]: Reached target nss-lookup.target. 
Sep 13 00:05:09.289664 systemd-timesyncd[1858]: Contacted time server 45.79.214.107:123 (0.flatcar.pool.ntp.org). Sep 13 00:05:09.290632 systemd-timesyncd[1858]: Initial clock synchronization to Sat 2025-09-13 00:05:09.584721 UTC. Sep 13 00:05:09.304366 ldconfig[1709]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:05:09.315803 systemd[1]: Finished ldconfig.service. Sep 13 00:05:09.321538 systemd[1]: Starting systemd-update-done.service... Sep 13 00:05:09.339611 systemd[1]: Finished systemd-update-done.service. Sep 13 00:05:09.342618 systemd[1]: Reached target sysinit.target. Sep 13 00:05:09.345297 systemd[1]: Started motdgen.path. Sep 13 00:05:09.347793 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:05:09.351085 systemd[1]: Started logrotate.timer. Sep 13 00:05:09.353408 systemd[1]: Started mdadm.timer. Sep 13 00:05:09.355301 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:05:09.357594 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:05:09.357664 systemd[1]: Reached target paths.target. Sep 13 00:05:09.359870 systemd[1]: Reached target timers.target. Sep 13 00:05:09.362463 systemd[1]: Listening on dbus.socket. Sep 13 00:05:09.367801 systemd[1]: Starting docker.socket... Sep 13 00:05:09.376427 systemd[1]: Listening on sshd.socket. Sep 13 00:05:09.378995 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:05:09.380076 systemd[1]: Listening on docker.socket. Sep 13 00:05:09.382618 systemd[1]: Reached target sockets.target. Sep 13 00:05:09.385246 systemd[1]: Reached target basic.target. 
Sep 13 00:05:09.388100 systemd[1]: System is tainted: cgroupsv1 Sep 13 00:05:09.388221 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:05:09.388365 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:05:09.391715 systemd[1]: Started amazon-ssm-agent.service. Sep 13 00:05:09.400200 systemd[1]: Starting containerd.service... Sep 13 00:05:09.411720 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 13 00:05:09.420006 systemd[1]: Starting dbus.service... Sep 13 00:05:09.425215 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:05:09.432031 systemd[1]: Starting extend-filesystems.service... Sep 13 00:05:09.445660 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:05:09.458589 systemd[1]: Starting kubelet.service... Sep 13 00:05:09.474768 systemd[1]: Starting motdgen.service... Sep 13 00:05:09.488632 systemd[1]: Started nvidia.service. Sep 13 00:05:09.501535 systemd[1]: Starting prepare-helm.service... Sep 13 00:05:09.550947 jq[1892]: false Sep 13 00:05:09.524357 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:05:09.532747 systemd[1]: Starting sshd-keygen.service... Sep 13 00:05:09.540461 systemd[1]: Starting systemd-logind.service... Sep 13 00:05:09.546489 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:05:09.546693 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:05:09.551934 systemd[1]: Starting update-engine.service... Sep 13 00:05:09.557493 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Sep 13 00:05:09.592839 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:05:09.593498 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:05:09.616112 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:05:09.616822 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:05:09.695854 jq[1908]: true Sep 13 00:05:09.740880 tar[1916]: linux-arm64/helm Sep 13 00:05:09.755775 jq[1928]: true Sep 13 00:05:09.828738 extend-filesystems[1893]: Found loop1 Sep 13 00:05:09.835466 extend-filesystems[1893]: Found nvme0n1 Sep 13 00:05:09.837983 extend-filesystems[1893]: Found nvme0n1p1 Sep 13 00:05:09.845712 extend-filesystems[1893]: Found nvme0n1p2 Sep 13 00:05:09.845712 extend-filesystems[1893]: Found nvme0n1p3 Sep 13 00:05:09.845712 extend-filesystems[1893]: Found usr Sep 13 00:05:09.845712 extend-filesystems[1893]: Found nvme0n1p4 Sep 13 00:05:09.845712 extend-filesystems[1893]: Found nvme0n1p6 Sep 13 00:05:09.845712 extend-filesystems[1893]: Found nvme0n1p7 Sep 13 00:05:09.845712 extend-filesystems[1893]: Found nvme0n1p9 Sep 13 00:05:09.845712 extend-filesystems[1893]: Checking size of /dev/nvme0n1p9 Sep 13 00:05:09.907236 dbus-daemon[1891]: [system] SELinux support is enabled Sep 13 00:05:09.907858 systemd[1]: Started dbus.service. Sep 13 00:05:09.914619 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:05:09.914712 systemd[1]: Reached target system-config.target. Sep 13 00:05:09.917485 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:05:09.917527 systemd[1]: Reached target user-config.target. Sep 13 00:05:09.943011 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:05:09.943825 systemd[1]: Finished motdgen.service. 
Sep 13 00:05:09.959056 dbus-daemon[1891]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1591 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 13 00:05:09.965465 bash[1955]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:05:09.966673 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:05:09.976231 dbus-daemon[1891]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:05:09.983537 systemd[1]: Starting systemd-hostnamed.service... Sep 13 00:05:10.042562 extend-filesystems[1893]: Resized partition /dev/nvme0n1p9 Sep 13 00:05:10.074688 extend-filesystems[1965]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 00:05:10.132326 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 13 00:05:10.157196 amazon-ssm-agent[1887]: 2025/09/13 00:05:10 Failed to load instance info from vault. RegistrationKey does not exist. Sep 13 00:05:10.186901 amazon-ssm-agent[1887]: Initializing new seelog logger Sep 13 00:05:10.188083 amazon-ssm-agent[1887]: New Seelog Logger Creation Complete Sep 13 00:05:10.189512 update_engine[1907]: I0913 00:05:10.188908 1907 main.cc:92] Flatcar Update Engine starting Sep 13 00:05:10.190455 amazon-ssm-agent[1887]: 2025/09/13 00:05:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:05:10.190455 amazon-ssm-agent[1887]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:05:10.192279 amazon-ssm-agent[1887]: 2025/09/13 00:05:10 processing appconfig overrides Sep 13 00:05:10.215662 systemd[1]: Started update-engine.service. Sep 13 00:05:10.216696 update_engine[1907]: I0913 00:05:10.216470 1907 update_check_scheduler.cc:74] Next update check in 11m49s Sep 13 00:05:10.223374 systemd[1]: Started locksmithd.service. 
Sep 13 00:05:10.242181 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 13 00:05:10.260809 env[1922]: time="2025-09-13T00:05:10.260701825Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:05:10.265539 extend-filesystems[1965]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 13 00:05:10.265539 extend-filesystems[1965]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:05:10.265539 extend-filesystems[1965]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 13 00:05:10.283019 extend-filesystems[1893]: Resized filesystem in /dev/nvme0n1p9 Sep 13 00:05:10.287401 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:05:10.288427 systemd[1]: Finished extend-filesystems.service. Sep 13 00:05:10.385931 systemd[1]: nvidia.service: Deactivated successfully. Sep 13 00:05:10.409006 systemd-logind[1905]: Watching system buttons on /dev/input/event0 (Power Button) Sep 13 00:05:10.409080 systemd-logind[1905]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 13 00:05:10.413581 systemd-logind[1905]: New seat seat0. Sep 13 00:05:10.435518 systemd[1]: Started systemd-logind.service. Sep 13 00:05:10.561429 env[1922]: time="2025-09-13T00:05:10.561334355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:05:10.561735 env[1922]: time="2025-09-13T00:05:10.561675215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:10.568516 env[1922]: time="2025-09-13T00:05:10.568125598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:05:10.568677 env[1922]: time="2025-09-13T00:05:10.568509162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:10.569260 env[1922]: time="2025-09-13T00:05:10.569177593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:05:10.569260 env[1922]: time="2025-09-13T00:05:10.569245867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:10.577153 env[1922]: time="2025-09-13T00:05:10.569284316Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:05:10.577337 env[1922]: time="2025-09-13T00:05:10.577161163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:10.577677 env[1922]: time="2025-09-13T00:05:10.577612478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:10.587490 env[1922]: time="2025-09-13T00:05:10.578421663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:10.594680 env[1922]: time="2025-09-13T00:05:10.588170191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:05:10.594884 env[1922]: time="2025-09-13T00:05:10.594668243Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:05:10.595080 env[1922]: time="2025-09-13T00:05:10.595001501Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:05:10.595165 env[1922]: time="2025-09-13T00:05:10.595080923Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:05:10.611247 env[1922]: time="2025-09-13T00:05:10.611133930Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:05:10.611463 env[1922]: time="2025-09-13T00:05:10.611278405Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:05:10.611463 env[1922]: time="2025-09-13T00:05:10.611379801Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:05:10.611739 env[1922]: time="2025-09-13T00:05:10.611527809Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:05:10.611818 env[1922]: time="2025-09-13T00:05:10.611745659Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:05:10.611879 env[1922]: time="2025-09-13T00:05:10.611804178Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:05:10.611879 env[1922]: time="2025-09-13T00:05:10.611848039Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Sep 13 00:05:10.612588 env[1922]: time="2025-09-13T00:05:10.612518610Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:05:10.612685 env[1922]: time="2025-09-13T00:05:10.612604877Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:05:10.612746 env[1922]: time="2025-09-13T00:05:10.612667004Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:05:10.612805 env[1922]: time="2025-09-13T00:05:10.612734544Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:05:10.612805 env[1922]: time="2025-09-13T00:05:10.612782922Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:05:10.613228 env[1922]: time="2025-09-13T00:05:10.613156880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:05:10.613597 env[1922]: time="2025-09-13T00:05:10.613536201Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:05:10.614601 env[1922]: time="2025-09-13T00:05:10.614542630Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:05:10.614691 env[1922]: time="2025-09-13T00:05:10.614636026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:05:10.614753 env[1922]: time="2025-09-13T00:05:10.614694059Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:05:10.615054 env[1922]: time="2025-09-13T00:05:10.614996222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Sep 13 00:05:10.615128 env[1922]: time="2025-09-13T00:05:10.615073169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:05:10.615188 env[1922]: time="2025-09-13T00:05:10.615137026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:05:10.615274 env[1922]: time="2025-09-13T00:05:10.615172724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:05:10.615274 env[1922]: time="2025-09-13T00:05:10.615240202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:05:10.628731 env[1922]: time="2025-09-13T00:05:10.628644619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:05:10.628905 env[1922]: time="2025-09-13T00:05:10.628784676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:05:10.628905 env[1922]: time="2025-09-13T00:05:10.628821930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:05:10.628905 env[1922]: time="2025-09-13T00:05:10.628863253Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:05:10.629325 env[1922]: time="2025-09-13T00:05:10.629249331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:05:10.629442 env[1922]: time="2025-09-13T00:05:10.629387048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:05:10.629535 env[1922]: time="2025-09-13T00:05:10.629457587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 13 00:05:10.629535 env[1922]: time="2025-09-13T00:05:10.629515372Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:05:10.629648 env[1922]: time="2025-09-13T00:05:10.629563326Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:05:10.629648 env[1922]: time="2025-09-13T00:05:10.629618871Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:05:10.629756 env[1922]: time="2025-09-13T00:05:10.629691028Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:05:10.629856 env[1922]: time="2025-09-13T00:05:10.629801209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:05:10.642914 env[1922]: time="2025-09-13T00:05:10.642702922Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:05:10.645018 env[1922]: time="2025-09-13T00:05:10.642929943Z" level=info msg="Connect containerd service" Sep 13 00:05:10.645018 env[1922]: time="2025-09-13T00:05:10.643070447Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:05:10.645403 env[1922]: time="2025-09-13T00:05:10.645265631Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:05:10.654361 env[1922]: time="2025-09-13T00:05:10.654227210Z" level=info msg="Start subscribing containerd event" Sep 13 00:05:10.654538 env[1922]: time="2025-09-13T00:05:10.654378689Z" level=info msg="Start recovering state" Sep 13 00:05:10.654885 env[1922]: 
time="2025-09-13T00:05:10.654802556Z" level=info msg="Start event monitor" Sep 13 00:05:10.655008 env[1922]: time="2025-09-13T00:05:10.654929996Z" level=info msg="Start snapshots syncer" Sep 13 00:05:10.655008 env[1922]: time="2025-09-13T00:05:10.654978100Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:05:10.655122 env[1922]: time="2025-09-13T00:05:10.655023379Z" level=info msg="Start streaming server" Sep 13 00:05:10.660635 env[1922]: time="2025-09-13T00:05:10.660467122Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:05:10.660822 env[1922]: time="2025-09-13T00:05:10.660757452Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:05:10.661185 systemd[1]: Started containerd.service. Sep 13 00:05:10.701832 env[1922]: time="2025-09-13T00:05:10.701676016Z" level=info msg="containerd successfully booted in 0.464062s" Sep 13 00:05:10.820832 dbus-daemon[1891]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 13 00:05:10.821952 systemd[1]: Started systemd-hostnamed.service. Sep 13 00:05:10.828386 dbus-daemon[1891]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1958 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 13 00:05:10.837732 systemd[1]: Starting polkit.service... Sep 13 00:05:10.912581 polkitd[2004]: Started polkitd version 121 Sep 13 00:05:11.005742 polkitd[2004]: Loading rules from directory /etc/polkit-1/rules.d Sep 13 00:05:11.014581 polkitd[2004]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 13 00:05:11.025571 polkitd[2004]: Finished loading, compiling and executing 2 rules Sep 13 00:05:11.033175 dbus-daemon[1891]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 00:05:11.033634 systemd[1]: Started polkit.service. 
Sep 13 00:05:11.045610 polkitd[2004]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 13 00:05:11.108653 systemd-hostnamed[1958]: Hostname set to (transient) Sep 13 00:05:11.108881 systemd-resolved[1856]: System hostname changed to 'ip-172-31-24-134'. Sep 13 00:05:11.239447 coreos-metadata[1890]: Sep 13 00:05:11.235 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 13 00:05:11.240622 coreos-metadata[1890]: Sep 13 00:05:11.240 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Sep 13 00:05:11.240986 coreos-metadata[1890]: Sep 13 00:05:11.240 INFO Fetch successful Sep 13 00:05:11.243137 coreos-metadata[1890]: Sep 13 00:05:11.241 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 13 00:05:11.248996 coreos-metadata[1890]: Sep 13 00:05:11.243 INFO Fetch successful Sep 13 00:05:11.261634 unknown[1890]: wrote ssh authorized keys file for user: core Sep 13 00:05:11.299784 update-ssh-keys[2072]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:05:11.302729 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Sep 13 00:05:11.327840 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO Create new startup processor Sep 13 00:05:11.338383 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [LongRunningPluginsManager] registered plugins: {} Sep 13 00:05:11.338534 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO Initializing bookkeeping folders Sep 13 00:05:11.338534 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO removing the completed state files Sep 13 00:05:11.338534 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO Initializing bookkeeping folders for long running plugins Sep 13 00:05:11.338694 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Sep 13 00:05:11.338694 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO Initializing healthcheck folders for long running plugins Sep 13 00:05:11.338694 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO Initializing locations for inventory plugin Sep 13 00:05:11.338694 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO Initializing default location for custom inventory Sep 13 00:05:11.339032 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO Initializing default location for file inventory Sep 13 00:05:11.339032 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO Initializing default location for role inventory Sep 13 00:05:11.339032 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO Init the cloudwatchlogs publisher Sep 13 00:05:11.339032 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [instanceID=i-031e33d35a88880b0] Successfully loaded platform independent plugin aws:updateSsmAgent Sep 13 00:05:11.339032 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [instanceID=i-031e33d35a88880b0] Successfully loaded platform independent plugin aws:refreshAssociation Sep 13 00:05:11.339032 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [instanceID=i-031e33d35a88880b0] Successfully loaded platform independent plugin aws:configurePackage Sep 13 00:05:11.339032 
amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [instanceID=i-031e33d35a88880b0] Successfully loaded platform independent plugin aws:downloadContent Sep 13 00:05:11.339032 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [instanceID=i-031e33d35a88880b0] Successfully loaded platform independent plugin aws:runDocument Sep 13 00:05:11.339032 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [instanceID=i-031e33d35a88880b0] Successfully loaded platform independent plugin aws:softwareInventory Sep 13 00:05:11.339532 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [instanceID=i-031e33d35a88880b0] Successfully loaded platform independent plugin aws:runPowerShellScript Sep 13 00:05:11.339532 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [instanceID=i-031e33d35a88880b0] Successfully loaded platform independent plugin aws:configureDocker Sep 13 00:05:11.339532 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [instanceID=i-031e33d35a88880b0] Successfully loaded platform independent plugin aws:runDockerAction Sep 13 00:05:11.339532 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [instanceID=i-031e33d35a88880b0] Successfully loaded platform dependent plugin aws:runShellScript Sep 13 00:05:11.339532 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Sep 13 00:05:11.339532 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO OS: linux, Arch: arm64 Sep 13 00:05:11.341656 amazon-ssm-agent[1887]: datastore file /var/lib/amazon/ssm/i-031e33d35a88880b0/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Sep 13 00:05:11.427444 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessageGatewayService] Starting session document processing engine... 
Sep 13 00:05:11.522673 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessageGatewayService] [EngineProcessor] Starting
Sep 13 00:05:11.617722 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Sep 13 00:05:11.711556 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-031e33d35a88880b0, requestId: 950fa643-a44a-4d58-a46f-df889ae21830
Sep 13 00:05:11.806458 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessagingDeliveryService] Starting document processing engine...
Sep 13 00:05:11.901471 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Sep 13 00:05:11.997493 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Sep 13 00:05:12.092183 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessagingDeliveryService] Starting message polling
Sep 13 00:05:12.189365 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessagingDeliveryService] Starting send replies to MDS
Sep 13 00:05:12.245914 tar[1916]: linux-arm64/LICENSE
Sep 13 00:05:12.247644 tar[1916]: linux-arm64/README.md
Sep 13 00:05:12.264336 systemd[1]: Finished prepare-helm.service.
Sep 13 00:05:12.283858 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [instanceID=i-031e33d35a88880b0] Starting association polling
Sep 13 00:05:12.379592 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Sep 13 00:05:12.476045 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessagingDeliveryService] [Association] Launching response handler
Sep 13 00:05:12.497555 locksmithd[1972]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:05:12.572058 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Sep 13 00:05:12.668681 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Sep 13 00:05:12.765371 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Sep 13 00:05:12.837831 systemd[1]: Started kubelet.service.
Sep 13 00:05:12.862459 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessageGatewayService] listening reply.
Sep 13 00:05:12.959742 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [HealthCheck] HealthCheck reporting agent health.
Sep 13 00:05:13.056923 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [OfflineService] Starting document processing engine...
Sep 13 00:05:13.154563 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [OfflineService] [EngineProcessor] Starting
Sep 13 00:05:13.252359 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [OfflineService] [EngineProcessor] Initial processing
Sep 13 00:05:13.350252 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [OfflineService] Starting message polling
Sep 13 00:05:13.448409 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [OfflineService] Starting send replies to MDS
Sep 13 00:05:13.546899 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [LongRunningPluginsManager] starting long running plugin manager
Sep 13 00:05:13.645514 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Sep 13 00:05:13.744174 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Sep 13 00:05:13.843129 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [StartupProcessor] Executing startup processor tasks
Sep 13 00:05:13.885375 kubelet[2117]: E0913 00:05:13.885260 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:05:13.889262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:05:13.889722 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:05:13.942442 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Sep 13 00:05:14.041821 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Sep 13 00:05:14.141668 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8
Sep 13 00:05:14.241536 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-031e33d35a88880b0?role=subscribe&stream=input
Sep 13 00:05:14.341588 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-031e33d35a88880b0?role=subscribe&stream=input
Sep 13 00:05:14.441724 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessageGatewayService] Starting receiving message from control channel
Sep 13 00:05:14.541926 amazon-ssm-agent[1887]: 2025-09-13 00:05:11 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Sep 13 00:05:15.312271 sshd_keygen[1933]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:05:15.349650 systemd[1]: Finished sshd-keygen.service.
Sep 13 00:05:15.355953 systemd[1]: Starting issuegen.service...
Sep 13 00:05:15.369794 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:05:15.370391 systemd[1]: Finished issuegen.service.
Sep 13 00:05:15.376421 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 00:05:15.394051 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 00:05:15.402134 systemd[1]: Started getty@tty1.service.
Sep 13 00:05:15.407756 systemd[1]: Started serial-getty@ttyS0.service.
Sep 13 00:05:15.410585 systemd[1]: Reached target getty.target.
Sep 13 00:05:15.412886 systemd[1]: Reached target multi-user.target.
Sep 13 00:05:15.418375 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 00:05:15.434611 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 00:05:15.435400 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 00:05:15.439349 systemd[1]: Startup finished in 10.836s (kernel) + 15.190s (userspace) = 26.026s.
Sep 13 00:05:18.599096 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:05:18.601590 systemd[1]: Started sshd@0-172.31.24.134:22-139.178.89.65:50978.service.
Sep 13 00:05:18.930384 sshd[2142]: Accepted publickey for core from 139.178.89.65 port 50978 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:05:18.935752 sshd[2142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:05:18.957589 systemd[1]: Created slice user-500.slice.
Sep 13 00:05:18.959973 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 00:05:18.970410 systemd-logind[1905]: New session 1 of user core.
Sep 13 00:05:18.985658 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 00:05:18.990139 systemd[1]: Starting user@500.service...
Sep 13 00:05:19.002806 (systemd)[2147]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:05:19.209066 systemd[2147]: Queued start job for default target default.target.
Sep 13 00:05:19.210686 systemd[2147]: Reached target paths.target.
Sep 13 00:05:19.210929 systemd[2147]: Reached target sockets.target.
Sep 13 00:05:19.211071 systemd[2147]: Reached target timers.target.
Sep 13 00:05:19.211205 systemd[2147]: Reached target basic.target.
Sep 13 00:05:19.211451 systemd[2147]: Reached target default.target.
Sep 13 00:05:19.211600 systemd[1]: Started user@500.service.
Sep 13 00:05:19.212994 systemd[2147]: Startup finished in 197ms.
Sep 13 00:05:19.213466 systemd[1]: Started session-1.scope.
Sep 13 00:05:19.361130 systemd[1]: Started sshd@1-172.31.24.134:22-139.178.89.65:50992.service.
Sep 13 00:05:19.543303 sshd[2156]: Accepted publickey for core from 139.178.89.65 port 50992 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:05:19.545948 sshd[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:05:19.555361 systemd[1]: Started session-2.scope.
Sep 13 00:05:19.555363 systemd-logind[1905]: New session 2 of user core.
Sep 13 00:05:19.690784 sshd[2156]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:19.695776 systemd-logind[1905]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:05:19.696877 systemd[1]: sshd@1-172.31.24.134:22-139.178.89.65:50992.service: Deactivated successfully.
Sep 13 00:05:19.698954 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:05:19.700008 systemd-logind[1905]: Removed session 2.
Sep 13 00:05:19.714995 systemd[1]: Started sshd@2-172.31.24.134:22-139.178.89.65:51002.service.
Sep 13 00:05:19.886623 sshd[2163]: Accepted publickey for core from 139.178.89.65 port 51002 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:05:19.889246 sshd[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:05:19.898219 systemd[1]: Started session-3.scope.
Sep 13 00:05:19.900682 systemd-logind[1905]: New session 3 of user core.
Sep 13 00:05:20.022226 sshd[2163]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:20.026796 systemd[1]: sshd@2-172.31.24.134:22-139.178.89.65:51002.service: Deactivated successfully.
Sep 13 00:05:20.028134 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:05:20.030378 systemd-logind[1905]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:05:20.032443 systemd-logind[1905]: Removed session 3.
Sep 13 00:05:20.048429 systemd[1]: Started sshd@3-172.31.24.134:22-139.178.89.65:51422.service.
Sep 13 00:05:20.228916 sshd[2170]: Accepted publickey for core from 139.178.89.65 port 51422 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:05:20.231255 sshd[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:05:20.241609 systemd[1]: Started session-4.scope.
Sep 13 00:05:20.242354 systemd-logind[1905]: New session 4 of user core.
Sep 13 00:05:20.377026 sshd[2170]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:20.382104 systemd-logind[1905]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:05:20.383758 systemd[1]: sshd@3-172.31.24.134:22-139.178.89.65:51422.service: Deactivated successfully.
Sep 13 00:05:20.385212 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:05:20.387677 systemd-logind[1905]: Removed session 4.
Sep 13 00:05:20.403145 systemd[1]: Started sshd@4-172.31.24.134:22-139.178.89.65:51434.service.
Sep 13 00:05:20.581893 sshd[2177]: Accepted publickey for core from 139.178.89.65 port 51434 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:05:20.585043 sshd[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:05:20.594218 systemd[1]: Started session-5.scope.
Sep 13 00:05:20.594854 systemd-logind[1905]: New session 5 of user core.
Sep 13 00:05:20.781456 sudo[2181]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:05:20.782546 sudo[2181]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:05:20.866896 systemd[1]: Starting docker.service...
Sep 13 00:05:20.999217 env[2191]: time="2025-09-13T00:05:20.999148120Z" level=info msg="Starting up"
Sep 13 00:05:21.004649 env[2191]: time="2025-09-13T00:05:21.004600707Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:05:21.004826 env[2191]: time="2025-09-13T00:05:21.004796601Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:05:21.004989 env[2191]: time="2025-09-13T00:05:21.004956504Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:05:21.005097 env[2191]: time="2025-09-13T00:05:21.005069863Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:05:21.009598 env[2191]: time="2025-09-13T00:05:21.009531656Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:05:21.009598 env[2191]: time="2025-09-13T00:05:21.009577837Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:05:21.009798 env[2191]: time="2025-09-13T00:05:21.009612376Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:05:21.009798 env[2191]: time="2025-09-13T00:05:21.009634039Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:05:21.023505 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3078991096-merged.mount: Deactivated successfully.
Sep 13 00:05:21.286348 env[2191]: time="2025-09-13T00:05:21.286250429Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 13 00:05:21.286348 env[2191]: time="2025-09-13T00:05:21.286333279Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 13 00:05:21.286658 env[2191]: time="2025-09-13T00:05:21.286628942Z" level=info msg="Loading containers: start."
Sep 13 00:05:21.566308 kernel: Initializing XFRM netlink socket
Sep 13 00:05:21.638998 env[2191]: time="2025-09-13T00:05:21.638930847Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 00:05:21.642147 (udev-worker)[2201]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:05:21.784238 systemd-networkd[1591]: docker0: Link UP
Sep 13 00:05:21.810931 env[2191]: time="2025-09-13T00:05:21.810885420Z" level=info msg="Loading containers: done."
Sep 13 00:05:21.845072 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck977723018-merged.mount: Deactivated successfully.
Sep 13 00:05:21.857027 env[2191]: time="2025-09-13T00:05:21.856970702Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:05:21.857727 env[2191]: time="2025-09-13T00:05:21.857693854Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 00:05:21.858089 env[2191]: time="2025-09-13T00:05:21.858061765Z" level=info msg="Daemon has completed initialization"
Sep 13 00:05:21.884795 systemd[1]: Started docker.service.
Sep 13 00:05:21.902790 env[2191]: time="2025-09-13T00:05:21.902197873Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:05:23.059749 env[1922]: time="2025-09-13T00:05:23.059686800Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:05:23.674204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530310076.mount: Deactivated successfully.
Sep 13 00:05:24.026871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:05:24.027334 systemd[1]: Stopped kubelet.service.
Sep 13 00:05:24.031332 systemd[1]: Starting kubelet.service...
Sep 13 00:05:24.500805 systemd[1]: Started kubelet.service.
Sep 13 00:05:24.621555 kubelet[2322]: E0913 00:05:24.621492 2322 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:05:24.628836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:05:24.629258 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:05:25.702437 env[1922]: time="2025-09-13T00:05:25.702360822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:25.707211 env[1922]: time="2025-09-13T00:05:25.707152984Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:25.712221 env[1922]: time="2025-09-13T00:05:25.712165345Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:25.716834 env[1922]: time="2025-09-13T00:05:25.716774801Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:25.719509 env[1922]: time="2025-09-13T00:05:25.719434726Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 13 00:05:25.722464 env[1922]: time="2025-09-13T00:05:25.722393057Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:05:27.608018 env[1922]: time="2025-09-13T00:05:27.607945032Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:27.610819 env[1922]: time="2025-09-13T00:05:27.610751018Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:27.614453 env[1922]: time="2025-09-13T00:05:27.614391269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:27.618021 env[1922]: time="2025-09-13T00:05:27.617948283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:27.620121 env[1922]: time="2025-09-13T00:05:27.620039907Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 13 00:05:27.620958 env[1922]: time="2025-09-13T00:05:27.620912706Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:05:29.174234 env[1922]: time="2025-09-13T00:05:29.174177372Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:29.177404 env[1922]: time="2025-09-13T00:05:29.177352646Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:29.180789 env[1922]: time="2025-09-13T00:05:29.180724091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:29.184323 env[1922]: time="2025-09-13T00:05:29.184238319Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:29.186184 env[1922]: time="2025-09-13T00:05:29.186134763Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 13 00:05:29.186998 env[1922]: time="2025-09-13T00:05:29.186951701Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:05:30.518431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3510051689.mount: Deactivated successfully.
Sep 13 00:05:31.366789 amazon-ssm-agent[1887]: 2025-09-13 00:05:31 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Sep 13 00:05:31.484194 env[1922]: time="2025-09-13T00:05:31.484131377Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:31.506844 env[1922]: time="2025-09-13T00:05:31.506745478Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:31.514364 env[1922]: time="2025-09-13T00:05:31.514298549Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:31.519632 env[1922]: time="2025-09-13T00:05:31.519562712Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:31.522063 env[1922]: time="2025-09-13T00:05:31.521993753Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 13 00:05:31.523124 env[1922]: time="2025-09-13T00:05:31.523038198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:05:31.990305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3515181998.mount: Deactivated successfully.
Sep 13 00:05:33.617752 env[1922]: time="2025-09-13T00:05:33.617654375Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:33.671189 env[1922]: time="2025-09-13T00:05:33.671112476Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:33.734456 env[1922]: time="2025-09-13T00:05:33.734399431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:33.809444 env[1922]: time="2025-09-13T00:05:33.809333453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:33.811756 env[1922]: time="2025-09-13T00:05:33.811671675Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 13 00:05:33.812619 env[1922]: time="2025-09-13T00:05:33.812560365Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:05:34.635927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2584997299.mount: Deactivated successfully.
Sep 13 00:05:34.638123 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 00:05:34.638491 systemd[1]: Stopped kubelet.service.
Sep 13 00:05:34.641349 systemd[1]: Starting kubelet.service...
Sep 13 00:05:34.659221 env[1922]: time="2025-09-13T00:05:34.659143319Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:34.668535 env[1922]: time="2025-09-13T00:05:34.668461473Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:34.674245 env[1922]: time="2025-09-13T00:05:34.674168890Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:34.678804 env[1922]: time="2025-09-13T00:05:34.678731871Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:34.681479 env[1922]: time="2025-09-13T00:05:34.680227892Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 13 00:05:34.682170 env[1922]: time="2025-09-13T00:05:34.682115410Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:05:35.093741 systemd[1]: Started kubelet.service.
Sep 13 00:05:35.189676 kubelet[2336]: E0913 00:05:35.189509 2336 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:05:35.196380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:05:35.196803 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:05:35.281634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1245288082.mount: Deactivated successfully.
Sep 13 00:05:38.387310 env[1922]: time="2025-09-13T00:05:38.387228466Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:38.390583 env[1922]: time="2025-09-13T00:05:38.390497925Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:38.400820 env[1922]: time="2025-09-13T00:05:38.400745155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:38.403594 env[1922]: time="2025-09-13T00:05:38.403542708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:05:38.405474 env[1922]: time="2025-09-13T00:05:38.405422849Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 13 00:05:41.141971 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 13 00:05:45.276686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 13 00:05:45.277012 systemd[1]: Stopped kubelet.service.
Sep 13 00:05:45.281191 systemd[1]: Starting kubelet.service...
Sep 13 00:05:45.591637 systemd[1]: Started kubelet.service.
Sep 13 00:05:45.685434 kubelet[2373]: E0913 00:05:45.685365 2373 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:05:45.688856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:05:45.689239 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:05:46.459172 systemd[1]: Stopped kubelet.service.
Sep 13 00:05:46.466356 systemd[1]: Starting kubelet.service...
Sep 13 00:05:46.525163 systemd[1]: Reloading.
Sep 13 00:05:46.713207 /usr/lib/systemd/system-generators/torcx-generator[2407]: time="2025-09-13T00:05:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:05:46.713395 /usr/lib/systemd/system-generators/torcx-generator[2407]: time="2025-09-13T00:05:46Z" level=info msg="torcx already run"
Sep 13 00:05:46.924869 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:05:46.924908 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:05:46.967578 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:05:47.176862 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 00:05:47.177088 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 00:05:47.177782 systemd[1]: Stopped kubelet.service.
Sep 13 00:05:47.181464 systemd[1]: Starting kubelet.service...
Sep 13 00:05:47.482152 systemd[1]: Started kubelet.service.
Sep 13 00:05:47.579844 kubelet[2482]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:05:47.580480 kubelet[2482]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:05:47.580594 kubelet[2482]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:05:47.580881 kubelet[2482]: I0913 00:05:47.580836 2482 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:05:49.464958 kubelet[2482]: I0913 00:05:49.464895 2482 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:05:49.464958 kubelet[2482]: I0913 00:05:49.464952 2482 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:05:49.465686 kubelet[2482]: I0913 00:05:49.465429 2482 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:05:49.548821 kubelet[2482]: E0913 00:05:49.548750 2482 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.24.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.134:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:05:49.555800 kubelet[2482]: I0913 00:05:49.555751 2482 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:05:49.569478 kubelet[2482]: E0913 00:05:49.569406 2482 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:05:49.569478 kubelet[2482]: I0913 00:05:49.569465 2482 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:05:49.576472 kubelet[2482]: I0913 00:05:49.576434 2482 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:05:49.577636 kubelet[2482]: I0913 00:05:49.577608 2482 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:05:49.578067 kubelet[2482]: I0913 00:05:49.578028 2482 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:05:49.578491 kubelet[2482]: I0913 00:05:49.578169 2482 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-134","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManage
rPolicyOptions":null,"CgroupVersion":1} Sep 13 00:05:49.578862 kubelet[2482]: I0913 00:05:49.578837 2482 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:05:49.578975 kubelet[2482]: I0913 00:05:49.578955 2482 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:05:49.579393 kubelet[2482]: I0913 00:05:49.579371 2482 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:05:49.588054 kubelet[2482]: W0913 00:05:49.587976 2482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-134&limit=500&resourceVersion=0": dial tcp 172.31.24.134:6443: connect: connection refused Sep 13 00:05:49.588198 kubelet[2482]: E0913 00:05:49.588076 2482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-134&limit=500&resourceVersion=0\": dial tcp 172.31.24.134:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:49.589068 kubelet[2482]: I0913 00:05:49.589043 2482 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:05:49.589217 kubelet[2482]: I0913 00:05:49.589196 2482 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:05:49.589371 kubelet[2482]: I0913 00:05:49.589351 2482 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:05:49.589647 kubelet[2482]: I0913 00:05:49.589626 2482 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:05:49.598155 kubelet[2482]: I0913 00:05:49.598048 2482 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:05:49.599405 kubelet[2482]: I0913 00:05:49.599362 2482 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are 
in static kubelet mode" Sep 13 00:05:49.599743 kubelet[2482]: W0913 00:05:49.599710 2482 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:05:49.602051 kubelet[2482]: W0913 00:05:49.601941 2482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.134:6443: connect: connection refused Sep 13 00:05:49.602234 kubelet[2482]: E0913 00:05:49.602063 2482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.134:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:49.603309 kubelet[2482]: I0913 00:05:49.603214 2482 server.go:1274] "Started kubelet" Sep 13 00:05:49.608854 kubelet[2482]: I0913 00:05:49.608797 2482 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:05:49.612426 kubelet[2482]: I0913 00:05:49.612321 2482 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:05:49.614102 kubelet[2482]: I0913 00:05:49.614063 2482 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:05:49.614593 kubelet[2482]: I0913 00:05:49.614544 2482 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:05:49.622672 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 13 00:05:49.623144 kubelet[2482]: I0913 00:05:49.623115 2482 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:05:49.627494 kubelet[2482]: E0913 00:05:49.624907 2482 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.134:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.134:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-134.1864aecce3fea8d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-134,UID:ip-172-31-24-134,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-134,},FirstTimestamp:2025-09-13 00:05:49.603014871 +0000 UTC m=+2.101054961,LastTimestamp:2025-09-13 00:05:49.603014871 +0000 UTC m=+2.101054961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-134,}" Sep 13 00:05:49.628804 kubelet[2482]: I0913 00:05:49.628768 2482 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:05:49.634443 kubelet[2482]: I0913 00:05:49.634411 2482 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:05:49.634824 kubelet[2482]: I0913 00:05:49.634798 2482 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:05:49.635027 kubelet[2482]: I0913 00:05:49.635009 2482 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:05:49.636344 kubelet[2482]: W0913 00:05:49.636239 2482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.134:6443: connect: connection refused Sep 13 00:05:49.636573 kubelet[2482]: 
E0913 00:05:49.636539 2482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.134:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:49.637014 kubelet[2482]: I0913 00:05:49.636985 2482 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:05:49.637480 kubelet[2482]: I0913 00:05:49.637416 2482 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:05:49.638900 kubelet[2482]: E0913 00:05:49.637451 2482 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-24-134\" not found" Sep 13 00:05:49.639106 kubelet[2482]: E0913 00:05:49.638199 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-134?timeout=10s\": dial tcp 172.31.24.134:6443: connect: connection refused" interval="200ms" Sep 13 00:05:49.640214 kubelet[2482]: E0913 00:05:49.640160 2482 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:05:49.642197 kubelet[2482]: I0913 00:05:49.642150 2482 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:05:49.682061 kubelet[2482]: I0913 00:05:49.681981 2482 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:05:49.684092 kubelet[2482]: I0913 00:05:49.684032 2482 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:05:49.684092 kubelet[2482]: I0913 00:05:49.684084 2482 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:05:49.684431 kubelet[2482]: I0913 00:05:49.684119 2482 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:05:49.684431 kubelet[2482]: E0913 00:05:49.684196 2482 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:05:49.691015 kubelet[2482]: W0913 00:05:49.690944 2482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.134:6443: connect: connection refused Sep 13 00:05:49.691238 kubelet[2482]: E0913 00:05:49.691206 2482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.134:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:49.700890 kubelet[2482]: I0913 00:05:49.700835 2482 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:05:49.700890 kubelet[2482]: I0913 00:05:49.700870 2482 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:05:49.701083 kubelet[2482]: I0913 00:05:49.700907 2482 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:05:49.703088 kubelet[2482]: I0913 00:05:49.703031 2482 policy_none.go:49] "None policy: Start" Sep 13 00:05:49.704628 kubelet[2482]: I0913 00:05:49.704590 2482 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:05:49.704774 kubelet[2482]: I0913 00:05:49.704640 2482 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:05:49.713989 kubelet[2482]: I0913 00:05:49.713948 2482 
manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:05:49.714474 kubelet[2482]: I0913 00:05:49.714450 2482 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:05:49.714649 kubelet[2482]: I0913 00:05:49.714598 2482 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:05:49.720923 kubelet[2482]: I0913 00:05:49.718344 2482 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:05:49.723319 kubelet[2482]: E0913 00:05:49.723282 2482 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-134\" not found" Sep 13 00:05:49.817398 kubelet[2482]: I0913 00:05:49.817360 2482 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-134" Sep 13 00:05:49.818513 kubelet[2482]: E0913 00:05:49.818473 2482 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.24.134:6443/api/v1/nodes\": dial tcp 172.31.24.134:6443: connect: connection refused" node="ip-172-31-24-134" Sep 13 00:05:49.836517 kubelet[2482]: I0913 00:05:49.836455 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e2e201242b847d41deef3b7377ea3a5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-134\" (UID: \"7e2e201242b847d41deef3b7377ea3a5\") " pod="kube-system/kube-apiserver-ip-172-31-24-134" Sep 13 00:05:49.836651 kubelet[2482]: I0913 00:05:49.836546 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a82193a56beafbe2ad62291d7808e6ed-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-134\" (UID: \"a82193a56beafbe2ad62291d7808e6ed\") " pod="kube-system/kube-scheduler-ip-172-31-24-134" Sep 
13 00:05:49.836651 kubelet[2482]: I0913 00:05:49.836616 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e2e201242b847d41deef3b7377ea3a5-ca-certs\") pod \"kube-apiserver-ip-172-31-24-134\" (UID: \"7e2e201242b847d41deef3b7377ea3a5\") " pod="kube-system/kube-apiserver-ip-172-31-24-134" Sep 13 00:05:49.836781 kubelet[2482]: I0913 00:05:49.836655 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e2e201242b847d41deef3b7377ea3a5-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-134\" (UID: \"7e2e201242b847d41deef3b7377ea3a5\") " pod="kube-system/kube-apiserver-ip-172-31-24-134" Sep 13 00:05:49.839902 kubelet[2482]: E0913 00:05:49.839857 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-134?timeout=10s\": dial tcp 172.31.24.134:6443: connect: connection refused" interval="400ms" Sep 13 00:05:49.937629 kubelet[2482]: I0913 00:05:49.937563 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fae5b7d9d96934c9b78712f8443c095c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-134\" (UID: \"fae5b7d9d96934c9b78712f8443c095c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-134" Sep 13 00:05:49.937854 kubelet[2482]: I0913 00:05:49.937786 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fae5b7d9d96934c9b78712f8443c095c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-134\" (UID: \"fae5b7d9d96934c9b78712f8443c095c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-134" Sep 13 00:05:49.937928 kubelet[2482]: 
I0913 00:05:49.937912 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fae5b7d9d96934c9b78712f8443c095c-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-134\" (UID: \"fae5b7d9d96934c9b78712f8443c095c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-134" Sep 13 00:05:49.937991 kubelet[2482]: I0913 00:05:49.937953 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fae5b7d9d96934c9b78712f8443c095c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-134\" (UID: \"fae5b7d9d96934c9b78712f8443c095c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-134" Sep 13 00:05:49.938101 kubelet[2482]: I0913 00:05:49.938042 2482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fae5b7d9d96934c9b78712f8443c095c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-134\" (UID: \"fae5b7d9d96934c9b78712f8443c095c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-134" Sep 13 00:05:50.022333 kubelet[2482]: I0913 00:05:50.021464 2482 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-134" Sep 13 00:05:50.023244 kubelet[2482]: E0913 00:05:50.023151 2482 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.24.134:6443/api/v1/nodes\": dial tcp 172.31.24.134:6443: connect: connection refused" node="ip-172-31-24-134" Sep 13 00:05:50.096860 env[1922]: time="2025-09-13T00:05:50.096493484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-134,Uid:a82193a56beafbe2ad62291d7808e6ed,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:50.101154 env[1922]: time="2025-09-13T00:05:50.100807876Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-134,Uid:7e2e201242b847d41deef3b7377ea3a5,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:50.115036 env[1922]: time="2025-09-13T00:05:50.114940526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-134,Uid:fae5b7d9d96934c9b78712f8443c095c,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:50.241519 kubelet[2482]: E0913 00:05:50.241454 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-134?timeout=10s\": dial tcp 172.31.24.134:6443: connect: connection refused" interval="800ms" Sep 13 00:05:50.427195 kubelet[2482]: I0913 00:05:50.426699 2482 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-134" Sep 13 00:05:50.427195 kubelet[2482]: E0913 00:05:50.427147 2482 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.24.134:6443/api/v1/nodes\": dial tcp 172.31.24.134:6443: connect: connection refused" node="ip-172-31-24-134" Sep 13 00:05:50.442511 kubelet[2482]: W0913 00:05:50.442370 2482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-134&limit=500&resourceVersion=0": dial tcp 172.31.24.134:6443: connect: connection refused Sep 13 00:05:50.442511 kubelet[2482]: E0913 00:05:50.442460 2482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-134&limit=500&resourceVersion=0\": dial tcp 172.31.24.134:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:50.565635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1445396361.mount: Deactivated successfully. 
Sep 13 00:05:50.579145 env[1922]: time="2025-09-13T00:05:50.579065729Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:50.588803 env[1922]: time="2025-09-13T00:05:50.588737980Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:50.591583 env[1922]: time="2025-09-13T00:05:50.591538553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:50.605043 env[1922]: time="2025-09-13T00:05:50.604964061Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:50.608920 env[1922]: time="2025-09-13T00:05:50.608849243Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:50.611220 env[1922]: time="2025-09-13T00:05:50.611153676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:50.613806 env[1922]: time="2025-09-13T00:05:50.613743762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:50.617744 env[1922]: time="2025-09-13T00:05:50.617671022Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:50.619442 env[1922]: time="2025-09-13T00:05:50.619378557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:50.628825 env[1922]: time="2025-09-13T00:05:50.628750979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:50.630314 env[1922]: time="2025-09-13T00:05:50.630226742Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:50.645956 env[1922]: time="2025-09-13T00:05:50.645897786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:50.690376 env[1922]: time="2025-09-13T00:05:50.688506920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:50.690376 env[1922]: time="2025-09-13T00:05:50.688655125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:50.690376 env[1922]: time="2025-09-13T00:05:50.688684069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:50.690376 env[1922]: time="2025-09-13T00:05:50.689172933Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/85e1d56bd08950c15d2417f026a04e06b93e16c7d1e4a3799bb6a84270902858 pid=2522 runtime=io.containerd.runc.v2 Sep 13 00:05:50.715781 env[1922]: time="2025-09-13T00:05:50.715623876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:50.716123 env[1922]: time="2025-09-13T00:05:50.716065379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:50.716350 env[1922]: time="2025-09-13T00:05:50.716244028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:50.717082 env[1922]: time="2025-09-13T00:05:50.717001388Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9de39b785988e079e548eef407c1284952d1719110cc8799c794f18e0bbe20a5 pid=2547 runtime=io.containerd.runc.v2 Sep 13 00:05:50.761638 env[1922]: time="2025-09-13T00:05:50.761462360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:50.761998 env[1922]: time="2025-09-13T00:05:50.761916229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:50.762236 env[1922]: time="2025-09-13T00:05:50.762166789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:50.762766 env[1922]: time="2025-09-13T00:05:50.762682233Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5e05bc054e7ecc909a66acd9b1ae6c3772705b21e7b5d9a3fc382ae69733ab7 pid=2570 runtime=io.containerd.runc.v2 Sep 13 00:05:50.860494 env[1922]: time="2025-09-13T00:05:50.859035286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-134,Uid:7e2e201242b847d41deef3b7377ea3a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"85e1d56bd08950c15d2417f026a04e06b93e16c7d1e4a3799bb6a84270902858\"" Sep 13 00:05:50.865755 env[1922]: time="2025-09-13T00:05:50.865698375Z" level=info msg="CreateContainer within sandbox \"85e1d56bd08950c15d2417f026a04e06b93e16c7d1e4a3799bb6a84270902858\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:05:50.895861 env[1922]: time="2025-09-13T00:05:50.895779569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-134,Uid:fae5b7d9d96934c9b78712f8443c095c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9de39b785988e079e548eef407c1284952d1719110cc8799c794f18e0bbe20a5\"" Sep 13 00:05:50.911601 env[1922]: time="2025-09-13T00:05:50.911327008Z" level=info msg="CreateContainer within sandbox \"9de39b785988e079e548eef407c1284952d1719110cc8799c794f18e0bbe20a5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:05:50.921132 env[1922]: time="2025-09-13T00:05:50.921043703Z" level=info msg="CreateContainer within sandbox \"85e1d56bd08950c15d2417f026a04e06b93e16c7d1e4a3799bb6a84270902858\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"edb32d248eaa79e48f243acda361fd8c870d6c6e9c5c9fe5de390d9fbe8affdf\"" Sep 13 00:05:50.923184 env[1922]: time="2025-09-13T00:05:50.923130266Z" level=info msg="StartContainer for 
\"edb32d248eaa79e48f243acda361fd8c870d6c6e9c5c9fe5de390d9fbe8affdf\"" Sep 13 00:05:50.954414 env[1922]: time="2025-09-13T00:05:50.953245904Z" level=info msg="CreateContainer within sandbox \"9de39b785988e079e548eef407c1284952d1719110cc8799c794f18e0bbe20a5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f9148a3ec48381f0d1db13222c141d4b52f1cb543c6bcbb3ae55d37204320caf\"" Sep 13 00:05:50.955497 env[1922]: time="2025-09-13T00:05:50.955437477Z" level=info msg="StartContainer for \"f9148a3ec48381f0d1db13222c141d4b52f1cb543c6bcbb3ae55d37204320caf\"" Sep 13 00:05:50.972463 env[1922]: time="2025-09-13T00:05:50.972367566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-134,Uid:a82193a56beafbe2ad62291d7808e6ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5e05bc054e7ecc909a66acd9b1ae6c3772705b21e7b5d9a3fc382ae69733ab7\"" Sep 13 00:05:50.990318 env[1922]: time="2025-09-13T00:05:50.988614248Z" level=info msg="CreateContainer within sandbox \"f5e05bc054e7ecc909a66acd9b1ae6c3772705b21e7b5d9a3fc382ae69733ab7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:05:51.033457 env[1922]: time="2025-09-13T00:05:51.033393935Z" level=info msg="CreateContainer within sandbox \"f5e05bc054e7ecc909a66acd9b1ae6c3772705b21e7b5d9a3fc382ae69733ab7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"afaf537f515b2182998dbe51e9f2324b4c2b5efe383617528073a166abd99572\"" Sep 13 00:05:51.034612 env[1922]: time="2025-09-13T00:05:51.034565568Z" level=info msg="StartContainer for \"afaf537f515b2182998dbe51e9f2324b4c2b5efe383617528073a166abd99572\"" Sep 13 00:05:51.043022 kubelet[2482]: E0913 00:05:51.042918 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-134?timeout=10s\": dial tcp 172.31.24.134:6443: connect: connection refused" 
interval="1.6s" Sep 13 00:05:51.058020 kubelet[2482]: W0913 00:05:51.057876 2482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.134:6443: connect: connection refused Sep 13 00:05:51.058020 kubelet[2482]: E0913 00:05:51.057976 2482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.134:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:51.116460 env[1922]: time="2025-09-13T00:05:51.116390010Z" level=info msg="StartContainer for \"edb32d248eaa79e48f243acda361fd8c870d6c6e9c5c9fe5de390d9fbe8affdf\" returns successfully" Sep 13 00:05:51.154684 kubelet[2482]: W0913 00:05:51.154472 2482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.134:6443: connect: connection refused Sep 13 00:05:51.154684 kubelet[2482]: E0913 00:05:51.154598 2482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.134:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:51.187712 kubelet[2482]: W0913 00:05:51.187596 2482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.134:6443: connect: connection refused Sep 13 00:05:51.188125 kubelet[2482]: 
E0913 00:05:51.188083 2482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.134:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:51.214697 env[1922]: time="2025-09-13T00:05:51.214507746Z" level=info msg="StartContainer for \"f9148a3ec48381f0d1db13222c141d4b52f1cb543c6bcbb3ae55d37204320caf\" returns successfully" Sep 13 00:05:51.232697 kubelet[2482]: I0913 00:05:51.232631 2482 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-134" Sep 13 00:05:51.233665 kubelet[2482]: E0913 00:05:51.233146 2482 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.24.134:6443/api/v1/nodes\": dial tcp 172.31.24.134:6443: connect: connection refused" node="ip-172-31-24-134" Sep 13 00:05:51.324525 env[1922]: time="2025-09-13T00:05:51.324459652Z" level=info msg="StartContainer for \"afaf537f515b2182998dbe51e9f2324b4c2b5efe383617528073a166abd99572\" returns successfully" Sep 13 00:05:52.836542 kubelet[2482]: I0913 00:05:52.836509 2482 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-134" Sep 13 00:05:54.593167 kubelet[2482]: I0913 00:05:54.593124 2482 apiserver.go:52] "Watching apiserver" Sep 13 00:05:54.635902 kubelet[2482]: I0913 00:05:54.635856 2482 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:05:54.666234 kubelet[2482]: E0913 00:05:54.666182 2482 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-134\" not found" node="ip-172-31-24-134" Sep 13 00:05:54.778549 kubelet[2482]: I0913 00:05:54.778487 2482 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-24-134" Sep 13 00:05:55.420935 update_engine[1907]: I0913 
00:05:55.420356 1907 update_attempter.cc:509] Updating boot flags... Sep 13 00:05:56.996879 systemd[1]: Reloading. Sep 13 00:05:57.143798 /usr/lib/systemd/system-generators/torcx-generator[2957]: time="2025-09-13T00:05:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:05:57.143865 /usr/lib/systemd/system-generators/torcx-generator[2957]: time="2025-09-13T00:05:57Z" level=info msg="torcx already run" Sep 13 00:05:57.338785 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:05:57.338822 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:05:57.385508 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:05:57.620061 systemd[1]: Stopping kubelet.service... Sep 13 00:05:57.645347 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:05:57.646080 systemd[1]: Stopped kubelet.service. Sep 13 00:05:57.651086 systemd[1]: Starting kubelet.service... Sep 13 00:05:57.987319 systemd[1]: Started kubelet.service. Sep 13 00:05:58.125768 kubelet[3027]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:05:58.126331 kubelet[3027]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Sep 13 00:05:58.126439 kubelet[3027]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:05:58.126691 kubelet[3027]: I0913 00:05:58.126643 3027 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:05:58.139957 kubelet[3027]: I0913 00:05:58.139886 3027 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:05:58.139957 kubelet[3027]: I0913 00:05:58.139939 3027 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:05:58.144071 kubelet[3027]: I0913 00:05:58.140506 3027 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:05:58.144071 kubelet[3027]: I0913 00:05:58.143301 3027 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:05:58.143238 sudo[3040]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:05:58.143835 sudo[3040]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 00:05:58.153021 kubelet[3027]: I0913 00:05:58.152979 3027 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:05:58.167545 kubelet[3027]: E0913 00:05:58.167488 3027 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:05:58.168315 kubelet[3027]: I0913 00:05:58.168290 3027 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 13 00:05:58.173298 kubelet[3027]: I0913 00:05:58.173191 3027 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:05:58.174195 kubelet[3027]: I0913 00:05:58.174163 3027 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:05:58.174652 kubelet[3027]: I0913 00:05:58.174605 3027 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:05:58.175050 kubelet[3027]: I0913 00:05:58.174758 3027 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-134","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"Exp
erimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:05:58.175342 kubelet[3027]: I0913 00:05:58.175319 3027 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:05:58.175468 kubelet[3027]: I0913 00:05:58.175449 3027 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:05:58.175628 kubelet[3027]: I0913 00:05:58.175608 3027 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:05:58.175911 kubelet[3027]: I0913 00:05:58.175892 3027 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:05:58.177215 kubelet[3027]: I0913 00:05:58.177184 3027 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:05:58.177448 kubelet[3027]: I0913 00:05:58.177428 3027 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:05:58.184369 kubelet[3027]: I0913 00:05:58.184332 3027 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:05:58.195413 kubelet[3027]: I0913 00:05:58.195376 3027 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:05:58.196475 kubelet[3027]: I0913 00:05:58.196447 3027 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:05:58.205877 kubelet[3027]: I0913 00:05:58.205843 3027 server.go:1274] "Started kubelet" Sep 13 00:05:58.220170 kubelet[3027]: I0913 00:05:58.218671 3027 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:05:58.220170 kubelet[3027]: I0913 00:05:58.219407 3027 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:05:58.220170 kubelet[3027]: I0913 00:05:58.219580 3027 
server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:05:58.221425 kubelet[3027]: I0913 00:05:58.221391 3027 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:05:58.224302 kubelet[3027]: I0913 00:05:58.222737 3027 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:05:58.227406 kubelet[3027]: I0913 00:05:58.227353 3027 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:05:58.232335 kubelet[3027]: I0913 00:05:58.232288 3027 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:05:58.248799 kubelet[3027]: I0913 00:05:58.248669 3027 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:05:58.255415 kubelet[3027]: I0913 00:05:58.255370 3027 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:05:58.270679 kubelet[3027]: E0913 00:05:58.260309 3027 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:05:58.270679 kubelet[3027]: I0913 00:05:58.267978 3027 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:05:58.286306 kubelet[3027]: I0913 00:05:58.280737 3027 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:05:58.286306 kubelet[3027]: I0913 00:05:58.280777 3027 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:05:58.412352 kubelet[3027]: I0913 00:05:58.410743 3027 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:05:58.434981 kubelet[3027]: I0913 00:05:58.434941 3027 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:05:58.435215 kubelet[3027]: I0913 00:05:58.435194 3027 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:05:58.435440 kubelet[3027]: I0913 00:05:58.435408 3027 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:05:58.435662 kubelet[3027]: E0913 00:05:58.435627 3027 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:05:58.540367 kubelet[3027]: E0913 00:05:58.540327 3027 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:05:58.563507 kubelet[3027]: I0913 00:05:58.563461 3027 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:05:58.564315 kubelet[3027]: I0913 00:05:58.564236 3027 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:05:58.564474 kubelet[3027]: I0913 00:05:58.564453 3027 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:05:58.564912 kubelet[3027]: I0913 00:05:58.564887 3027 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:05:58.565394 kubelet[3027]: I0913 00:05:58.565332 3027 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:05:58.565546 kubelet[3027]: I0913 00:05:58.565527 3027 policy_none.go:49] "None policy: Start" Sep 13 00:05:58.570191 kubelet[3027]: I0913 00:05:58.570158 3027 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:05:58.570415 kubelet[3027]: I0913 00:05:58.570394 3027 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:05:58.570848 kubelet[3027]: I0913 00:05:58.570828 3027 state_mem.go:75] "Updated machine memory state" Sep 13 00:05:58.579101 kubelet[3027]: I0913 00:05:58.579066 3027 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:05:58.579579 kubelet[3027]: I0913 00:05:58.579556 
3027 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:05:58.579788 kubelet[3027]: I0913 00:05:58.579685 3027 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:05:58.585966 kubelet[3027]: I0913 00:05:58.585933 3027 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:05:58.704616 kubelet[3027]: I0913 00:05:58.704579 3027 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-134" Sep 13 00:05:58.720342 kubelet[3027]: I0913 00:05:58.720138 3027 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-24-134" Sep 13 00:05:58.720656 kubelet[3027]: I0913 00:05:58.720633 3027 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-24-134" Sep 13 00:05:58.774239 kubelet[3027]: E0913 00:05:58.774199 3027 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-134\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-134" Sep 13 00:05:58.788927 kubelet[3027]: I0913 00:05:58.788885 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e2e201242b847d41deef3b7377ea3a5-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-134\" (UID: \"7e2e201242b847d41deef3b7377ea3a5\") " pod="kube-system/kube-apiserver-ip-172-31-24-134" Sep 13 00:05:58.789331 kubelet[3027]: I0913 00:05:58.789223 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e2e201242b847d41deef3b7377ea3a5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-134\" (UID: \"7e2e201242b847d41deef3b7377ea3a5\") " pod="kube-system/kube-apiserver-ip-172-31-24-134" Sep 13 00:05:58.789595 kubelet[3027]: I0913 00:05:58.789520 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fae5b7d9d96934c9b78712f8443c095c-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-134\" (UID: \"fae5b7d9d96934c9b78712f8443c095c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-134" Sep 13 00:05:58.789783 kubelet[3027]: I0913 00:05:58.789745 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fae5b7d9d96934c9b78712f8443c095c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-134\" (UID: \"fae5b7d9d96934c9b78712f8443c095c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-134" Sep 13 00:05:58.790016 kubelet[3027]: I0913 00:05:58.789947 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fae5b7d9d96934c9b78712f8443c095c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-134\" (UID: \"fae5b7d9d96934c9b78712f8443c095c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-134" Sep 13 00:05:58.790195 kubelet[3027]: I0913 00:05:58.790158 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fae5b7d9d96934c9b78712f8443c095c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-134\" (UID: \"fae5b7d9d96934c9b78712f8443c095c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-134" Sep 13 00:05:58.791522 kubelet[3027]: I0913 00:05:58.790358 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fae5b7d9d96934c9b78712f8443c095c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-134\" (UID: \"fae5b7d9d96934c9b78712f8443c095c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-134" Sep 13 00:05:58.791813 
kubelet[3027]: I0913 00:05:58.791770 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e2e201242b847d41deef3b7377ea3a5-ca-certs\") pod \"kube-apiserver-ip-172-31-24-134\" (UID: \"7e2e201242b847d41deef3b7377ea3a5\") " pod="kube-system/kube-apiserver-ip-172-31-24-134" Sep 13 00:05:58.792124 kubelet[3027]: I0913 00:05:58.792098 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a82193a56beafbe2ad62291d7808e6ed-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-134\" (UID: \"a82193a56beafbe2ad62291d7808e6ed\") " pod="kube-system/kube-scheduler-ip-172-31-24-134" Sep 13 00:05:59.186372 kubelet[3027]: I0913 00:05:59.186188 3027 apiserver.go:52] "Watching apiserver" Sep 13 00:05:59.269820 kubelet[3027]: I0913 00:05:59.269765 3027 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:05:59.349231 sudo[3040]: pam_unix(sudo:session): session closed for user root Sep 13 00:05:59.568952 kubelet[3027]: I0913 00:05:59.568871 3027 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-134" podStartSLOduration=1.5688491070000001 podStartE2EDuration="1.568849107s" podCreationTimestamp="2025-09-13 00:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:59.554281921 +0000 UTC m=+1.540230210" watchObservedRunningTime="2025-09-13 00:05:59.568849107 +0000 UTC m=+1.554797396" Sep 13 00:05:59.569413 kubelet[3027]: I0913 00:05:59.569358 3027 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-134" podStartSLOduration=4.56934219 podStartE2EDuration="4.56934219s" podCreationTimestamp="2025-09-13 00:05:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:59.56883205 +0000 UTC m=+1.554780339" watchObservedRunningTime="2025-09-13 00:05:59.56934219 +0000 UTC m=+1.555290479" Sep 13 00:06:01.397166 amazon-ssm-agent[1887]: 2025-09-13 00:06:01 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Sep 13 00:06:02.151206 sudo[2181]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:02.175390 sshd[2177]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:02.183623 systemd-logind[1905]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:06:02.185239 systemd[1]: sshd@4-172.31.24.134:22-139.178.89.65:51434.service: Deactivated successfully. Sep 13 00:06:02.186658 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:06:02.189006 systemd-logind[1905]: Removed session 5. Sep 13 00:06:02.901505 kubelet[3027]: I0913 00:06:02.901460 3027 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:06:02.902403 env[1922]: time="2025-09-13T00:06:02.902316695Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:06:02.903122 kubelet[3027]: I0913 00:06:02.903035 3027 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:06:03.121242 kubelet[3027]: I0913 00:06:03.121165 3027 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-134" podStartSLOduration=5.12110737 podStartE2EDuration="5.12110737s" podCreationTimestamp="2025-09-13 00:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:59.601566813 +0000 UTC m=+1.587515114" watchObservedRunningTime="2025-09-13 00:06:03.12110737 +0000 UTC m=+5.107055659" Sep 13 00:06:03.428299 kubelet[3027]: I0913 00:06:03.421799 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlrs7\" (UniqueName: \"kubernetes.io/projected/65e7f71a-29b8-4584-bf2e-963c989d1a92-kube-api-access-hlrs7\") pod \"kube-proxy-pvr4j\" (UID: \"65e7f71a-29b8-4584-bf2e-963c989d1a92\") " pod="kube-system/kube-proxy-pvr4j" Sep 13 00:06:03.428299 kubelet[3027]: I0913 00:06:03.421875 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65e7f71a-29b8-4584-bf2e-963c989d1a92-xtables-lock\") pod \"kube-proxy-pvr4j\" (UID: \"65e7f71a-29b8-4584-bf2e-963c989d1a92\") " pod="kube-system/kube-proxy-pvr4j" Sep 13 00:06:03.428299 kubelet[3027]: I0913 00:06:03.421923 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/65e7f71a-29b8-4584-bf2e-963c989d1a92-kube-proxy\") pod \"kube-proxy-pvr4j\" (UID: \"65e7f71a-29b8-4584-bf2e-963c989d1a92\") " pod="kube-system/kube-proxy-pvr4j" Sep 13 00:06:03.428299 kubelet[3027]: I0913 00:06:03.421980 3027 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65e7f71a-29b8-4584-bf2e-963c989d1a92-lib-modules\") pod \"kube-proxy-pvr4j\" (UID: \"65e7f71a-29b8-4584-bf2e-963c989d1a92\") " pod="kube-system/kube-proxy-pvr4j" Sep 13 00:06:03.522579 kubelet[3027]: I0913 00:06:03.522515 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-host-proc-sys-kernel\") pod \"cilium-4j57n\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.522971 kubelet[3027]: I0913 00:06:03.522931 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-bpf-maps\") pod \"cilium-4j57n\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.523072 kubelet[3027]: I0913 00:06:03.522985 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-hostproc\") pod \"cilium-4j57n\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.523072 kubelet[3027]: I0913 00:06:03.523026 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldz24\" (UniqueName: \"kubernetes.io/projected/615030d2-cea6-4cb0-b51a-872174570eee-kube-api-access-ldz24\") pod \"cilium-4j57n\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.523223 kubelet[3027]: I0913 00:06:03.523102 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-etc-cni-netd\") pod \"cilium-4j57n\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.523223 kubelet[3027]: I0913 00:06:03.523137 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/615030d2-cea6-4cb0-b51a-872174570eee-hubble-tls\") pod \"cilium-4j57n\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.523223 kubelet[3027]: I0913 00:06:03.523174 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-cilium-run\") pod \"cilium-4j57n\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.523223 kubelet[3027]: I0913 00:06:03.523212 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-cilium-cgroup\") pod \"cilium-4j57n\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.523564 kubelet[3027]: I0913 00:06:03.523295 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/615030d2-cea6-4cb0-b51a-872174570eee-clustermesh-secrets\") pod \"cilium-4j57n\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.523564 kubelet[3027]: I0913 00:06:03.523349 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-host-proc-sys-net\") pod \"cilium-4j57n\" (UID: 
\"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.523564 kubelet[3027]: I0913 00:06:03.523388 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/615030d2-cea6-4cb0-b51a-872174570eee-cilium-config-path\") pod \"cilium-4j57n\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.523564 kubelet[3027]: I0913 00:06:03.523429 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-cni-path\") pod \"cilium-4j57n\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.523564 kubelet[3027]: I0913 00:06:03.523471 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-xtables-lock\") pod \"cilium-4j57n\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.523564 kubelet[3027]: I0913 00:06:03.523510 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-lib-modules\") pod \"cilium-4j57n\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") " pod="kube-system/cilium-4j57n" Sep 13 00:06:03.539602 kubelet[3027]: E0913 00:06:03.539557 3027 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 13 00:06:03.539823 kubelet[3027]: E0913 00:06:03.539801 3027 projected.go:194] Error preparing data for projected volume kube-api-access-hlrs7 for pod kube-system/kube-proxy-pvr4j: configmap "kube-root-ca.crt" not found Sep 13 00:06:03.540192 kubelet[3027]: 
E0913 00:06:03.540145 3027 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65e7f71a-29b8-4584-bf2e-963c989d1a92-kube-api-access-hlrs7 podName:65e7f71a-29b8-4584-bf2e-963c989d1a92 nodeName:}" failed. No retries permitted until 2025-09-13 00:06:04.040111765 +0000 UTC m=+6.026060030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hlrs7" (UniqueName: "kubernetes.io/projected/65e7f71a-29b8-4584-bf2e-963c989d1a92-kube-api-access-hlrs7") pod "kube-proxy-pvr4j" (UID: "65e7f71a-29b8-4584-bf2e-963c989d1a92") : configmap "kube-root-ca.crt" not found Sep 13 00:06:03.625940 kubelet[3027]: I0913 00:06:03.625881 3027 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:06:03.660751 kubelet[3027]: E0913 00:06:03.660702 3027 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 13 00:06:03.660952 kubelet[3027]: E0913 00:06:03.660930 3027 projected.go:194] Error preparing data for projected volume kube-api-access-ldz24 for pod kube-system/cilium-4j57n: configmap "kube-root-ca.crt" not found Sep 13 00:06:03.661165 kubelet[3027]: E0913 00:06:03.661142 3027 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/615030d2-cea6-4cb0-b51a-872174570eee-kube-api-access-ldz24 podName:615030d2-cea6-4cb0-b51a-872174570eee nodeName:}" failed. No retries permitted until 2025-09-13 00:06:04.161092907 +0000 UTC m=+6.147041184 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ldz24" (UniqueName: "kubernetes.io/projected/615030d2-cea6-4cb0-b51a-872174570eee-kube-api-access-ldz24") pod "cilium-4j57n" (UID: "615030d2-cea6-4cb0-b51a-872174570eee") : configmap "kube-root-ca.crt" not found Sep 13 00:06:04.128704 kubelet[3027]: I0913 00:06:04.128658 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9-cilium-config-path\") pod \"cilium-operator-5d85765b45-6wcrl\" (UID: \"1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9\") " pod="kube-system/cilium-operator-5d85765b45-6wcrl" Sep 13 00:06:04.129454 kubelet[3027]: I0913 00:06:04.129420 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rthj5\" (UniqueName: \"kubernetes.io/projected/1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9-kube-api-access-rthj5\") pod \"cilium-operator-5d85765b45-6wcrl\" (UID: \"1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9\") " pod="kube-system/cilium-operator-5d85765b45-6wcrl" Sep 13 00:06:04.294168 env[1922]: time="2025-09-13T00:06:04.294071968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvr4j,Uid:65e7f71a-29b8-4584-bf2e-963c989d1a92,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:04.315246 env[1922]: time="2025-09-13T00:06:04.314764773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4j57n,Uid:615030d2-cea6-4cb0-b51a-872174570eee,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:04.321902 env[1922]: time="2025-09-13T00:06:04.321847243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6wcrl,Uid:1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:04.325743 env[1922]: time="2025-09-13T00:06:04.325630476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:04.325904 env[1922]: time="2025-09-13T00:06:04.325762373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:04.325904 env[1922]: time="2025-09-13T00:06:04.325824090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:04.326360 env[1922]: time="2025-09-13T00:06:04.326193244Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3423914012c271aa367f167f514c52863778b99e25e573f0895b9abc099b8305 pid=3111 runtime=io.containerd.runc.v2 Sep 13 00:06:04.383406 env[1922]: time="2025-09-13T00:06:04.379594374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:04.386833 env[1922]: time="2025-09-13T00:06:04.379697525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:04.387169 env[1922]: time="2025-09-13T00:06:04.387074516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:04.389161 env[1922]: time="2025-09-13T00:06:04.388928708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:04.389538 env[1922]: time="2025-09-13T00:06:04.389473160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:04.389738 env[1922]: time="2025-09-13T00:06:04.389679497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:04.390719 env[1922]: time="2025-09-13T00:06:04.390599272Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478 pid=3146 runtime=io.containerd.runc.v2 Sep 13 00:06:04.391733 env[1922]: time="2025-09-13T00:06:04.391635484Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/983fc2168d83f6328788b2d70ccf39c59b2fe67f6aa16e1a056230863d8af59a pid=3144 runtime=io.containerd.runc.v2 Sep 13 00:06:04.494655 env[1922]: time="2025-09-13T00:06:04.494590154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvr4j,Uid:65e7f71a-29b8-4584-bf2e-963c989d1a92,Namespace:kube-system,Attempt:0,} returns sandbox id \"3423914012c271aa367f167f514c52863778b99e25e573f0895b9abc099b8305\"" Sep 13 00:06:04.499825 env[1922]: time="2025-09-13T00:06:04.499770014Z" level=info msg="CreateContainer within sandbox \"3423914012c271aa367f167f514c52863778b99e25e573f0895b9abc099b8305\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:06:04.544945 env[1922]: time="2025-09-13T00:06:04.544892191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4j57n,Uid:615030d2-cea6-4cb0-b51a-872174570eee,Namespace:kube-system,Attempt:0,} returns sandbox id \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\"" Sep 13 00:06:04.549666 env[1922]: time="2025-09-13T00:06:04.549518669Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:06:04.556201 env[1922]: time="2025-09-13T00:06:04.556079925Z" level=info msg="CreateContainer within sandbox \"3423914012c271aa367f167f514c52863778b99e25e573f0895b9abc099b8305\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"99da6ac1a1537f18eb246017e4d1b6609a45180b909886d35d2f0caec9e40129\"" Sep 13 00:06:04.557417 env[1922]: time="2025-09-13T00:06:04.557247998Z" level=info msg="StartContainer for \"99da6ac1a1537f18eb246017e4d1b6609a45180b909886d35d2f0caec9e40129\"" Sep 13 00:06:04.587886 env[1922]: time="2025-09-13T00:06:04.587829887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6wcrl,Uid:1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9,Namespace:kube-system,Attempt:0,} returns sandbox id \"983fc2168d83f6328788b2d70ccf39c59b2fe67f6aa16e1a056230863d8af59a\"" Sep 13 00:06:04.722319 env[1922]: time="2025-09-13T00:06:04.721345659Z" level=info msg="StartContainer for \"99da6ac1a1537f18eb246017e4d1b6609a45180b909886d35d2f0caec9e40129\" returns successfully" Sep 13 00:06:05.593875 kubelet[3027]: I0913 00:06:05.593689 3027 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pvr4j" podStartSLOduration=2.593552967 podStartE2EDuration="2.593552967s" podCreationTimestamp="2025-09-13 00:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:05.56694993 +0000 UTC m=+7.552898219" watchObservedRunningTime="2025-09-13 00:06:05.593552967 +0000 UTC m=+7.579501244" Sep 13 00:06:11.552727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount531262541.mount: Deactivated successfully. 
Sep 13 00:06:15.535044 env[1922]: time="2025-09-13T00:06:15.534986567Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:15.539386 env[1922]: time="2025-09-13T00:06:15.539337995Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:15.543145 env[1922]: time="2025-09-13T00:06:15.543097485Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:15.548317 env[1922]: time="2025-09-13T00:06:15.548025274Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 13 00:06:15.557854 env[1922]: time="2025-09-13T00:06:15.557785223Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:06:15.559148 env[1922]: time="2025-09-13T00:06:15.559079072Z" level=info msg="CreateContainer within sandbox \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:06:15.587408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount35622907.mount: Deactivated successfully. Sep 13 00:06:15.601830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3720852486.mount: Deactivated successfully. 
Sep 13 00:06:15.605960 env[1922]: time="2025-09-13T00:06:15.605875420Z" level=info msg="CreateContainer within sandbox \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c\"" Sep 13 00:06:15.607190 env[1922]: time="2025-09-13T00:06:15.607136372Z" level=info msg="StartContainer for \"e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c\"" Sep 13 00:06:15.723419 env[1922]: time="2025-09-13T00:06:15.723355486Z" level=info msg="StartContainer for \"e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c\" returns successfully" Sep 13 00:06:16.579700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c-rootfs.mount: Deactivated successfully. Sep 13 00:06:16.953159 env[1922]: time="2025-09-13T00:06:16.952823701Z" level=info msg="shim disconnected" id=e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c Sep 13 00:06:16.953159 env[1922]: time="2025-09-13T00:06:16.952917266Z" level=warning msg="cleaning up after shim disconnected" id=e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c namespace=k8s.io Sep 13 00:06:16.953159 env[1922]: time="2025-09-13T00:06:16.952973734Z" level=info msg="cleaning up dead shim" Sep 13 00:06:16.967784 env[1922]: time="2025-09-13T00:06:16.967704888Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:06:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3437 runtime=io.containerd.runc.v2\n" Sep 13 00:06:17.472715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1699670705.mount: Deactivated successfully. 
Sep 13 00:06:17.621580 env[1922]: time="2025-09-13T00:06:17.621522174Z" level=info msg="CreateContainer within sandbox \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:06:17.649250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount191748079.mount: Deactivated successfully. Sep 13 00:06:17.671542 env[1922]: time="2025-09-13T00:06:17.671480061Z" level=info msg="CreateContainer within sandbox \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76\"" Sep 13 00:06:17.674241 env[1922]: time="2025-09-13T00:06:17.674183769Z" level=info msg="StartContainer for \"cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76\"" Sep 13 00:06:17.837072 env[1922]: time="2025-09-13T00:06:17.836010836Z" level=info msg="StartContainer for \"cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76\" returns successfully" Sep 13 00:06:17.853657 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:06:17.858478 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:06:17.858844 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:06:17.867456 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:06:17.902561 systemd[1]: Finished systemd-sysctl.service. 
Sep 13 00:06:17.957959 env[1922]: time="2025-09-13T00:06:17.957880104Z" level=info msg="shim disconnected" id=cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76 Sep 13 00:06:17.958922 env[1922]: time="2025-09-13T00:06:17.958875653Z" level=warning msg="cleaning up after shim disconnected" id=cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76 namespace=k8s.io Sep 13 00:06:17.959068 env[1922]: time="2025-09-13T00:06:17.959036811Z" level=info msg="cleaning up dead shim" Sep 13 00:06:17.975915 env[1922]: time="2025-09-13T00:06:17.975843038Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:06:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3505 runtime=io.containerd.runc.v2\n" Sep 13 00:06:18.604055 env[1922]: time="2025-09-13T00:06:18.602907482Z" level=info msg="CreateContainer within sandbox \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:06:18.641648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76-rootfs.mount: Deactivated successfully. 
Sep 13 00:06:18.647542 env[1922]: time="2025-09-13T00:06:18.647451861Z" level=info msg="CreateContainer within sandbox \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0\"" Sep 13 00:06:18.650877 env[1922]: time="2025-09-13T00:06:18.650823424Z" level=info msg="StartContainer for \"5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0\"" Sep 13 00:06:18.665030 env[1922]: time="2025-09-13T00:06:18.664617674Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:18.675301 env[1922]: time="2025-09-13T00:06:18.675197245Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:18.680392 env[1922]: time="2025-09-13T00:06:18.680343857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:18.694282 env[1922]: time="2025-09-13T00:06:18.690170172Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 13 00:06:18.711293 env[1922]: time="2025-09-13T00:06:18.711173222Z" level=info msg="CreateContainer within sandbox \"983fc2168d83f6328788b2d70ccf39c59b2fe67f6aa16e1a056230863d8af59a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 
00:06:18.735569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount741130303.mount: Deactivated successfully. Sep 13 00:06:18.762300 env[1922]: time="2025-09-13T00:06:18.761135233Z" level=info msg="CreateContainer within sandbox \"983fc2168d83f6328788b2d70ccf39c59b2fe67f6aa16e1a056230863d8af59a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"249e237d1f85cdd5fdc8c869bffd3ad4e046b731479183926eb3a30437cf418d\"" Sep 13 00:06:18.765630 env[1922]: time="2025-09-13T00:06:18.763927239Z" level=info msg="StartContainer for \"249e237d1f85cdd5fdc8c869bffd3ad4e046b731479183926eb3a30437cf418d\"" Sep 13 00:06:18.861220 env[1922]: time="2025-09-13T00:06:18.861084976Z" level=info msg="StartContainer for \"5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0\" returns successfully" Sep 13 00:06:18.951441 env[1922]: time="2025-09-13T00:06:18.951349843Z" level=info msg="StartContainer for \"249e237d1f85cdd5fdc8c869bffd3ad4e046b731479183926eb3a30437cf418d\" returns successfully" Sep 13 00:06:19.017898 env[1922]: time="2025-09-13T00:06:19.017836201Z" level=info msg="shim disconnected" id=5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0 Sep 13 00:06:19.018695 env[1922]: time="2025-09-13T00:06:19.018659040Z" level=warning msg="cleaning up after shim disconnected" id=5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0 namespace=k8s.io Sep 13 00:06:19.018814 env[1922]: time="2025-09-13T00:06:19.018787097Z" level=info msg="cleaning up dead shim" Sep 13 00:06:19.038844 env[1922]: time="2025-09-13T00:06:19.038789583Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:06:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3601 runtime=io.containerd.runc.v2\n" Sep 13 00:06:19.633065 env[1922]: time="2025-09-13T00:06:19.632988293Z" level=info msg="CreateContainer within sandbox \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\" for container 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:06:19.643415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0-rootfs.mount: Deactivated successfully. Sep 13 00:06:19.668816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4035700457.mount: Deactivated successfully. Sep 13 00:06:19.701659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320792186.mount: Deactivated successfully. Sep 13 00:06:19.727083 env[1922]: time="2025-09-13T00:06:19.727016315Z" level=info msg="CreateContainer within sandbox \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15\"" Sep 13 00:06:19.728328 env[1922]: time="2025-09-13T00:06:19.728245311Z" level=info msg="StartContainer for \"880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15\"" Sep 13 00:06:19.773909 kubelet[3027]: I0913 00:06:19.773735 3027 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-6wcrl" podStartSLOduration=2.6588576550000003 podStartE2EDuration="16.773691443s" podCreationTimestamp="2025-09-13 00:06:03 +0000 UTC" firstStartedPulling="2025-09-13 00:06:04.590195404 +0000 UTC m=+6.576143669" lastFinishedPulling="2025-09-13 00:06:18.705029204 +0000 UTC m=+20.690977457" observedRunningTime="2025-09-13 00:06:19.773445663 +0000 UTC m=+21.759393940" watchObservedRunningTime="2025-09-13 00:06:19.773691443 +0000 UTC m=+21.759639720" Sep 13 00:06:20.029010 env[1922]: time="2025-09-13T00:06:20.028949402Z" level=info msg="StartContainer for \"880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15\" returns successfully" Sep 13 00:06:20.100291 env[1922]: time="2025-09-13T00:06:20.100201598Z" level=info msg="shim disconnected" 
id=880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15 Sep 13 00:06:20.100291 env[1922]: time="2025-09-13T00:06:20.100287649Z" level=warning msg="cleaning up after shim disconnected" id=880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15 namespace=k8s.io Sep 13 00:06:20.100637 env[1922]: time="2025-09-13T00:06:20.100310848Z" level=info msg="cleaning up dead shim" Sep 13 00:06:20.135953 env[1922]: time="2025-09-13T00:06:20.135879093Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:06:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3656 runtime=io.containerd.runc.v2\n" Sep 13 00:06:20.642783 env[1922]: time="2025-09-13T00:06:20.642705471Z" level=info msg="CreateContainer within sandbox \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:06:20.685973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1266153243.mount: Deactivated successfully. Sep 13 00:06:20.700857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1770354398.mount: Deactivated successfully. 
Sep 13 00:06:20.704647 env[1922]: time="2025-09-13T00:06:20.704566875Z" level=info msg="CreateContainer within sandbox \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\"" Sep 13 00:06:20.705835 env[1922]: time="2025-09-13T00:06:20.705754365Z" level=info msg="StartContainer for \"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\"" Sep 13 00:06:20.827086 env[1922]: time="2025-09-13T00:06:20.827022021Z" level=info msg="StartContainer for \"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\" returns successfully" Sep 13 00:06:21.124873 kubelet[3027]: I0913 00:06:21.124512 3027 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:06:21.275759 kubelet[3027]: I0913 00:06:21.275713 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzw96\" (UniqueName: \"kubernetes.io/projected/26d9c823-b906-4526-86c3-899f911cd489-kube-api-access-hzw96\") pod \"coredns-7c65d6cfc9-2fgvt\" (UID: \"26d9c823-b906-4526-86c3-899f911cd489\") " pod="kube-system/coredns-7c65d6cfc9-2fgvt" Sep 13 00:06:21.276026 kubelet[3027]: I0913 00:06:21.275998 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74152694-8bc7-408d-9647-384fec6df54e-config-volume\") pod \"coredns-7c65d6cfc9-4h54h\" (UID: \"74152694-8bc7-408d-9647-384fec6df54e\") " pod="kube-system/coredns-7c65d6cfc9-4h54h" Sep 13 00:06:21.276183 kubelet[3027]: I0913 00:06:21.276157 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26d9c823-b906-4526-86c3-899f911cd489-config-volume\") pod \"coredns-7c65d6cfc9-2fgvt\" (UID: 
\"26d9c823-b906-4526-86c3-899f911cd489\") " pod="kube-system/coredns-7c65d6cfc9-2fgvt" Sep 13 00:06:21.276384 kubelet[3027]: I0913 00:06:21.276347 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4gm5\" (UniqueName: \"kubernetes.io/projected/74152694-8bc7-408d-9647-384fec6df54e-kube-api-access-b4gm5\") pod \"coredns-7c65d6cfc9-4h54h\" (UID: \"74152694-8bc7-408d-9647-384fec6df54e\") " pod="kube-system/coredns-7c65d6cfc9-4h54h" Sep 13 00:06:21.282307 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 13 00:06:21.507982 env[1922]: time="2025-09-13T00:06:21.507444829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4h54h,Uid:74152694-8bc7-408d-9647-384fec6df54e,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:21.549302 env[1922]: time="2025-09-13T00:06:21.545794746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2fgvt,Uid:26d9c823-b906-4526-86c3-899f911cd489,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:22.175301 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 13 00:06:23.982157 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 00:06:23.982427 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:06:23.983117 systemd-networkd[1591]: cilium_host: Link UP Sep 13 00:06:23.983579 systemd-networkd[1591]: cilium_net: Link UP Sep 13 00:06:23.983909 systemd-networkd[1591]: cilium_net: Gained carrier Sep 13 00:06:23.985322 systemd-networkd[1591]: cilium_host: Gained carrier Sep 13 00:06:23.989624 (udev-worker)[3819]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:06:23.989627 (udev-worker)[3782]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:06:24.157351 systemd-networkd[1591]: cilium_vxlan: Link UP Sep 13 00:06:24.157373 systemd-networkd[1591]: cilium_vxlan: Gained carrier Sep 13 00:06:24.614183 systemd-networkd[1591]: cilium_net: Gained IPv6LL Sep 13 00:06:24.718297 kernel: NET: Registered PF_ALG protocol family Sep 13 00:06:24.932877 systemd-networkd[1591]: cilium_host: Gained IPv6LL Sep 13 00:06:25.700933 systemd-networkd[1591]: cilium_vxlan: Gained IPv6LL Sep 13 00:06:26.080686 systemd-networkd[1591]: lxc_health: Link UP Sep 13 00:06:26.093379 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:06:26.093233 systemd-networkd[1591]: lxc_health: Gained carrier Sep 13 00:06:26.366689 kubelet[3027]: I0913 00:06:26.366512 3027 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4j57n" podStartSLOduration=12.360249556 podStartE2EDuration="23.366485842s" podCreationTimestamp="2025-09-13 00:06:03 +0000 UTC" firstStartedPulling="2025-09-13 00:06:04.548573557 +0000 UTC m=+6.534521834" lastFinishedPulling="2025-09-13 00:06:15.554809867 +0000 UTC m=+17.540758120" observedRunningTime="2025-09-13 00:06:21.729173301 +0000 UTC m=+23.715121590" watchObservedRunningTime="2025-09-13 00:06:26.366485842 +0000 UTC m=+28.352434143" Sep 13 00:06:26.609240 systemd-networkd[1591]: lxce881213761d5: Link UP Sep 13 00:06:26.621313 kernel: eth0: renamed from tmp07dfe Sep 13 00:06:26.627391 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce881213761d5: link becomes ready Sep 13 00:06:26.627451 systemd-networkd[1591]: lxce881213761d5: Gained carrier Sep 13 00:06:26.672770 systemd-networkd[1591]: lxc5574e56dd7f5: Link UP Sep 13 00:06:26.692376 kernel: eth0: renamed from tmp8f26a Sep 13 00:06:26.704356 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5574e56dd7f5: link becomes ready Sep 13 00:06:26.701486 systemd-networkd[1591]: lxc5574e56dd7f5: Gained carrier Sep 13 00:06:27.685067 systemd-networkd[1591]: lxce881213761d5: Gained IPv6LL Sep 13 00:06:28.069096 
systemd-networkd[1591]: lxc5574e56dd7f5: Gained IPv6LL Sep 13 00:06:28.069638 systemd-networkd[1591]: lxc_health: Gained IPv6LL Sep 13 00:06:34.762061 env[1922]: time="2025-09-13T00:06:34.761891218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:34.762927 env[1922]: time="2025-09-13T00:06:34.762861307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:34.763120 env[1922]: time="2025-09-13T00:06:34.763072431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:34.763630 env[1922]: time="2025-09-13T00:06:34.763527035Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f26aa6cb9fa224d7dc83a0b527ba036b59f0f04182de2e7e0728697097cb393 pid=4195 runtime=io.containerd.runc.v2 Sep 13 00:06:34.845641 env[1922]: time="2025-09-13T00:06:34.845501403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:34.845875 env[1922]: time="2025-09-13T00:06:34.845590967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:34.845875 env[1922]: time="2025-09-13T00:06:34.845618822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:34.846069 env[1922]: time="2025-09-13T00:06:34.845911086Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/07dfe9babc279a169173e9470642f3afea88e738735fcc4cc17fe6a543363a38 pid=4211 runtime=io.containerd.runc.v2 Sep 13 00:06:35.037941 env[1922]: time="2025-09-13T00:06:35.035293014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2fgvt,Uid:26d9c823-b906-4526-86c3-899f911cd489,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f26aa6cb9fa224d7dc83a0b527ba036b59f0f04182de2e7e0728697097cb393\"" Sep 13 00:06:35.059924 env[1922]: time="2025-09-13T00:06:35.059838265Z" level=info msg="CreateContainer within sandbox \"8f26aa6cb9fa224d7dc83a0b527ba036b59f0f04182de2e7e0728697097cb393\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:06:35.098357 env[1922]: time="2025-09-13T00:06:35.098250192Z" level=info msg="CreateContainer within sandbox \"8f26aa6cb9fa224d7dc83a0b527ba036b59f0f04182de2e7e0728697097cb393\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11ea070fd6210de9cda88b414d7878ff03ab3a9ac5be6fe93d2daac61bf34605\"" Sep 13 00:06:35.101522 env[1922]: time="2025-09-13T00:06:35.101451435Z" level=info msg="StartContainer for \"11ea070fd6210de9cda88b414d7878ff03ab3a9ac5be6fe93d2daac61bf34605\"" Sep 13 00:06:35.111947 env[1922]: time="2025-09-13T00:06:35.106991531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4h54h,Uid:74152694-8bc7-408d-9647-384fec6df54e,Namespace:kube-system,Attempt:0,} returns sandbox id \"07dfe9babc279a169173e9470642f3afea88e738735fcc4cc17fe6a543363a38\"" Sep 13 00:06:35.124502 env[1922]: time="2025-09-13T00:06:35.124443115Z" level=info msg="CreateContainer within sandbox \"07dfe9babc279a169173e9470642f3afea88e738735fcc4cc17fe6a543363a38\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 
00:06:35.145223 env[1922]: time="2025-09-13T00:06:35.145160339Z" level=info msg="CreateContainer within sandbox \"07dfe9babc279a169173e9470642f3afea88e738735fcc4cc17fe6a543363a38\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f460be66798f239a329a158755e318a609dd4b7d5b17adf2e55563360b7db92\"" Sep 13 00:06:35.146791 env[1922]: time="2025-09-13T00:06:35.146723079Z" level=info msg="StartContainer for \"9f460be66798f239a329a158755e318a609dd4b7d5b17adf2e55563360b7db92\"" Sep 13 00:06:35.262170 env[1922]: time="2025-09-13T00:06:35.261955619Z" level=info msg="StartContainer for \"11ea070fd6210de9cda88b414d7878ff03ab3a9ac5be6fe93d2daac61bf34605\" returns successfully" Sep 13 00:06:35.309907 env[1922]: time="2025-09-13T00:06:35.309737781Z" level=info msg="StartContainer for \"9f460be66798f239a329a158755e318a609dd4b7d5b17adf2e55563360b7db92\" returns successfully" Sep 13 00:06:35.748578 kubelet[3027]: I0913 00:06:35.748235 3027 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4h54h" podStartSLOduration=32.748214153 podStartE2EDuration="32.748214153s" podCreationTimestamp="2025-09-13 00:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:35.748093086 +0000 UTC m=+37.734041387" watchObservedRunningTime="2025-09-13 00:06:35.748214153 +0000 UTC m=+37.734162442" Sep 13 00:06:44.258925 systemd[1]: Started sshd@5-172.31.24.134:22-139.178.89.65:33392.service. Sep 13 00:06:44.432720 sshd[4350]: Accepted publickey for core from 139.178.89.65 port 33392 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:44.435539 sshd[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:44.457155 systemd-logind[1905]: New session 6 of user core. Sep 13 00:06:44.459223 systemd[1]: Started session-6.scope. 
Sep 13 00:06:44.740198 sshd[4350]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:44.745034 systemd-logind[1905]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:06:44.745738 systemd[1]: sshd@5-172.31.24.134:22-139.178.89.65:33392.service: Deactivated successfully. Sep 13 00:06:44.747211 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:06:44.750200 systemd-logind[1905]: Removed session 6. Sep 13 00:06:49.767370 systemd[1]: Started sshd@6-172.31.24.134:22-139.178.89.65:33396.service. Sep 13 00:06:49.955426 sshd[4363]: Accepted publickey for core from 139.178.89.65 port 33396 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:49.957818 sshd[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:49.965793 systemd-logind[1905]: New session 7 of user core. Sep 13 00:06:49.966850 systemd[1]: Started session-7.scope. Sep 13 00:06:50.210619 sshd[4363]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:50.215995 systemd-logind[1905]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:06:50.216412 systemd[1]: sshd@6-172.31.24.134:22-139.178.89.65:33396.service: Deactivated successfully. Sep 13 00:06:50.218417 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:06:50.220766 systemd-logind[1905]: Removed session 7. Sep 13 00:06:55.236828 systemd[1]: Started sshd@7-172.31.24.134:22-139.178.89.65:53188.service. Sep 13 00:06:55.414283 sshd[4377]: Accepted publickey for core from 139.178.89.65 port 53188 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:55.417718 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:55.426507 systemd[1]: Started session-8.scope. Sep 13 00:06:55.427372 systemd-logind[1905]: New session 8 of user core. 
Sep 13 00:06:55.679624 sshd[4377]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:55.684949 systemd[1]: sshd@7-172.31.24.134:22-139.178.89.65:53188.service: Deactivated successfully. Sep 13 00:06:55.686444 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:06:55.688056 systemd-logind[1905]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:06:55.690082 systemd-logind[1905]: Removed session 8. Sep 13 00:07:00.706676 systemd[1]: Started sshd@8-172.31.24.134:22-139.178.89.65:53048.service. Sep 13 00:07:00.889579 sshd[4393]: Accepted publickey for core from 139.178.89.65 port 53048 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:00.892725 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:00.901579 systemd[1]: Started session-9.scope. Sep 13 00:07:00.903382 systemd-logind[1905]: New session 9 of user core. Sep 13 00:07:01.167465 sshd[4393]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:01.172422 systemd[1]: sshd@8-172.31.24.134:22-139.178.89.65:53048.service: Deactivated successfully. Sep 13 00:07:01.174713 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:07:01.175223 systemd-logind[1905]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:07:01.177228 systemd-logind[1905]: Removed session 9. Sep 13 00:07:06.192639 systemd[1]: Started sshd@9-172.31.24.134:22-139.178.89.65:53052.service. Sep 13 00:07:06.368589 sshd[4408]: Accepted publickey for core from 139.178.89.65 port 53052 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:06.371758 sshd[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:06.382647 systemd[1]: Started session-10.scope. Sep 13 00:07:06.384419 systemd-logind[1905]: New session 10 of user core. 
Sep 13 00:07:06.635851 sshd[4408]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:06.641497 systemd-logind[1905]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:07:06.642857 systemd[1]: sshd@9-172.31.24.134:22-139.178.89.65:53052.service: Deactivated successfully. Sep 13 00:07:06.644477 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:07:06.645708 systemd-logind[1905]: Removed session 10. Sep 13 00:07:06.663552 systemd[1]: Started sshd@10-172.31.24.134:22-139.178.89.65:53054.service. Sep 13 00:07:06.846672 sshd[4422]: Accepted publickey for core from 139.178.89.65 port 53054 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:06.849195 sshd[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:06.860752 systemd[1]: Started session-11.scope. Sep 13 00:07:06.861173 systemd-logind[1905]: New session 11 of user core. Sep 13 00:07:07.207583 sshd[4422]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:07.213153 systemd[1]: sshd@10-172.31.24.134:22-139.178.89.65:53054.service: Deactivated successfully. Sep 13 00:07:07.215565 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:07:07.216798 systemd-logind[1905]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:07:07.219196 systemd-logind[1905]: Removed session 11. Sep 13 00:07:07.237506 systemd[1]: Started sshd@11-172.31.24.134:22-139.178.89.65:53064.service. Sep 13 00:07:07.420442 sshd[4433]: Accepted publickey for core from 139.178.89.65 port 53064 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:07.422995 sshd[4433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:07.432382 systemd[1]: Started session-12.scope. Sep 13 00:07:07.432803 systemd-logind[1905]: New session 12 of user core. 
Sep 13 00:07:07.679669 sshd[4433]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:07.685030 systemd[1]: sshd@11-172.31.24.134:22-139.178.89.65:53064.service: Deactivated successfully.
Sep 13 00:07:07.687246 systemd-logind[1905]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:07:07.688759 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:07:07.690683 systemd-logind[1905]: Removed session 12.
Sep 13 00:07:12.705756 systemd[1]: Started sshd@12-172.31.24.134:22-139.178.89.65:48370.service.
Sep 13 00:07:12.886050 sshd[4446]: Accepted publickey for core from 139.178.89.65 port 48370 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:12.889462 sshd[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:12.897354 systemd-logind[1905]: New session 13 of user core.
Sep 13 00:07:12.898579 systemd[1]: Started session-13.scope.
Sep 13 00:07:13.143375 sshd[4446]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:13.148520 systemd[1]: sshd@12-172.31.24.134:22-139.178.89.65:48370.service: Deactivated successfully.
Sep 13 00:07:13.151135 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:07:13.152461 systemd-logind[1905]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:07:13.154910 systemd-logind[1905]: Removed session 13.
Sep 13 00:07:18.168660 systemd[1]: Started sshd@13-172.31.24.134:22-139.178.89.65:48384.service.
Sep 13 00:07:18.344138 sshd[4459]: Accepted publickey for core from 139.178.89.65 port 48384 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:18.346845 sshd[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:18.355914 systemd[1]: Started session-14.scope.
Sep 13 00:07:18.356554 systemd-logind[1905]: New session 14 of user core.
Sep 13 00:07:18.612975 sshd[4459]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:18.618491 systemd-logind[1905]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:07:18.618496 systemd[1]: sshd@13-172.31.24.134:22-139.178.89.65:48384.service: Deactivated successfully.
Sep 13 00:07:18.620460 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:07:18.624857 systemd-logind[1905]: Removed session 14.
Sep 13 00:07:23.636199 systemd[1]: Started sshd@14-172.31.24.134:22-139.178.89.65:42422.service.
Sep 13 00:07:23.810608 sshd[4472]: Accepted publickey for core from 139.178.89.65 port 42422 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:23.813402 sshd[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:23.821360 systemd-logind[1905]: New session 15 of user core.
Sep 13 00:07:23.822877 systemd[1]: Started session-15.scope.
Sep 13 00:07:24.081802 sshd[4472]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:24.087214 systemd-logind[1905]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:07:24.087829 systemd[1]: sshd@14-172.31.24.134:22-139.178.89.65:42422.service: Deactivated successfully.
Sep 13 00:07:24.089962 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:07:24.091514 systemd-logind[1905]: Removed session 15.
Sep 13 00:07:29.108900 systemd[1]: Started sshd@15-172.31.24.134:22-139.178.89.65:42428.service.
Sep 13 00:07:29.287458 sshd[4485]: Accepted publickey for core from 139.178.89.65 port 42428 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:29.290615 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:29.299615 systemd[1]: Started session-16.scope.
Sep 13 00:07:29.300612 systemd-logind[1905]: New session 16 of user core.
Sep 13 00:07:29.550844 sshd[4485]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:29.555560 systemd[1]: sshd@15-172.31.24.134:22-139.178.89.65:42428.service: Deactivated successfully.
Sep 13 00:07:29.557072 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:07:29.559957 systemd-logind[1905]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:07:29.562361 systemd-logind[1905]: Removed session 16.
Sep 13 00:07:29.576710 systemd[1]: Started sshd@16-172.31.24.134:22-139.178.89.65:42434.service.
Sep 13 00:07:29.752035 sshd[4498]: Accepted publickey for core from 139.178.89.65 port 42434 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:29.755409 sshd[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:29.764368 systemd[1]: Started session-17.scope.
Sep 13 00:07:29.765117 systemd-logind[1905]: New session 17 of user core.
Sep 13 00:07:30.113133 sshd[4498]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:30.118033 systemd-logind[1905]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:07:30.118691 systemd[1]: sshd@16-172.31.24.134:22-139.178.89.65:42434.service: Deactivated successfully.
Sep 13 00:07:30.121078 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:07:30.122656 systemd-logind[1905]: Removed session 17.
Sep 13 00:07:30.139557 systemd[1]: Started sshd@17-172.31.24.134:22-139.178.89.65:49050.service.
Sep 13 00:07:30.314480 sshd[4508]: Accepted publickey for core from 139.178.89.65 port 49050 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:30.316966 sshd[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:30.325222 systemd-logind[1905]: New session 18 of user core.
Sep 13 00:07:30.326247 systemd[1]: Started session-18.scope.
Sep 13 00:07:32.605645 sshd[4508]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:32.612362 systemd[1]: sshd@17-172.31.24.134:22-139.178.89.65:49050.service: Deactivated successfully.
Sep 13 00:07:32.614792 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:07:32.615321 systemd-logind[1905]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:07:32.623175 systemd-logind[1905]: Removed session 18.
Sep 13 00:07:32.634560 systemd[1]: Started sshd@18-172.31.24.134:22-139.178.89.65:49062.service.
Sep 13 00:07:32.822386 sshd[4525]: Accepted publickey for core from 139.178.89.65 port 49062 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:32.824469 sshd[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:32.833762 systemd[1]: Started session-19.scope.
Sep 13 00:07:32.834193 systemd-logind[1905]: New session 19 of user core.
Sep 13 00:07:33.341584 sshd[4525]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:33.346609 systemd-logind[1905]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:07:33.347184 systemd[1]: sshd@18-172.31.24.134:22-139.178.89.65:49062.service: Deactivated successfully.
Sep 13 00:07:33.348955 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:07:33.350906 systemd-logind[1905]: Removed session 19.
Sep 13 00:07:33.368614 systemd[1]: Started sshd@19-172.31.24.134:22-139.178.89.65:49068.service.
Sep 13 00:07:33.551465 sshd[4537]: Accepted publickey for core from 139.178.89.65 port 49068 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:33.553817 sshd[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:33.563527 systemd-logind[1905]: New session 20 of user core.
Sep 13 00:07:33.563622 systemd[1]: Started session-20.scope.
Sep 13 00:07:33.811549 sshd[4537]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:33.816300 systemd-logind[1905]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:07:33.816877 systemd[1]: sshd@19-172.31.24.134:22-139.178.89.65:49068.service: Deactivated successfully.
Sep 13 00:07:33.819740 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:07:33.821032 systemd-logind[1905]: Removed session 20.
Sep 13 00:07:38.837188 systemd[1]: Started sshd@20-172.31.24.134:22-139.178.89.65:49078.service.
Sep 13 00:07:39.021245 sshd[4552]: Accepted publickey for core from 139.178.89.65 port 49078 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:39.023801 sshd[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:39.032894 systemd[1]: Started session-21.scope.
Sep 13 00:07:39.035489 systemd-logind[1905]: New session 21 of user core.
Sep 13 00:07:39.280581 sshd[4552]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:39.285153 systemd-logind[1905]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:07:39.286505 systemd[1]: sshd@20-172.31.24.134:22-139.178.89.65:49078.service: Deactivated successfully.
Sep 13 00:07:39.288083 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:07:39.289451 systemd-logind[1905]: Removed session 21.
Sep 13 00:07:44.307627 systemd[1]: Started sshd@21-172.31.24.134:22-139.178.89.65:58710.service.
Sep 13 00:07:44.484810 sshd[4568]: Accepted publickey for core from 139.178.89.65 port 58710 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:44.487488 sshd[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:44.496558 systemd[1]: Started session-22.scope.
Sep 13 00:07:44.496973 systemd-logind[1905]: New session 22 of user core.
Sep 13 00:07:44.753032 sshd[4568]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:44.759580 systemd-logind[1905]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:07:44.761471 systemd[1]: sshd@21-172.31.24.134:22-139.178.89.65:58710.service: Deactivated successfully.
Sep 13 00:07:44.763078 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:07:44.766523 systemd-logind[1905]: Removed session 22.
Sep 13 00:07:49.778388 systemd[1]: Started sshd@22-172.31.24.134:22-139.178.89.65:58712.service.
Sep 13 00:07:49.955533 sshd[4581]: Accepted publickey for core from 139.178.89.65 port 58712 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:49.958785 sshd[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:49.967753 systemd[1]: Started session-23.scope.
Sep 13 00:07:49.968171 systemd-logind[1905]: New session 23 of user core.
Sep 13 00:07:50.217336 sshd[4581]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:50.223360 systemd[1]: sshd@22-172.31.24.134:22-139.178.89.65:58712.service: Deactivated successfully.
Sep 13 00:07:50.225291 systemd-logind[1905]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:07:50.225428 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:07:50.238361 systemd-logind[1905]: Removed session 23.
Sep 13 00:07:55.243970 systemd[1]: Started sshd@23-172.31.24.134:22-139.178.89.65:44538.service.
Sep 13 00:07:55.422874 sshd[4594]: Accepted publickey for core from 139.178.89.65 port 44538 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:55.424971 sshd[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:55.433340 systemd-logind[1905]: New session 24 of user core.
Sep 13 00:07:55.434356 systemd[1]: Started session-24.scope.
Sep 13 00:07:55.676452 sshd[4594]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:55.681710 systemd[1]: sshd@23-172.31.24.134:22-139.178.89.65:44538.service: Deactivated successfully.
Sep 13 00:07:55.683968 systemd-logind[1905]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:07:55.684177 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:07:55.687524 systemd-logind[1905]: Removed session 24.
Sep 13 00:07:55.701905 systemd[1]: Started sshd@24-172.31.24.134:22-139.178.89.65:44548.service.
Sep 13 00:07:55.887337 sshd[4607]: Accepted publickey for core from 139.178.89.65 port 44548 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:55.890375 sshd[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:55.899505 systemd-logind[1905]: New session 25 of user core.
Sep 13 00:07:55.899816 systemd[1]: Started session-25.scope.
Sep 13 00:07:58.675706 kubelet[3027]: I0913 00:07:58.675599 3027 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2fgvt" podStartSLOduration=115.675576756 podStartE2EDuration="1m55.675576756s" podCreationTimestamp="2025-09-13 00:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:35.859954027 +0000 UTC m=+37.845902316" watchObservedRunningTime="2025-09-13 00:07:58.675576756 +0000 UTC m=+120.661525033"
Sep 13 00:07:58.714775 systemd[1]: run-containerd-runc-k8s.io-d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3-runc.MrJ2SZ.mount: Deactivated successfully.
Sep 13 00:07:58.719707 env[1922]: time="2025-09-13T00:07:58.719652804Z" level=info msg="StopContainer for \"249e237d1f85cdd5fdc8c869bffd3ad4e046b731479183926eb3a30437cf418d\" with timeout 30 (s)"
Sep 13 00:07:58.727903 env[1922]: time="2025-09-13T00:07:58.726120848Z" level=info msg="Stop container \"249e237d1f85cdd5fdc8c869bffd3ad4e046b731479183926eb3a30437cf418d\" with signal terminated"
Sep 13 00:07:58.785816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-249e237d1f85cdd5fdc8c869bffd3ad4e046b731479183926eb3a30437cf418d-rootfs.mount: Deactivated successfully.
Sep 13 00:07:58.801221 env[1922]: time="2025-09-13T00:07:58.801124544Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:07:58.818536 env[1922]: time="2025-09-13T00:07:58.818460408Z" level=info msg="shim disconnected" id=249e237d1f85cdd5fdc8c869bffd3ad4e046b731479183926eb3a30437cf418d
Sep 13 00:07:58.818536 env[1922]: time="2025-09-13T00:07:58.818531631Z" level=warning msg="cleaning up after shim disconnected" id=249e237d1f85cdd5fdc8c869bffd3ad4e046b731479183926eb3a30437cf418d namespace=k8s.io
Sep 13 00:07:58.818916 env[1922]: time="2025-09-13T00:07:58.818554000Z" level=info msg="cleaning up dead shim"
Sep 13 00:07:58.824159 env[1922]: time="2025-09-13T00:07:58.823699346Z" level=info msg="StopContainer for \"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\" with timeout 2 (s)"
Sep 13 00:07:58.824888 env[1922]: time="2025-09-13T00:07:58.824837079Z" level=info msg="Stop container \"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\" with signal terminated"
Sep 13 00:07:58.844385 env[1922]: time="2025-09-13T00:07:58.842762025Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4657 runtime=io.containerd.runc.v2\n"
Sep 13 00:07:58.848633 env[1922]: time="2025-09-13T00:07:58.848560715Z" level=info msg="StopContainer for \"249e237d1f85cdd5fdc8c869bffd3ad4e046b731479183926eb3a30437cf418d\" returns successfully"
Sep 13 00:07:58.849660 env[1922]: time="2025-09-13T00:07:58.849612610Z" level=info msg="StopPodSandbox for \"983fc2168d83f6328788b2d70ccf39c59b2fe67f6aa16e1a056230863d8af59a\""
Sep 13 00:07:58.851420 env[1922]: time="2025-09-13T00:07:58.851330461Z" level=info msg="Container to stop \"249e237d1f85cdd5fdc8c869bffd3ad4e046b731479183926eb3a30437cf418d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:07:58.861015 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-983fc2168d83f6328788b2d70ccf39c59b2fe67f6aa16e1a056230863d8af59a-shm.mount: Deactivated successfully.
Sep 13 00:07:58.865216 systemd-networkd[1591]: lxc_health: Link DOWN
Sep 13 00:07:58.865238 systemd-networkd[1591]: lxc_health: Lost carrier
Sep 13 00:07:58.932219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3-rootfs.mount: Deactivated successfully.
Sep 13 00:07:58.949857 env[1922]: time="2025-09-13T00:07:58.949776297Z" level=info msg="shim disconnected" id=d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3
Sep 13 00:07:58.949857 env[1922]: time="2025-09-13T00:07:58.949848696Z" level=warning msg="cleaning up after shim disconnected" id=d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3 namespace=k8s.io
Sep 13 00:07:58.950232 env[1922]: time="2025-09-13T00:07:58.949871221Z" level=info msg="cleaning up dead shim"
Sep 13 00:07:58.951043 env[1922]: time="2025-09-13T00:07:58.950969389Z" level=info msg="shim disconnected" id=983fc2168d83f6328788b2d70ccf39c59b2fe67f6aa16e1a056230863d8af59a
Sep 13 00:07:58.951183 env[1922]: time="2025-09-13T00:07:58.951042904Z" level=warning msg="cleaning up after shim disconnected" id=983fc2168d83f6328788b2d70ccf39c59b2fe67f6aa16e1a056230863d8af59a namespace=k8s.io
Sep 13 00:07:58.951183 env[1922]: time="2025-09-13T00:07:58.951064973Z" level=info msg="cleaning up dead shim"
Sep 13 00:07:58.975580 env[1922]: time="2025-09-13T00:07:58.975393628Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4712 runtime=io.containerd.runc.v2\n"
Sep 13 00:07:58.977154 env[1922]: time="2025-09-13T00:07:58.977104051Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4717 runtime=io.containerd.runc.v2\n"
Sep 13 00:07:58.979147 env[1922]: time="2025-09-13T00:07:58.979099211Z" level=info msg="TearDown network for sandbox \"983fc2168d83f6328788b2d70ccf39c59b2fe67f6aa16e1a056230863d8af59a\" successfully"
Sep 13 00:07:58.979394 env[1922]: time="2025-09-13T00:07:58.979360822Z" level=info msg="StopPodSandbox for \"983fc2168d83f6328788b2d70ccf39c59b2fe67f6aa16e1a056230863d8af59a\" returns successfully"
Sep 13 00:07:58.981805 env[1922]: time="2025-09-13T00:07:58.979805909Z" level=info msg="StopContainer for \"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\" returns successfully"
Sep 13 00:07:58.983049 env[1922]: time="2025-09-13T00:07:58.982989445Z" level=info msg="StopPodSandbox for \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\""
Sep 13 00:07:58.983205 env[1922]: time="2025-09-13T00:07:58.983089133Z" level=info msg="Container to stop \"5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:07:58.983205 env[1922]: time="2025-09-13T00:07:58.983122099Z" level=info msg="Container to stop \"880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:07:58.983205 env[1922]: time="2025-09-13T00:07:58.983148668Z" level=info msg="Container to stop \"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:07:58.983205 env[1922]: time="2025-09-13T00:07:58.983177025Z" level=info msg="Container to stop \"e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:07:58.983546 env[1922]: time="2025-09-13T00:07:58.983202982Z" level=info msg="Container to stop \"cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:07:59.041692 env[1922]: time="2025-09-13T00:07:59.041617733Z" level=info msg="shim disconnected" id=09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478
Sep 13 00:07:59.041950 env[1922]: time="2025-09-13T00:07:59.041690324Z" level=warning msg="cleaning up after shim disconnected" id=09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478 namespace=k8s.io
Sep 13 00:07:59.041950 env[1922]: time="2025-09-13T00:07:59.041712777Z" level=info msg="cleaning up dead shim"
Sep 13 00:07:59.056372 env[1922]: time="2025-09-13T00:07:59.056241617Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4757 runtime=io.containerd.runc.v2\n"
Sep 13 00:07:59.056898 env[1922]: time="2025-09-13T00:07:59.056850032Z" level=info msg="TearDown network for sandbox \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\" successfully"
Sep 13 00:07:59.057009 env[1922]: time="2025-09-13T00:07:59.056899006Z" level=info msg="StopPodSandbox for \"09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478\" returns successfully"
Sep 13 00:07:59.146188 kubelet[3027]: I0913 00:07:59.146137 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rthj5\" (UniqueName: \"kubernetes.io/projected/1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9-kube-api-access-rthj5\") pod \"1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9\" (UID: \"1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9\") "
Sep 13 00:07:59.146469 kubelet[3027]: I0913 00:07:59.146215 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9-cilium-config-path\") pod \"1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9\" (UID: \"1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9\") "
Sep 13 00:07:59.151953 kubelet[3027]: I0913 00:07:59.151884 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9" (UID: "1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:07:59.156302 kubelet[3027]: I0913 00:07:59.156223 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9-kube-api-access-rthj5" (OuterVolumeSpecName: "kube-api-access-rthj5") pod "1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9" (UID: "1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9"). InnerVolumeSpecName "kube-api-access-rthj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:07:59.248297 kubelet[3027]: I0913 00:07:59.246977 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldz24\" (UniqueName: \"kubernetes.io/projected/615030d2-cea6-4cb0-b51a-872174570eee-kube-api-access-ldz24\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.248574 kubelet[3027]: I0913 00:07:59.248538 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-host-proc-sys-net\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.248728 kubelet[3027]: I0913 00:07:59.248703 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-host-proc-sys-kernel\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.248871 kubelet[3027]: I0913 00:07:59.248845 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-etc-cni-netd\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.249025 kubelet[3027]: I0913 00:07:59.249000 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/615030d2-cea6-4cb0-b51a-872174570eee-clustermesh-secrets\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.249163 kubelet[3027]: I0913 00:07:59.249139 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-xtables-lock\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.249344 kubelet[3027]: I0913 00:07:59.249318 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-hostproc\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.249507 kubelet[3027]: I0913 00:07:59.249482 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/615030d2-cea6-4cb0-b51a-872174570eee-hubble-tls\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.249650 kubelet[3027]: I0913 00:07:59.249625 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-cilium-run\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.249799 kubelet[3027]: I0913 00:07:59.249773 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-cni-path\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.249966 kubelet[3027]: I0913 00:07:59.249939 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/615030d2-cea6-4cb0-b51a-872174570eee-cilium-config-path\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.250113 kubelet[3027]: I0913 00:07:59.250088 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-bpf-maps\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.250285 kubelet[3027]: I0913 00:07:59.250232 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-cilium-cgroup\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.250440 kubelet[3027]: I0913 00:07:59.250415 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-lib-modules\") pod \"615030d2-cea6-4cb0-b51a-872174570eee\" (UID: \"615030d2-cea6-4cb0-b51a-872174570eee\") "
Sep 13 00:07:59.250617 kubelet[3027]: I0913 00:07:59.250590 3027 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rthj5\" (UniqueName: \"kubernetes.io/projected/1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9-kube-api-access-rthj5\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:07:59.250753 kubelet[3027]: I0913 00:07:59.250730 3027 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9-cilium-config-path\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:07:59.251019 kubelet[3027]: I0913 00:07:59.250985 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:07:59.251189 kubelet[3027]: I0913 00:07:59.251161 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:07:59.251368 kubelet[3027]: I0913 00:07:59.251342 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:07:59.252670 kubelet[3027]: I0913 00:07:59.252602 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/615030d2-cea6-4cb0-b51a-872174570eee-kube-api-access-ldz24" (OuterVolumeSpecName: "kube-api-access-ldz24") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "kube-api-access-ldz24". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:07:59.252832 kubelet[3027]: I0913 00:07:59.252697 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-cni-path" (OuterVolumeSpecName: "cni-path") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:07:59.252832 kubelet[3027]: I0913 00:07:59.252742 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:07:59.252832 kubelet[3027]: I0913 00:07:59.252781 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-hostproc" (OuterVolumeSpecName: "hostproc") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:07:59.257215 kubelet[3027]: I0913 00:07:59.257149 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/615030d2-cea6-4cb0-b51a-872174570eee-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 00:07:59.258191 kubelet[3027]: I0913 00:07:59.258129 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/615030d2-cea6-4cb0-b51a-872174570eee-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:07:59.258360 kubelet[3027]: I0913 00:07:59.258211 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:07:59.258360 kubelet[3027]: I0913 00:07:59.258326 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:07:59.258505 kubelet[3027]: I0913 00:07:59.258368 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:07:59.258505 kubelet[3027]: I0913 00:07:59.258462 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:07:59.263248 kubelet[3027]: I0913 00:07:59.263202 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/615030d2-cea6-4cb0-b51a-872174570eee-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "615030d2-cea6-4cb0-b51a-872174570eee" (UID: "615030d2-cea6-4cb0-b51a-872174570eee"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:07:59.351600 kubelet[3027]: I0913 00:07:59.351548 3027 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-cilium-cgroup\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:07:59.351600 kubelet[3027]: I0913 00:07:59.351601 3027 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-lib-modules\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:07:59.351838 kubelet[3027]: I0913 00:07:59.351629 3027 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldz24\" (UniqueName: \"kubernetes.io/projected/615030d2-cea6-4cb0-b51a-872174570eee-kube-api-access-ldz24\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:07:59.351838 kubelet[3027]: I0913 00:07:59.351653 3027 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-host-proc-sys-net\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:07:59.351838 kubelet[3027]: I0913 00:07:59.351678 3027 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-host-proc-sys-kernel\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:07:59.351838 kubelet[3027]: I0913 00:07:59.351701 3027 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-etc-cni-netd\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:07:59.351838 kubelet[3027]: I0913 00:07:59.351722 3027 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/615030d2-cea6-4cb0-b51a-872174570eee-clustermesh-secrets\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:07:59.351838 kubelet[3027]: I0913 00:07:59.351743 3027 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-xtables-lock\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:07:59.351838 kubelet[3027]: I0913 00:07:59.351764 3027 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-cilium-run\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:07:59.351838 kubelet[3027]: I0913 00:07:59.351785 3027 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-hostproc\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:07:59.352373 kubelet[3027]: I0913 00:07:59.351805 3027 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName:
\"kubernetes.io/projected/615030d2-cea6-4cb0-b51a-872174570eee-hubble-tls\") on node \"ip-172-31-24-134\" DevicePath \"\"" Sep 13 00:07:59.352373 kubelet[3027]: I0913 00:07:59.351826 3027 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/615030d2-cea6-4cb0-b51a-872174570eee-cilium-config-path\") on node \"ip-172-31-24-134\" DevicePath \"\"" Sep 13 00:07:59.352373 kubelet[3027]: I0913 00:07:59.351846 3027 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-cni-path\") on node \"ip-172-31-24-134\" DevicePath \"\"" Sep 13 00:07:59.352373 kubelet[3027]: I0913 00:07:59.351867 3027 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/615030d2-cea6-4cb0-b51a-872174570eee-bpf-maps\") on node \"ip-172-31-24-134\" DevicePath \"\"" Sep 13 00:07:59.704691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478-rootfs.mount: Deactivated successfully. Sep 13 00:07:59.704977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-983fc2168d83f6328788b2d70ccf39c59b2fe67f6aa16e1a056230863d8af59a-rootfs.mount: Deactivated successfully. Sep 13 00:07:59.705200 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-09fb97bd5276c6d2a3b804b4e82052ddfba47837862c8a1c82d33416aaf76478-shm.mount: Deactivated successfully. Sep 13 00:07:59.705513 systemd[1]: var-lib-kubelet-pods-1c3633e3\x2d04c6\x2d4a9f\x2d9373\x2d5a89f4f9ffa9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drthj5.mount: Deactivated successfully. Sep 13 00:07:59.705757 systemd[1]: var-lib-kubelet-pods-615030d2\x2dcea6\x2d4cb0\x2db51a\x2d872174570eee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dldz24.mount: Deactivated successfully. 
Sep 13 00:07:59.705993 systemd[1]: var-lib-kubelet-pods-615030d2\x2dcea6\x2d4cb0\x2db51a\x2d872174570eee-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:07:59.706292 systemd[1]: var-lib-kubelet-pods-615030d2\x2dcea6\x2d4cb0\x2db51a\x2d872174570eee-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:07:59.974885 kubelet[3027]: I0913 00:07:59.974187 3027 scope.go:117] "RemoveContainer" containerID="d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3" Sep 13 00:07:59.983073 env[1922]: time="2025-09-13T00:07:59.983005764Z" level=info msg="RemoveContainer for \"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\"" Sep 13 00:07:59.993676 env[1922]: time="2025-09-13T00:07:59.993605102Z" level=info msg="RemoveContainer for \"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\" returns successfully" Sep 13 00:07:59.994476 kubelet[3027]: I0913 00:07:59.994438 3027 scope.go:117] "RemoveContainer" containerID="880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15" Sep 13 00:07:59.999016 env[1922]: time="2025-09-13T00:07:59.998418155Z" level=info msg="RemoveContainer for \"880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15\"" Sep 13 00:08:00.017558 env[1922]: time="2025-09-13T00:08:00.017484786Z" level=info msg="RemoveContainer for \"880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15\" returns successfully" Sep 13 00:08:00.018009 kubelet[3027]: I0913 00:08:00.017980 3027 scope.go:117] "RemoveContainer" containerID="5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0" Sep 13 00:08:00.024648 env[1922]: time="2025-09-13T00:08:00.024584351Z" level=info msg="RemoveContainer for \"5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0\"" Sep 13 00:08:00.034032 env[1922]: time="2025-09-13T00:08:00.033300977Z" level=info msg="RemoveContainer for 
\"5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0\" returns successfully" Sep 13 00:08:00.034454 kubelet[3027]: I0913 00:08:00.033739 3027 scope.go:117] "RemoveContainer" containerID="cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76" Sep 13 00:08:00.038347 env[1922]: time="2025-09-13T00:08:00.038173603Z" level=info msg="RemoveContainer for \"cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76\"" Sep 13 00:08:00.046769 env[1922]: time="2025-09-13T00:08:00.046704028Z" level=info msg="RemoveContainer for \"cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76\" returns successfully" Sep 13 00:08:00.047438 kubelet[3027]: I0913 00:08:00.047395 3027 scope.go:117] "RemoveContainer" containerID="e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c" Sep 13 00:08:00.049751 env[1922]: time="2025-09-13T00:08:00.049559484Z" level=info msg="RemoveContainer for \"e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c\"" Sep 13 00:08:00.057583 env[1922]: time="2025-09-13T00:08:00.057526016Z" level=info msg="RemoveContainer for \"e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c\" returns successfully" Sep 13 00:08:00.058148 kubelet[3027]: I0913 00:08:00.058117 3027 scope.go:117] "RemoveContainer" containerID="d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3" Sep 13 00:08:00.058824 env[1922]: time="2025-09-13T00:08:00.058702765Z" level=error msg="ContainerStatus for \"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\": not found" Sep 13 00:08:00.059190 kubelet[3027]: E0913 00:08:00.059151 3027 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\": not found" containerID="d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3" Sep 13 00:08:00.059341 kubelet[3027]: I0913 00:08:00.059207 3027 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3"} err="failed to get container status \"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2870006a6f8755e7d09aeb12c5ea3f7add29bd12dc6d1955e8994670b1273d3\": not found" Sep 13 00:08:00.059441 kubelet[3027]: I0913 00:08:00.059345 3027 scope.go:117] "RemoveContainer" containerID="880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15" Sep 13 00:08:00.059831 env[1922]: time="2025-09-13T00:08:00.059745791Z" level=error msg="ContainerStatus for \"880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15\": not found" Sep 13 00:08:00.060172 kubelet[3027]: E0913 00:08:00.060130 3027 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15\": not found" containerID="880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15" Sep 13 00:08:00.060324 kubelet[3027]: I0913 00:08:00.060185 3027 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15"} err="failed to get container status \"880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"880882bb759003d72129f7711965d38edd038f75238a249b00954b8a4e416a15\": not found" Sep 13 00:08:00.060324 kubelet[3027]: I0913 00:08:00.060220 3027 scope.go:117] "RemoveContainer" containerID="5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0" Sep 13 00:08:00.060640 env[1922]: time="2025-09-13T00:08:00.060542819Z" level=error msg="ContainerStatus for \"5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0\": not found" Sep 13 00:08:00.060854 kubelet[3027]: E0913 00:08:00.060805 3027 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0\": not found" containerID="5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0" Sep 13 00:08:00.060941 kubelet[3027]: I0913 00:08:00.060853 3027 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0"} err="failed to get container status \"5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a9b1334606187918604c937b8c191b2d41d1382b6878166d7a271e23b7f33f0\": not found" Sep 13 00:08:00.060941 kubelet[3027]: I0913 00:08:00.060886 3027 scope.go:117] "RemoveContainer" containerID="cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76" Sep 13 00:08:00.061425 env[1922]: time="2025-09-13T00:08:00.061328350Z" level=error msg="ContainerStatus for \"cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76\": not 
found" Sep 13 00:08:00.061777 kubelet[3027]: E0913 00:08:00.061738 3027 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76\": not found" containerID="cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76" Sep 13 00:08:00.061897 kubelet[3027]: I0913 00:08:00.061786 3027 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76"} err="failed to get container status \"cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76\": rpc error: code = NotFound desc = an error occurred when try to find container \"cdbc33525075b2994a795fe21a916e6c3fe6f867f8999023c65275571481ee76\": not found" Sep 13 00:08:00.061897 kubelet[3027]: I0913 00:08:00.061820 3027 scope.go:117] "RemoveContainer" containerID="e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c" Sep 13 00:08:00.062171 env[1922]: time="2025-09-13T00:08:00.062081900Z" level=error msg="ContainerStatus for \"e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c\": not found" Sep 13 00:08:00.062399 kubelet[3027]: E0913 00:08:00.062353 3027 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c\": not found" containerID="e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c" Sep 13 00:08:00.062475 kubelet[3027]: I0913 00:08:00.062400 3027 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c"} 
err="failed to get container status \"e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e07c56017095fd76fed8a28b5e1b85f7e2b20ba892ca15a84d7e8adee9b3b78c\": not found" Sep 13 00:08:00.062475 kubelet[3027]: I0913 00:08:00.062435 3027 scope.go:117] "RemoveContainer" containerID="249e237d1f85cdd5fdc8c869bffd3ad4e046b731479183926eb3a30437cf418d" Sep 13 00:08:00.064514 env[1922]: time="2025-09-13T00:08:00.064426408Z" level=info msg="RemoveContainer for \"249e237d1f85cdd5fdc8c869bffd3ad4e046b731479183926eb3a30437cf418d\"" Sep 13 00:08:00.071344 env[1922]: time="2025-09-13T00:08:00.071219804Z" level=info msg="RemoveContainer for \"249e237d1f85cdd5fdc8c869bffd3ad4e046b731479183926eb3a30437cf418d\" returns successfully" Sep 13 00:08:00.441457 kubelet[3027]: I0913 00:08:00.441404 3027 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9" path="/var/lib/kubelet/pods/1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9/volumes" Sep 13 00:08:00.442481 kubelet[3027]: I0913 00:08:00.442446 3027 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="615030d2-cea6-4cb0-b51a-872174570eee" path="/var/lib/kubelet/pods/615030d2-cea6-4cb0-b51a-872174570eee/volumes" Sep 13 00:08:00.637638 sshd[4607]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:00.641883 systemd[1]: sshd@24-172.31.24.134:22-139.178.89.65:44548.service: Deactivated successfully. Sep 13 00:08:00.643861 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:08:00.643888 systemd-logind[1905]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:08:00.646781 systemd-logind[1905]: Removed session 25. Sep 13 00:08:00.661684 systemd[1]: Started sshd@25-172.31.24.134:22-139.178.89.65:54550.service. 
Sep 13 00:08:00.835659 sshd[4776]: Accepted publickey for core from 139.178.89.65 port 54550 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:08:00.838334 sshd[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:00.848996 systemd[1]: Started session-26.scope. Sep 13 00:08:00.850617 systemd-logind[1905]: New session 26 of user core. Sep 13 00:08:02.452982 sshd[4776]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:02.459438 systemd-logind[1905]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:08:02.459818 systemd[1]: sshd@25-172.31.24.134:22-139.178.89.65:54550.service: Deactivated successfully. Sep 13 00:08:02.462664 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:08:02.464111 systemd-logind[1905]: Removed session 26. Sep 13 00:08:02.478423 systemd[1]: Started sshd@26-172.31.24.134:22-139.178.89.65:54552.service. Sep 13 00:08:02.567762 kubelet[3027]: E0913 00:08:02.567716 3027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="615030d2-cea6-4cb0-b51a-872174570eee" containerName="mount-cgroup" Sep 13 00:08:02.568446 kubelet[3027]: E0913 00:08:02.568410 3027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="615030d2-cea6-4cb0-b51a-872174570eee" containerName="mount-bpf-fs" Sep 13 00:08:02.568613 kubelet[3027]: E0913 00:08:02.568585 3027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="615030d2-cea6-4cb0-b51a-872174570eee" containerName="apply-sysctl-overwrites" Sep 13 00:08:02.568727 kubelet[3027]: E0913 00:08:02.568704 3027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9" containerName="cilium-operator" Sep 13 00:08:02.568836 kubelet[3027]: E0913 00:08:02.568815 3027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="615030d2-cea6-4cb0-b51a-872174570eee" containerName="clean-cilium-state" Sep 13 00:08:02.568954 kubelet[3027]: E0913 
00:08:02.568932 3027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="615030d2-cea6-4cb0-b51a-872174570eee" containerName="cilium-agent" Sep 13 00:08:02.569105 kubelet[3027]: I0913 00:08:02.569083 3027 memory_manager.go:354] "RemoveStaleState removing state" podUID="615030d2-cea6-4cb0-b51a-872174570eee" containerName="cilium-agent" Sep 13 00:08:02.569225 kubelet[3027]: I0913 00:08:02.569203 3027 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3633e3-04c6-4a9f-9373-5a89f4f9ffa9" containerName="cilium-operator" Sep 13 00:08:02.570302 kubelet[3027]: I0913 00:08:02.570202 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-clustermesh-secrets\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.570467 kubelet[3027]: I0913 00:08:02.570301 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-host-proc-sys-kernel\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.570467 kubelet[3027]: I0913 00:08:02.570346 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zkq2\" (UniqueName: \"kubernetes.io/projected/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-kube-api-access-8zkq2\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.570467 kubelet[3027]: I0913 00:08:02.570386 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-hostproc\") pod \"cilium-8f8k6\" (UID: 
\"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.570467 kubelet[3027]: I0913 00:08:02.570425 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cni-path\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.570467 kubelet[3027]: I0913 00:08:02.570460 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-host-proc-sys-net\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.570859 kubelet[3027]: I0913 00:08:02.570494 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-hubble-tls\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.570859 kubelet[3027]: I0913 00:08:02.570531 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-ipsec-secrets\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.570859 kubelet[3027]: I0913 00:08:02.570570 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-etc-cni-netd\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.570859 kubelet[3027]: I0913 
00:08:02.570603 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-lib-modules\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.570859 kubelet[3027]: I0913 00:08:02.570638 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-config-path\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.570859 kubelet[3027]: I0913 00:08:02.570674 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-run\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.571225 kubelet[3027]: I0913 00:08:02.570712 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-bpf-maps\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.571225 kubelet[3027]: I0913 00:08:02.570747 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-cgroup\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.571225 kubelet[3027]: I0913 00:08:02.570789 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-xtables-lock\") pod \"cilium-8f8k6\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") " pod="kube-system/cilium-8f8k6" Sep 13 00:08:02.665206 sshd[4787]: Accepted publickey for core from 139.178.89.65 port 54552 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:08:02.667181 sshd[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:02.677093 systemd[1]: Started session-27.scope. Sep 13 00:08:02.680392 systemd-logind[1905]: New session 27 of user core. Sep 13 00:08:02.883437 env[1922]: time="2025-09-13T00:08:02.883361067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8f8k6,Uid:6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8,Namespace:kube-system,Attempt:0,}" Sep 13 00:08:02.922120 env[1922]: time="2025-09-13T00:08:02.921724580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:02.922120 env[1922]: time="2025-09-13T00:08:02.921794832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:02.922120 env[1922]: time="2025-09-13T00:08:02.921820741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:02.922120 env[1922]: time="2025-09-13T00:08:02.922062864Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9774eb5f5fe6bdca44d66bdd54118d59e061d2105af60cef1a6d6f5193db58a8 pid=4808 runtime=io.containerd.runc.v2 Sep 13 00:08:03.035585 sshd[4787]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:03.045189 systemd[1]: sshd@26-172.31.24.134:22-139.178.89.65:54552.service: Deactivated successfully. Sep 13 00:08:03.046774 systemd[1]: session-27.scope: Deactivated successfully. 
Sep 13 00:08:03.060420 systemd[1]: Started sshd@27-172.31.24.134:22-139.178.89.65:54558.service. Sep 13 00:08:03.070470 systemd-logind[1905]: Session 27 logged out. Waiting for processes to exit. Sep 13 00:08:03.076481 systemd-logind[1905]: Removed session 27. Sep 13 00:08:03.096242 env[1922]: time="2025-09-13T00:08:03.096143315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8f8k6,Uid:6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9774eb5f5fe6bdca44d66bdd54118d59e061d2105af60cef1a6d6f5193db58a8\"" Sep 13 00:08:03.111973 env[1922]: time="2025-09-13T00:08:03.111796809Z" level=info msg="CreateContainer within sandbox \"9774eb5f5fe6bdca44d66bdd54118d59e061d2105af60cef1a6d6f5193db58a8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:08:03.143166 env[1922]: time="2025-09-13T00:08:03.142304009Z" level=info msg="CreateContainer within sandbox \"9774eb5f5fe6bdca44d66bdd54118d59e061d2105af60cef1a6d6f5193db58a8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"82caa64ac125bab507de348535276cb7319f0faffe6bf6ed8c89a9e5f3231632\"" Sep 13 00:08:03.144518 env[1922]: time="2025-09-13T00:08:03.144435007Z" level=info msg="StartContainer for \"82caa64ac125bab507de348535276cb7319f0faffe6bf6ed8c89a9e5f3231632\"" Sep 13 00:08:03.254564 env[1922]: time="2025-09-13T00:08:03.254503566Z" level=info msg="StartContainer for \"82caa64ac125bab507de348535276cb7319f0faffe6bf6ed8c89a9e5f3231632\" returns successfully" Sep 13 00:08:03.290551 sshd[4844]: Accepted publickey for core from 139.178.89.65 port 54558 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:08:03.293337 sshd[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:08:03.304727 systemd[1]: Started session-28.scope. Sep 13 00:08:03.306416 systemd-logind[1905]: New session 28 of user core. 
Sep 13 00:08:03.356065 env[1922]: time="2025-09-13T00:08:03.355772002Z" level=info msg="shim disconnected" id=82caa64ac125bab507de348535276cb7319f0faffe6bf6ed8c89a9e5f3231632 Sep 13 00:08:03.356418 env[1922]: time="2025-09-13T00:08:03.356080296Z" level=warning msg="cleaning up after shim disconnected" id=82caa64ac125bab507de348535276cb7319f0faffe6bf6ed8c89a9e5f3231632 namespace=k8s.io Sep 13 00:08:03.356418 env[1922]: time="2025-09-13T00:08:03.356107153Z" level=info msg="cleaning up dead shim" Sep 13 00:08:03.373347 env[1922]: time="2025-09-13T00:08:03.373279786Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4901 runtime=io.containerd.runc.v2\n" Sep 13 00:08:03.640707 kubelet[3027]: E0913 00:08:03.640622 3027 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:08:04.009641 env[1922]: time="2025-09-13T00:08:04.008930217Z" level=info msg="StopPodSandbox for \"9774eb5f5fe6bdca44d66bdd54118d59e061d2105af60cef1a6d6f5193db58a8\"" Sep 13 00:08:04.009641 env[1922]: time="2025-09-13T00:08:04.009054471Z" level=info msg="Container to stop \"82caa64ac125bab507de348535276cb7319f0faffe6bf6ed8c89a9e5f3231632\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:08:04.013695 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9774eb5f5fe6bdca44d66bdd54118d59e061d2105af60cef1a6d6f5193db58a8-shm.mount: Deactivated successfully. Sep 13 00:08:04.075740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9774eb5f5fe6bdca44d66bdd54118d59e061d2105af60cef1a6d6f5193db58a8-rootfs.mount: Deactivated successfully. 
Sep 13 00:08:04.089148 env[1922]: time="2025-09-13T00:08:04.089065365Z" level=info msg="shim disconnected" id=9774eb5f5fe6bdca44d66bdd54118d59e061d2105af60cef1a6d6f5193db58a8
Sep 13 00:08:04.089148 env[1922]: time="2025-09-13T00:08:04.089139060Z" level=warning msg="cleaning up after shim disconnected" id=9774eb5f5fe6bdca44d66bdd54118d59e061d2105af60cef1a6d6f5193db58a8 namespace=k8s.io
Sep 13 00:08:04.089506 env[1922]: time="2025-09-13T00:08:04.089162569Z" level=info msg="cleaning up dead shim"
Sep 13 00:08:04.103953 env[1922]: time="2025-09-13T00:08:04.103886566Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4940 runtime=io.containerd.runc.v2\n"
Sep 13 00:08:04.104786 env[1922]: time="2025-09-13T00:08:04.104732113Z" level=info msg="TearDown network for sandbox \"9774eb5f5fe6bdca44d66bdd54118d59e061d2105af60cef1a6d6f5193db58a8\" successfully"
Sep 13 00:08:04.104943 env[1922]: time="2025-09-13T00:08:04.104785864Z" level=info msg="StopPodSandbox for \"9774eb5f5fe6bdca44d66bdd54118d59e061d2105af60cef1a6d6f5193db58a8\" returns successfully"
Sep 13 00:08:04.305548 kubelet[3027]: I0913 00:08:04.305482 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zkq2\" (UniqueName: \"kubernetes.io/projected/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-kube-api-access-8zkq2\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.305775 kubelet[3027]: I0913 00:08:04.305578 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cni-path\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.305775 kubelet[3027]: I0913 00:08:04.305619 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-hostproc\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.305775 kubelet[3027]: I0913 00:08:04.305680 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-bpf-maps\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.305775 kubelet[3027]: I0913 00:08:04.305754 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-xtables-lock\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.306042 kubelet[3027]: I0913 00:08:04.305821 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-clustermesh-secrets\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.306042 kubelet[3027]: I0913 00:08:04.305860 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-host-proc-sys-kernel\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.306042 kubelet[3027]: I0913 00:08:04.305921 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-lib-modules\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.306042 kubelet[3027]: I0913 00:08:04.305963 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-config-path\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.306042 kubelet[3027]: I0913 00:08:04.306023 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-host-proc-sys-net\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.306403 kubelet[3027]: I0913 00:08:04.306058 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-cgroup\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.306403 kubelet[3027]: I0913 00:08:04.306122 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-ipsec-secrets\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.306403 kubelet[3027]: I0913 00:08:04.306188 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-hubble-tls\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.306403 kubelet[3027]: I0913 00:08:04.306227 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-etc-cni-netd\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.306403 kubelet[3027]: I0913 00:08:04.306307 3027 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-run\") pod \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\" (UID: \"6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8\") "
Sep 13 00:08:04.306702 kubelet[3027]: I0913 00:08:04.306428 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:04.306959 kubelet[3027]: I0913 00:08:04.306925 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:04.307237 kubelet[3027]: I0913 00:08:04.307206 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cni-path" (OuterVolumeSpecName: "cni-path") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:04.307447 kubelet[3027]: I0913 00:08:04.307420 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-hostproc" (OuterVolumeSpecName: "hostproc") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:04.307595 kubelet[3027]: I0913 00:08:04.307568 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:04.307755 kubelet[3027]: I0913 00:08:04.307729 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:04.311618 kubelet[3027]: I0913 00:08:04.311502 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:04.316858 systemd[1]: var-lib-kubelet-pods-6cb3eec8\x2d5f7e\x2d4ca8\x2dbb06\x2d31a351d6ffb8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:08:04.325101 kubelet[3027]: I0913 00:08:04.324072 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:04.325101 kubelet[3027]: I0913 00:08:04.324156 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:04.325101 kubelet[3027]: I0913 00:08:04.324338 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 00:08:04.326437 kubelet[3027]: I0913 00:08:04.326342 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:08:04.341136 kubelet[3027]: I0913 00:08:04.332458 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:08:04.337708 systemd[1]: var-lib-kubelet-pods-6cb3eec8\x2d5f7e\x2d4ca8\x2dbb06\x2d31a351d6ffb8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8zkq2.mount: Deactivated successfully.
Sep 13 00:08:04.342874 kubelet[3027]: I0913 00:08:04.342824 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-kube-api-access-8zkq2" (OuterVolumeSpecName: "kube-api-access-8zkq2") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "kube-api-access-8zkq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:08:04.361608 kubelet[3027]: I0913 00:08:04.361552 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:08:04.362412 kubelet[3027]: I0913 00:08:04.362362 3027 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" (UID: "6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 00:08:04.407457 kubelet[3027]: I0913 00:08:04.407406 3027 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zkq2\" (UniqueName: \"kubernetes.io/projected/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-kube-api-access-8zkq2\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.407681 kubelet[3027]: I0913 00:08:04.407653 3027 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cni-path\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.407899 kubelet[3027]: I0913 00:08:04.407877 3027 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-hostproc\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.408022 kubelet[3027]: I0913 00:08:04.408001 3027 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-xtables-lock\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.408143 kubelet[3027]: I0913 00:08:04.408121 3027 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-clustermesh-secrets\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.408302 kubelet[3027]: I0913 00:08:04.408243 3027 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-host-proc-sys-kernel\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.408434 kubelet[3027]: I0913 00:08:04.408413 3027 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-lib-modules\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.408563 kubelet[3027]: I0913 00:08:04.408541 3027 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-config-path\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.408676 kubelet[3027]: I0913 00:08:04.408655 3027 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-bpf-maps\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.408789 kubelet[3027]: I0913 00:08:04.408768 3027 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-host-proc-sys-net\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.408925 kubelet[3027]: I0913 00:08:04.408903 3027 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-cgroup\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.409053 kubelet[3027]: I0913 00:08:04.409032 3027 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-ipsec-secrets\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.409189 kubelet[3027]: I0913 00:08:04.409166 3027 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-hubble-tls\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.409334 kubelet[3027]: I0913 00:08:04.409312 3027 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-etc-cni-netd\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.409470 kubelet[3027]: I0913 00:08:04.409448 3027 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8-cilium-run\") on node \"ip-172-31-24-134\" DevicePath \"\""
Sep 13 00:08:04.725033 systemd[1]: var-lib-kubelet-pods-6cb3eec8\x2d5f7e\x2d4ca8\x2dbb06\x2d31a351d6ffb8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:08:04.726306 systemd[1]: var-lib-kubelet-pods-6cb3eec8\x2d5f7e\x2d4ca8\x2dbb06\x2d31a351d6ffb8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 00:08:05.012119 kubelet[3027]: I0913 00:08:05.011975 3027 scope.go:117] "RemoveContainer" containerID="82caa64ac125bab507de348535276cb7319f0faffe6bf6ed8c89a9e5f3231632"
Sep 13 00:08:05.020230 env[1922]: time="2025-09-13T00:08:05.018918948Z" level=info msg="RemoveContainer for \"82caa64ac125bab507de348535276cb7319f0faffe6bf6ed8c89a9e5f3231632\""
Sep 13 00:08:05.028134 env[1922]: time="2025-09-13T00:08:05.028074139Z" level=info msg="RemoveContainer for \"82caa64ac125bab507de348535276cb7319f0faffe6bf6ed8c89a9e5f3231632\" returns successfully"
Sep 13 00:08:05.081821 kubelet[3027]: E0913 00:08:05.081750 3027 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" containerName="mount-cgroup"
Sep 13 00:08:05.081981 kubelet[3027]: I0913 00:08:05.081850 3027 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" containerName="mount-cgroup"
Sep 13 00:08:05.113841 kubelet[3027]: I0913 00:08:05.113771 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-hostproc\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.114156 kubelet[3027]: I0913 00:08:05.114106 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-cni-path\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.114394 kubelet[3027]: I0913 00:08:05.114349 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-lib-modules\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.114589 kubelet[3027]: I0913 00:08:05.114552 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-clustermesh-secrets\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.114751 kubelet[3027]: I0913 00:08:05.114715 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-cilium-ipsec-secrets\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.114947 kubelet[3027]: I0913 00:08:05.114903 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-bpf-maps\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.115127 kubelet[3027]: I0913 00:08:05.115080 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-host-proc-sys-net\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.115311 kubelet[3027]: I0913 00:08:05.115286 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbnsk\" (UniqueName: \"kubernetes.io/projected/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-kube-api-access-cbnsk\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.115529 kubelet[3027]: I0913 00:08:05.115456 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-cilium-config-path\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.115732 kubelet[3027]: I0913 00:08:05.115696 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-host-proc-sys-kernel\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.115900 kubelet[3027]: I0913 00:08:05.115876 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-hubble-tls\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.116087 kubelet[3027]: I0913 00:08:05.116051 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-cilium-run\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.116262 kubelet[3027]: I0913 00:08:05.116220 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-cilium-cgroup\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.116457 kubelet[3027]: I0913 00:08:05.116413 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-xtables-lock\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.116645 kubelet[3027]: I0913 00:08:05.116608 3027 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad-etc-cni-netd\") pod \"cilium-m6rh7\" (UID: \"b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad\") " pod="kube-system/cilium-m6rh7"
Sep 13 00:08:05.393690 env[1922]: time="2025-09-13T00:08:05.393617208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6rh7,Uid:b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad,Namespace:kube-system,Attempt:0,}"
Sep 13 00:08:05.433065 env[1922]: time="2025-09-13T00:08:05.432925377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:08:05.433357 env[1922]: time="2025-09-13T00:08:05.433007605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:08:05.433357 env[1922]: time="2025-09-13T00:08:05.433035062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:08:05.433560 env[1922]: time="2025-09-13T00:08:05.433363097Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/41c87d06c180e4843b657d14cc668634fef9ad131167dc1836b8316f4d20d9e8 pid=4970 runtime=io.containerd.runc.v2
Sep 13 00:08:05.532372 env[1922]: time="2025-09-13T00:08:05.532312907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6rh7,Uid:b1bd8a9b-d4e5-4e4e-8e9e-3a0e562f48ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"41c87d06c180e4843b657d14cc668634fef9ad131167dc1836b8316f4d20d9e8\""
Sep 13 00:08:05.541329 env[1922]: time="2025-09-13T00:08:05.541227631Z" level=info msg="CreateContainer within sandbox \"41c87d06c180e4843b657d14cc668634fef9ad131167dc1836b8316f4d20d9e8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:08:05.577360 env[1922]: time="2025-09-13T00:08:05.567764516Z" level=info msg="CreateContainer within sandbox \"41c87d06c180e4843b657d14cc668634fef9ad131167dc1836b8316f4d20d9e8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"42c07933b61172819db29e3c47bc9e5af73f2f6ca3f06bd86326f3c840b00f06\""
Sep 13 00:08:05.577360 env[1922]: time="2025-09-13T00:08:05.568887912Z" level=info msg="StartContainer for \"42c07933b61172819db29e3c47bc9e5af73f2f6ca3f06bd86326f3c840b00f06\""
Sep 13 00:08:05.717750 env[1922]: time="2025-09-13T00:08:05.717607619Z" level=info msg="StartContainer for \"42c07933b61172819db29e3c47bc9e5af73f2f6ca3f06bd86326f3c840b00f06\" returns successfully"
Sep 13 00:08:05.807613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42c07933b61172819db29e3c47bc9e5af73f2f6ca3f06bd86326f3c840b00f06-rootfs.mount: Deactivated successfully.
Sep 13 00:08:05.829075 env[1922]: time="2025-09-13T00:08:05.828958735Z" level=info msg="shim disconnected" id=42c07933b61172819db29e3c47bc9e5af73f2f6ca3f06bd86326f3c840b00f06
Sep 13 00:08:05.829525 env[1922]: time="2025-09-13T00:08:05.829492412Z" level=warning msg="cleaning up after shim disconnected" id=42c07933b61172819db29e3c47bc9e5af73f2f6ca3f06bd86326f3c840b00f06 namespace=k8s.io
Sep 13 00:08:05.829825 env[1922]: time="2025-09-13T00:08:05.829794178Z" level=info msg="cleaning up dead shim"
Sep 13 00:08:05.854693 env[1922]: time="2025-09-13T00:08:05.854636284Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5053 runtime=io.containerd.runc.v2\n"
Sep 13 00:08:06.024883 env[1922]: time="2025-09-13T00:08:06.023940343Z" level=info msg="CreateContainer within sandbox \"41c87d06c180e4843b657d14cc668634fef9ad131167dc1836b8316f4d20d9e8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:08:06.065683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3887685504.mount: Deactivated successfully.
Sep 13 00:08:06.083324 env[1922]: time="2025-09-13T00:08:06.083229260Z" level=info msg="CreateContainer within sandbox \"41c87d06c180e4843b657d14cc668634fef9ad131167dc1836b8316f4d20d9e8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ae0b44244353d6bc106f1f3273ddeaeadcc13890bb13f37f192968c287095d76\""
Sep 13 00:08:06.084634 env[1922]: time="2025-09-13T00:08:06.084584963Z" level=info msg="StartContainer for \"ae0b44244353d6bc106f1f3273ddeaeadcc13890bb13f37f192968c287095d76\""
Sep 13 00:08:06.199732 env[1922]: time="2025-09-13T00:08:06.199672374Z" level=info msg="StartContainer for \"ae0b44244353d6bc106f1f3273ddeaeadcc13890bb13f37f192968c287095d76\" returns successfully"
Sep 13 00:08:06.241000 env[1922]: time="2025-09-13T00:08:06.240913367Z" level=info msg="shim disconnected" id=ae0b44244353d6bc106f1f3273ddeaeadcc13890bb13f37f192968c287095d76
Sep 13 00:08:06.241000 env[1922]: time="2025-09-13T00:08:06.240983211Z" level=warning msg="cleaning up after shim disconnected" id=ae0b44244353d6bc106f1f3273ddeaeadcc13890bb13f37f192968c287095d76 namespace=k8s.io
Sep 13 00:08:06.241388 env[1922]: time="2025-09-13T00:08:06.241005604Z" level=info msg="cleaning up dead shim"
Sep 13 00:08:06.254916 env[1922]: time="2025-09-13T00:08:06.254843790Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5115 runtime=io.containerd.runc.v2\n"
Sep 13 00:08:06.441533 kubelet[3027]: I0913 00:08:06.441362 3027 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8" path="/var/lib/kubelet/pods/6cb3eec8-5f7e-4ca8-bb06-31a351d6ffb8/volumes"
Sep 13 00:08:07.041737 env[1922]: time="2025-09-13T00:08:07.041655786Z" level=info msg="CreateContainer within sandbox \"41c87d06c180e4843b657d14cc668634fef9ad131167dc1836b8316f4d20d9e8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:08:07.094353 env[1922]: time="2025-09-13T00:08:07.094248671Z" level=info msg="CreateContainer within sandbox \"41c87d06c180e4843b657d14cc668634fef9ad131167dc1836b8316f4d20d9e8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd5c9ceec51087dd0db2c2b2093b2c885d56225d9fadc5675193e865680396b1\""
Sep 13 00:08:07.097171 env[1922]: time="2025-09-13T00:08:07.097102670Z" level=info msg="StartContainer for \"fd5c9ceec51087dd0db2c2b2093b2c885d56225d9fadc5675193e865680396b1\""
Sep 13 00:08:07.222970 env[1922]: time="2025-09-13T00:08:07.222903602Z" level=info msg="StartContainer for \"fd5c9ceec51087dd0db2c2b2093b2c885d56225d9fadc5675193e865680396b1\" returns successfully"
Sep 13 00:08:07.269031 env[1922]: time="2025-09-13T00:08:07.268971178Z" level=info msg="shim disconnected" id=fd5c9ceec51087dd0db2c2b2093b2c885d56225d9fadc5675193e865680396b1
Sep 13 00:08:07.269495 env[1922]: time="2025-09-13T00:08:07.269450661Z" level=warning msg="cleaning up after shim disconnected" id=fd5c9ceec51087dd0db2c2b2093b2c885d56225d9fadc5675193e865680396b1 namespace=k8s.io
Sep 13 00:08:07.269653 env[1922]: time="2025-09-13T00:08:07.269624765Z" level=info msg="cleaning up dead shim"
Sep 13 00:08:07.283007 env[1922]: time="2025-09-13T00:08:07.282950303Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5174 runtime=io.containerd.runc.v2\n"
Sep 13 00:08:07.725466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd5c9ceec51087dd0db2c2b2093b2c885d56225d9fadc5675193e865680396b1-rootfs.mount: Deactivated successfully.
Sep 13 00:08:08.051422 env[1922]: time="2025-09-13T00:08:08.051350266Z" level=info msg="CreateContainer within sandbox \"41c87d06c180e4843b657d14cc668634fef9ad131167dc1836b8316f4d20d9e8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:08:08.094036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569227814.mount: Deactivated successfully.
Sep 13 00:08:08.110801 env[1922]: time="2025-09-13T00:08:08.110672069Z" level=info msg="CreateContainer within sandbox \"41c87d06c180e4843b657d14cc668634fef9ad131167dc1836b8316f4d20d9e8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"870019864b6f739d3381114909335c3aae09569aa0dc0633cea5f214a7c735b2\""
Sep 13 00:08:08.112454 env[1922]: time="2025-09-13T00:08:08.111646539Z" level=info msg="StartContainer for \"870019864b6f739d3381114909335c3aae09569aa0dc0633cea5f214a7c735b2\""
Sep 13 00:08:08.222611 env[1922]: time="2025-09-13T00:08:08.222546271Z" level=info msg="StartContainer for \"870019864b6f739d3381114909335c3aae09569aa0dc0633cea5f214a7c735b2\" returns successfully"
Sep 13 00:08:08.260766 env[1922]: time="2025-09-13T00:08:08.260682324Z" level=info msg="shim disconnected" id=870019864b6f739d3381114909335c3aae09569aa0dc0633cea5f214a7c735b2
Sep 13 00:08:08.260766 env[1922]: time="2025-09-13T00:08:08.260758780Z" level=warning msg="cleaning up after shim disconnected" id=870019864b6f739d3381114909335c3aae09569aa0dc0633cea5f214a7c735b2 namespace=k8s.io
Sep 13 00:08:08.261112 env[1922]: time="2025-09-13T00:08:08.260782469Z" level=info msg="cleaning up dead shim"
Sep 13 00:08:08.276079 env[1922]: time="2025-09-13T00:08:08.276020495Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5231 runtime=io.containerd.runc.v2\n"
Sep 13 00:08:08.642108 kubelet[3027]: E0913 00:08:08.642037 3027 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:08:09.053768 env[1922]: time="2025-09-13T00:08:09.053693936Z" level=info msg="CreateContainer within sandbox \"41c87d06c180e4843b657d14cc668634fef9ad131167dc1836b8316f4d20d9e8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:08:09.100356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1111831817.mount: Deactivated successfully.
Sep 13 00:08:09.118057 env[1922]: time="2025-09-13T00:08:09.117978775Z" level=info msg="CreateContainer within sandbox \"41c87d06c180e4843b657d14cc668634fef9ad131167dc1836b8316f4d20d9e8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"92de86372a532ad9587ba46105d71cad70fb2fbd7d8989482e6a25dbcf124641\""
Sep 13 00:08:09.119004 env[1922]: time="2025-09-13T00:08:09.118922705Z" level=info msg="StartContainer for \"92de86372a532ad9587ba46105d71cad70fb2fbd7d8989482e6a25dbcf124641\""
Sep 13 00:08:09.275136 env[1922]: time="2025-09-13T00:08:09.275073522Z" level=info msg="StartContainer for \"92de86372a532ad9587ba46105d71cad70fb2fbd7d8989482e6a25dbcf124641\" returns successfully"
Sep 13 00:08:10.088388 kubelet[3027]: I0913 00:08:10.088306 3027 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m6rh7" podStartSLOduration=5.088252436 podStartE2EDuration="5.088252436s" podCreationTimestamp="2025-09-13 00:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:08:10.08642638 +0000 UTC m=+132.072374657" watchObservedRunningTime="2025-09-13 00:08:10.088252436 +0000 UTC m=+132.074200701"
Sep 13 00:08:10.221302 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Sep 13 00:08:10.745182 kubelet[3027]: I0913 00:08:10.745112 3027 setters.go:600] "Node became not ready" node="ip-172-31-24-134" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:08:10Z","lastTransitionTime":"2025-09-13T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 00:08:14.476499 (udev-worker)[5800]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:08:14.486153 (udev-worker)[5801]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:08:14.492471 systemd-networkd[1591]: lxc_health: Link UP
Sep 13 00:08:14.520675 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:08:14.520205 systemd-networkd[1591]: lxc_health: Gained carrier
Sep 13 00:08:15.652964 systemd-networkd[1591]: lxc_health: Gained IPv6LL
Sep 13 00:08:21.434334 systemd[1]: run-containerd-runc-k8s.io-92de86372a532ad9587ba46105d71cad70fb2fbd7d8989482e6a25dbcf124641-runc.rxwr1E.mount: Deactivated successfully.
Sep 13 00:08:21.597768 sshd[4844]: pam_unix(sshd:session): session closed for user core
Sep 13 00:08:21.604993 systemd[1]: sshd@27-172.31.24.134:22-139.178.89.65:54558.service: Deactivated successfully.
Sep 13 00:08:21.606491 systemd-logind[1905]: Session 28 logged out. Waiting for processes to exit.
Sep 13 00:08:21.607712 systemd[1]: session-28.scope: Deactivated successfully.
Sep 13 00:08:21.609061 systemd-logind[1905]: Removed session 28.
Sep 13 00:08:36.895332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9148a3ec48381f0d1db13222c141d4b52f1cb543c6bcbb3ae55d37204320caf-rootfs.mount: Deactivated successfully.
Sep 13 00:08:36.933400 env[1922]: time="2025-09-13T00:08:36.933333618Z" level=info msg="shim disconnected" id=f9148a3ec48381f0d1db13222c141d4b52f1cb543c6bcbb3ae55d37204320caf
Sep 13 00:08:36.934156 env[1922]: time="2025-09-13T00:08:36.934119372Z" level=warning msg="cleaning up after shim disconnected" id=f9148a3ec48381f0d1db13222c141d4b52f1cb543c6bcbb3ae55d37204320caf namespace=k8s.io
Sep 13 00:08:36.934318 env[1922]: time="2025-09-13T00:08:36.934290778Z" level=info msg="cleaning up dead shim"
Sep 13 00:08:36.947553 env[1922]: time="2025-09-13T00:08:36.947497918Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5913 runtime=io.containerd.runc.v2\n"
Sep 13 00:08:37.133251 kubelet[3027]: I0913 00:08:37.133187 3027 scope.go:117] "RemoveContainer" containerID="f9148a3ec48381f0d1db13222c141d4b52f1cb543c6bcbb3ae55d37204320caf"
Sep 13 00:08:37.136490 env[1922]: time="2025-09-13T00:08:37.136435254Z" level=info msg="CreateContainer within sandbox \"9de39b785988e079e548eef407c1284952d1719110cc8799c794f18e0bbe20a5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 13 00:08:37.173156 env[1922]: time="2025-09-13T00:08:37.173022125Z" level=info msg="CreateContainer within sandbox \"9de39b785988e079e548eef407c1284952d1719110cc8799c794f18e0bbe20a5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6d32690a32799cd19bd8442aee98640a2e8c023cb6591c8d0f41574c18a2a11e\""
Sep 13 00:08:37.174303 env[1922]: time="2025-09-13T00:08:37.174233638Z" level=info msg="StartContainer for \"6d32690a32799cd19bd8442aee98640a2e8c023cb6591c8d0f41574c18a2a11e\""
Sep 13 00:08:37.308788 env[1922]: time="2025-09-13T00:08:37.308674395Z" level=info msg="StartContainer for \"6d32690a32799cd19bd8442aee98640a2e8c023cb6591c8d0f41574c18a2a11e\" returns successfully"
Sep 13 00:08:40.552794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afaf537f515b2182998dbe51e9f2324b4c2b5efe383617528073a166abd99572-rootfs.mount: Deactivated successfully.
Sep 13 00:08:40.567092 env[1922]: time="2025-09-13T00:08:40.567028275Z" level=info msg="shim disconnected" id=afaf537f515b2182998dbe51e9f2324b4c2b5efe383617528073a166abd99572
Sep 13 00:08:40.567934 env[1922]: time="2025-09-13T00:08:40.567859200Z" level=warning msg="cleaning up after shim disconnected" id=afaf537f515b2182998dbe51e9f2324b4c2b5efe383617528073a166abd99572 namespace=k8s.io
Sep 13 00:08:40.568077 env[1922]: time="2025-09-13T00:08:40.568048510Z" level=info msg="cleaning up dead shim"
Sep 13 00:08:40.581519 env[1922]: time="2025-09-13T00:08:40.581464744Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5972 runtime=io.containerd.runc.v2\n"
Sep 13 00:08:41.147832 kubelet[3027]: I0913 00:08:41.147501 3027 scope.go:117] "RemoveContainer" containerID="afaf537f515b2182998dbe51e9f2324b4c2b5efe383617528073a166abd99572"
Sep 13 00:08:41.151477 env[1922]: time="2025-09-13T00:08:41.151416754Z" level=info msg="CreateContainer within sandbox \"f5e05bc054e7ecc909a66acd9b1ae6c3772705b21e7b5d9a3fc382ae69733ab7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 13 00:08:41.180545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906126759.mount: Deactivated successfully.
Sep 13 00:08:41.188953 kubelet[3027]: E0913 00:08:41.187765 3027 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-134?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Sep 13 00:08:41.199871 env[1922]: time="2025-09-13T00:08:41.199806799Z" level=info msg="CreateContainer within sandbox \"f5e05bc054e7ecc909a66acd9b1ae6c3772705b21e7b5d9a3fc382ae69733ab7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5e1ecc89d7c80337cfb56fbbbe001e3cfb1889bc62272182dff1285a228502ee\""
Sep 13 00:08:41.200835 env[1922]: time="2025-09-13T00:08:41.200774004Z" level=info msg="StartContainer for \"5e1ecc89d7c80337cfb56fbbbe001e3cfb1889bc62272182dff1285a228502ee\""
Sep 13 00:08:41.203329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1041354590.mount: Deactivated successfully.
Sep 13 00:08:41.324727 env[1922]: time="2025-09-13T00:08:41.324634584Z" level=info msg="StartContainer for \"5e1ecc89d7c80337cfb56fbbbe001e3cfb1889bc62272182dff1285a228502ee\" returns successfully"