Nov 1 00:23:08.011026 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Nov 1 00:23:08.011063 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Oct 31 23:12:38 -00 2025
Nov 1 00:23:08.011085 kernel: efi: EFI v2.70 by EDK II
Nov 1 00:23:08.011100 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x716fcf98
Nov 1 00:23:08.011114 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:23:08.011127 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Nov 1 00:23:08.011143 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 1 00:23:08.011157 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 1 00:23:08.011171 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Nov 1 00:23:08.011184 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 1 00:23:08.011202 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Nov 1 00:23:08.011216 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Nov 1 00:23:08.011229 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Nov 1 00:23:08.011244 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 1 00:23:08.011260 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Nov 1 00:23:08.011280 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Nov 1 00:23:08.011294 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Nov 1 00:23:08.011337 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Nov 1 00:23:08.011353 kernel: printk: bootconsole [uart0] enabled
Nov 1 00:23:08.011368 kernel: NUMA: Failed to initialise from firmware
Nov 1 00:23:08.011384 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 1 00:23:08.011399 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Nov 1 00:23:08.011414 kernel: Zone ranges:
Nov 1 00:23:08.011428 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Nov 1 00:23:08.011442 kernel:   DMA32    empty
Nov 1 00:23:08.011456 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Nov 1 00:23:08.011476 kernel: Movable zone start for each node
Nov 1 00:23:08.011491 kernel: Early memory node ranges
Nov 1 00:23:08.011505 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Nov 1 00:23:08.011520 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Nov 1 00:23:08.011534 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Nov 1 00:23:08.011548 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Nov 1 00:23:08.011563 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Nov 1 00:23:08.011577 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Nov 1 00:23:08.011591 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Nov 1 00:23:08.011605 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Nov 1 00:23:08.011619 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 1 00:23:08.011634 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Nov 1 00:23:08.011652 kernel: psci: probing for conduit method from ACPI.
Nov 1 00:23:08.011667 kernel: psci: PSCIv1.0 detected in firmware.
Nov 1 00:23:08.011687 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 1 00:23:08.011703 kernel: psci: Trusted OS migration not required
Nov 1 00:23:08.011718 kernel: psci: SMC Calling Convention v1.1
Nov 1 00:23:08.011737 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Nov 1 00:23:08.011752 kernel: ACPI: SRAT not present
Nov 1 00:23:08.011768 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Nov 1 00:23:08.011782 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Nov 1 00:23:08.011798 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 1 00:23:08.011813 kernel: Detected PIPT I-cache on CPU0
Nov 1 00:23:08.011828 kernel: CPU features: detected: GIC system register CPU interface
Nov 1 00:23:08.011843 kernel: CPU features: detected: Spectre-v2
Nov 1 00:23:08.011858 kernel: CPU features: detected: Spectre-v3a
Nov 1 00:23:08.011873 kernel: CPU features: detected: Spectre-BHB
Nov 1 00:23:08.011888 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 1 00:23:08.011907 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 1 00:23:08.011922 kernel: CPU features: detected: ARM erratum 1742098
Nov 1 00:23:08.011937 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Nov 1 00:23:08.011952 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Nov 1 00:23:08.011967 kernel: Policy zone: Normal
Nov 1 00:23:08.011985 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=284392058f112e827cd7c521dcce1be27e1367d0030df494642d12e41e342e29
Nov 1 00:23:08.012001 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 00:23:08.012016 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:23:08.012031 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:23:08.012046 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:23:08.012065 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Nov 1 00:23:08.012081 kernel: Memory: 3824460K/4030464K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 206004K reserved, 0K cma-reserved)
Nov 1 00:23:08.012097 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:23:08.012111 kernel: trace event string verifier disabled
Nov 1 00:23:08.012126 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:23:08.012142 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:23:08.012158 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:23:08.012173 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:23:08.012188 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:23:08.012204 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:23:08.012219 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:23:08.012234 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 1 00:23:08.012253 kernel: GICv3: 96 SPIs implemented
Nov 1 00:23:08.012268 kernel: GICv3: 0 Extended SPIs implemented
Nov 1 00:23:08.012283 kernel: GICv3: Distributor has no Range Selector support
Nov 1 00:23:08.021114 kernel: Root IRQ handler: gic_handle_irq
Nov 1 00:23:08.021163 kernel: GICv3: 16 PPIs implemented
Nov 1 00:23:08.021180 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Nov 1 00:23:08.021196 kernel: ACPI: SRAT not present
Nov 1 00:23:08.021211 kernel: ITS [mem 0x10080000-0x1009ffff]
Nov 1 00:23:08.021227 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Nov 1 00:23:08.021243 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Nov 1 00:23:08.021258 kernel: GICv3: using LPI property table @0x00000004000b0000
Nov 1 00:23:08.021281 kernel: ITS: Using hypervisor restricted LPI range [128]
Nov 1 00:23:08.021297 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Nov 1 00:23:08.022366 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Nov 1 00:23:08.022383 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Nov 1 00:23:08.022398 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Nov 1 00:23:08.022414 kernel: Console: colour dummy device 80x25
Nov 1 00:23:08.022430 kernel: printk: console [tty1] enabled
Nov 1 00:23:08.022445 kernel: ACPI: Core revision 20210730
Nov 1 00:23:08.022461 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Nov 1 00:23:08.022477 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:23:08.022498 kernel: LSM: Security Framework initializing
Nov 1 00:23:08.022513 kernel: SELinux:  Initializing.
Nov 1 00:23:08.022529 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:23:08.022545 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:23:08.022561 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:23:08.022576 kernel: Platform MSI: ITS@0x10080000 domain created
Nov 1 00:23:08.022592 kernel: PCI/MSI: ITS@0x10080000 domain created
Nov 1 00:23:08.022607 kernel: Remapping and enabling EFI services.
Nov 1 00:23:08.022623 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:23:08.022638 kernel: Detected PIPT I-cache on CPU1
Nov 1 00:23:08.022659 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Nov 1 00:23:08.022675 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Nov 1 00:23:08.022691 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Nov 1 00:23:08.022706 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:23:08.022722 kernel: SMP: Total of 2 processors activated.
Nov 1 00:23:08.022738 kernel: CPU features: detected: 32-bit EL0 Support
Nov 1 00:23:08.022753 kernel: CPU features: detected: 32-bit EL1 Support
Nov 1 00:23:08.022768 kernel: CPU features: detected: CRC32 instructions
Nov 1 00:23:08.022784 kernel: CPU: All CPU(s) started at EL1
Nov 1 00:23:08.022804 kernel: alternatives: patching kernel code
Nov 1 00:23:08.022820 kernel: devtmpfs: initialized
Nov 1 00:23:08.022847 kernel: KASLR disabled due to lack of seed
Nov 1 00:23:08.022868 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:23:08.022884 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:23:08.022901 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:23:08.022917 kernel: SMBIOS 3.0.0 present.
Nov 1 00:23:08.022933 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Nov 1 00:23:08.022949 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:23:08.022965 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 1 00:23:08.022982 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 1 00:23:08.023002 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 1 00:23:08.023018 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:23:08.023034 kernel: audit: type=2000 audit(0.298:1): state=initialized audit_enabled=0 res=1
Nov 1 00:23:08.023050 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:23:08.023066 kernel: cpuidle: using governor menu
Nov 1 00:23:08.023086 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 1 00:23:08.023102 kernel: ASID allocator initialised with 32768 entries
Nov 1 00:23:08.023118 kernel: ACPI: bus type PCI registered
Nov 1 00:23:08.023135 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:23:08.023151 kernel: Serial: AMBA PL011 UART driver
Nov 1 00:23:08.023167 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:23:08.023183 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Nov 1 00:23:08.023199 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:23:08.023215 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Nov 1 00:23:08.023234 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:23:08.023251 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 1 00:23:08.023267 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:23:08.023283 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:23:08.024346 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:23:08.024375 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 00:23:08.024392 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 00:23:08.024409 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 00:23:08.024425 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:23:08.024442 kernel: ACPI: Interpreter enabled
Nov 1 00:23:08.024464 kernel: ACPI: Using GIC for interrupt routing
Nov 1 00:23:08.024481 kernel: ACPI: MCFG table detected, 1 entries
Nov 1 00:23:08.024497 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Nov 1 00:23:08.024786 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:23:08.024989 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 1 00:23:08.025182 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 1 00:23:08.025405 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Nov 1 00:23:08.025628 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Nov 1 00:23:08.025653 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Nov 1 00:23:08.025671 kernel: acpiphp: Slot [1] registered
Nov 1 00:23:08.025687 kernel: acpiphp: Slot [2] registered
Nov 1 00:23:08.025703 kernel: acpiphp: Slot [3] registered
Nov 1 00:23:08.025719 kernel: acpiphp: Slot [4] registered
Nov 1 00:23:08.025735 kernel: acpiphp: Slot [5] registered
Nov 1 00:23:08.025751 kernel: acpiphp: Slot [6] registered
Nov 1 00:23:08.025767 kernel: acpiphp: Slot [7] registered
Nov 1 00:23:08.025788 kernel: acpiphp: Slot [8] registered
Nov 1 00:23:08.025804 kernel: acpiphp: Slot [9] registered
Nov 1 00:23:08.025820 kernel: acpiphp: Slot [10] registered
Nov 1 00:23:08.025836 kernel: acpiphp: Slot [11] registered
Nov 1 00:23:08.025852 kernel: acpiphp: Slot [12] registered
Nov 1 00:23:08.025868 kernel: acpiphp: Slot [13] registered
Nov 1 00:23:08.025884 kernel: acpiphp: Slot [14] registered
Nov 1 00:23:08.025900 kernel: acpiphp: Slot [15] registered
Nov 1 00:23:08.025916 kernel: acpiphp: Slot [16] registered
Nov 1 00:23:08.025936 kernel: acpiphp: Slot [17] registered
Nov 1 00:23:08.025952 kernel: acpiphp: Slot [18] registered
Nov 1 00:23:08.025968 kernel: acpiphp: Slot [19] registered
Nov 1 00:23:08.025984 kernel: acpiphp: Slot [20] registered
Nov 1 00:23:08.026000 kernel: acpiphp: Slot [21] registered
Nov 1 00:23:08.026016 kernel: acpiphp: Slot [22] registered
Nov 1 00:23:08.026032 kernel: acpiphp: Slot [23] registered
Nov 1 00:23:08.026048 kernel: acpiphp: Slot [24] registered
Nov 1 00:23:08.026064 kernel: acpiphp: Slot [25] registered
Nov 1 00:23:08.026080 kernel: acpiphp: Slot [26] registered
Nov 1 00:23:08.026100 kernel: acpiphp: Slot [27] registered
Nov 1 00:23:08.026116 kernel: acpiphp: Slot [28] registered
Nov 1 00:23:08.026132 kernel: acpiphp: Slot [29] registered
Nov 1 00:23:08.026148 kernel: acpiphp: Slot [30] registered
Nov 1 00:23:08.026163 kernel: acpiphp: Slot [31] registered
Nov 1 00:23:08.026179 kernel: PCI host bridge to bus 0000:00
Nov 1 00:23:08.026420 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Nov 1 00:23:08.026602 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Nov 1 00:23:08.026783 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Nov 1 00:23:08.026957 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Nov 1 00:23:08.027173 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Nov 1 00:23:08.030486 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Nov 1 00:23:08.030707 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Nov 1 00:23:08.030918 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Nov 1 00:23:08.031126 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Nov 1 00:23:08.042220 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 1 00:23:08.042559 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Nov 1 00:23:08.042764 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Nov 1 00:23:08.042963 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Nov 1 00:23:08.043157 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Nov 1 00:23:08.053895 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 1 00:23:08.054145 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Nov 1 00:23:08.054727 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Nov 1 00:23:08.054956 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Nov 1 00:23:08.055164 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Nov 1 00:23:08.055401 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Nov 1 00:23:08.055591 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Nov 1 00:23:08.055779 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Nov 1 00:23:08.055963 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Nov 1 00:23:08.055986 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 1 00:23:08.056003 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 1 00:23:08.056020 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 1 00:23:08.056037 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 1 00:23:08.056053 kernel: iommu: Default domain type: Translated
Nov 1 00:23:08.056070 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 1 00:23:08.056086 kernel: vgaarb: loaded
Nov 1 00:23:08.056102 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 00:23:08.056124 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 00:23:08.056141 kernel: PTP clock support registered
Nov 1 00:23:08.056157 kernel: Registered efivars operations
Nov 1 00:23:08.056173 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 1 00:23:08.056189 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:23:08.056206 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:23:08.056222 kernel: pnp: PnP ACPI init
Nov 1 00:23:08.056479 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Nov 1 00:23:08.056511 kernel: pnp: PnP ACPI: found 1 devices
Nov 1 00:23:08.056529 kernel: NET: Registered PF_INET protocol family
Nov 1 00:23:08.056545 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:23:08.056562 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 00:23:08.056578 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:23:08.056595 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:23:08.056611 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Nov 1 00:23:08.056628 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 00:23:08.056644 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:23:08.056665 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:23:08.056681 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:23:08.056697 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:23:08.056714 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Nov 1 00:23:08.056730 kernel: kvm [1]: HYP mode not available
Nov 1 00:23:08.056746 kernel: Initialise system trusted keyrings
Nov 1 00:23:08.056763 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 00:23:08.056780 kernel: Key type asymmetric registered
Nov 1 00:23:08.056796 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:23:08.056816 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 00:23:08.056832 kernel: io scheduler mq-deadline registered
Nov 1 00:23:08.056848 kernel: io scheduler kyber registered
Nov 1 00:23:08.056864 kernel: io scheduler bfq registered
Nov 1 00:23:08.057072 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Nov 1 00:23:08.057099 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 1 00:23:08.057116 kernel: ACPI: button: Power Button [PWRB]
Nov 1 00:23:08.057133 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Nov 1 00:23:08.057149 kernel: ACPI: button: Sleep Button [SLPB]
Nov 1 00:23:08.057170 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:23:08.057188 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Nov 1 00:23:08.060584 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Nov 1 00:23:08.060630 kernel: printk: console [ttyS0] disabled
Nov 1 00:23:08.060649 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Nov 1 00:23:08.060667 kernel: printk: console [ttyS0] enabled
Nov 1 00:23:08.060683 kernel: printk: bootconsole [uart0] disabled
Nov 1 00:23:08.060700 kernel: thunder_xcv, ver 1.0
Nov 1 00:23:08.060716 kernel: thunder_bgx, ver 1.0
Nov 1 00:23:08.060743 kernel: nicpf, ver 1.0
Nov 1 00:23:08.060759 kernel: nicvf, ver 1.0
Nov 1 00:23:08.060991 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 1 00:23:08.061182 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-01T00:23:07 UTC (1761956587)
Nov 1 00:23:08.061208 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 1 00:23:08.061225 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:23:08.061241 kernel: Segment Routing with IPv6
Nov 1 00:23:08.061258 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:23:08.061280 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:23:08.061296 kernel: Key type dns_resolver registered
Nov 1 00:23:08.061341 kernel: registered taskstats version 1
Nov 1 00:23:08.061358 kernel: Loading compiled-in X.509 certificates
Nov 1 00:23:08.061376 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 4aa5071b9a6f96878595e36d4bd5862a671c915d'
Nov 1 00:23:08.061392 kernel: Key type .fscrypt registered
Nov 1 00:23:08.061409 kernel: Key type fscrypt-provisioning registered
Nov 1 00:23:08.061425 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:23:08.061462 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:23:08.061489 kernel: ima: No architecture policies found
Nov 1 00:23:08.061505 kernel: clk: Disabling unused clocks
Nov 1 00:23:08.061522 kernel: Freeing unused kernel memory: 36416K
Nov 1 00:23:08.061538 kernel: Run /init as init process
Nov 1 00:23:08.061554 kernel:   with arguments:
Nov 1 00:23:08.061570 kernel:     /init
Nov 1 00:23:08.061586 kernel:   with environment:
Nov 1 00:23:08.061602 kernel:     HOME=/
Nov 1 00:23:08.061618 kernel:     TERM=linux
Nov 1 00:23:08.061637 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 00:23:08.061659 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:23:08.061681 systemd[1]: Detected virtualization amazon.
Nov 1 00:23:08.061699 systemd[1]: Detected architecture arm64.
Nov 1 00:23:08.061716 systemd[1]: Running in initrd.
Nov 1 00:23:08.061733 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:23:08.061750 systemd[1]: Hostname set to .
Nov 1 00:23:08.061772 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:23:08.061789 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:23:08.061807 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:23:08.061824 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:23:08.061841 systemd[1]: Reached target paths.target.
Nov 1 00:23:08.061858 systemd[1]: Reached target slices.target.
Nov 1 00:23:08.061875 systemd[1]: Reached target swap.target.
Nov 1 00:23:08.061892 systemd[1]: Reached target timers.target.
Nov 1 00:23:08.061914 systemd[1]: Listening on iscsid.socket.
Nov 1 00:23:08.061931 systemd[1]: Listening on iscsiuio.socket.
Nov 1 00:23:08.061949 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:23:08.061966 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:23:08.061983 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:23:08.062001 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:23:08.062018 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:23:08.062035 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:23:08.062056 systemd[1]: Reached target sockets.target.
Nov 1 00:23:08.062074 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:23:08.062091 systemd[1]: Finished network-cleanup.service.
Nov 1 00:23:08.062108 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:23:08.062125 systemd[1]: Starting systemd-journald.service...
Nov 1 00:23:08.062143 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:23:08.062161 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:23:08.062178 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 00:23:08.062195 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:23:08.062217 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:23:08.062234 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 00:23:08.062252 kernel: audit: type=1130 audit(1761956588.011:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.062270 systemd[1]: Starting dracut-cmdline-ask.service...
Nov 1 00:23:08.062287 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:23:08.062326 systemd-journald[310]: Journal started
Nov 1 00:23:08.062429 systemd-journald[310]: Runtime Journal (/run/log/journal/ec2530a4887e05b386cdac81e78a659f) is 8.0M, max 75.4M, 67.4M free.
Nov 1 00:23:08.078937 systemd[1]: Started systemd-journald.service.
Nov 1 00:23:08.078992 kernel: audit: type=1130 audit(1761956588.068:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:07.990368 systemd-modules-load[311]: Inserted module 'overlay'
Nov 1 00:23:08.078577 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:23:08.098249 kernel: audit: type=1130 audit(1761956588.068:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.105884 systemd-resolved[312]: Positive Trust Anchors:
Nov 1 00:23:08.105919 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:23:08.105974 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:23:08.115340 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:23:08.142231 systemd-modules-load[311]: Inserted module 'br_netfilter'
Nov 1 00:23:08.142439 kernel: Bridge firewalling registered
Nov 1 00:23:08.161904 systemd[1]: Finished dracut-cmdline-ask.service.
Nov 1 00:23:08.182349 kernel: audit: type=1130 audit(1761956588.164:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.167710 systemd[1]: Starting dracut-cmdline.service...
Nov 1 00:23:08.187483 kernel: SCSI subsystem initialized
Nov 1 00:23:08.208424 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:23:08.208510 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:23:08.211876 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Nov 1 00:23:08.213055 dracut-cmdline[329]: dracut-dracut-053
Nov 1 00:23:08.218085 systemd-modules-load[311]: Inserted module 'dm_multipath'
Nov 1 00:23:08.220468 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:23:08.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.238757 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:23:08.241034 kernel: audit: type=1130 audit(1761956588.223:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.243322 dracut-cmdline[329]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=284392058f112e827cd7c521dcce1be27e1367d0030df494642d12e41e342e29
Nov 1 00:23:08.279497 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:23:08.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.292353 kernel: audit: type=1130 audit(1761956588.280:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.376367 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:23:08.398340 kernel: iscsi: registered transport (tcp)
Nov 1 00:23:08.426865 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:23:08.426934 kernel: QLogic iSCSI HBA Driver
Nov 1 00:23:08.634339 kernel: random: crng init done
Nov 1 00:23:08.634590 systemd-resolved[312]: Defaulting to hostname 'linux'.
Nov 1 00:23:08.640832 systemd[1]: Started systemd-resolved.service.
Nov 1 00:23:08.659562 kernel: audit: type=1130 audit(1761956588.639:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.641112 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:23:08.664934 systemd[1]: Finished dracut-cmdline.service.
Nov 1 00:23:08.682512 kernel: audit: type=1130 audit(1761956588.663:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:08.670530 systemd[1]: Starting dracut-pre-udev.service...
Nov 1 00:23:08.743354 kernel: raid6: neonx8 gen() 6360 MB/s Nov 1 00:23:08.761345 kernel: raid6: neonx8 xor() 4746 MB/s Nov 1 00:23:08.779341 kernel: raid6: neonx4 gen() 6417 MB/s Nov 1 00:23:08.797337 kernel: raid6: neonx4 xor() 4976 MB/s Nov 1 00:23:08.815342 kernel: raid6: neonx2 gen() 5723 MB/s Nov 1 00:23:08.833342 kernel: raid6: neonx2 xor() 4549 MB/s Nov 1 00:23:08.851336 kernel: raid6: neonx1 gen() 4425 MB/s Nov 1 00:23:08.869345 kernel: raid6: neonx1 xor() 3674 MB/s Nov 1 00:23:08.887343 kernel: raid6: int64x8 gen() 3386 MB/s Nov 1 00:23:08.905345 kernel: raid6: int64x8 xor() 2083 MB/s Nov 1 00:23:08.923341 kernel: raid6: int64x4 gen() 3736 MB/s Nov 1 00:23:08.941341 kernel: raid6: int64x4 xor() 2188 MB/s Nov 1 00:23:08.959344 kernel: raid6: int64x2 gen() 3563 MB/s Nov 1 00:23:08.977340 kernel: raid6: int64x2 xor() 1942 MB/s Nov 1 00:23:08.995340 kernel: raid6: int64x1 gen() 2740 MB/s Nov 1 00:23:09.014950 kernel: raid6: int64x1 xor() 1442 MB/s Nov 1 00:23:09.015001 kernel: raid6: using algorithm neonx4 gen() 6417 MB/s Nov 1 00:23:09.015026 kernel: raid6: .... xor() 4976 MB/s, rmw enabled Nov 1 00:23:09.016810 kernel: raid6: using neon recovery algorithm Nov 1 00:23:09.037503 kernel: xor: measuring software checksum speed Nov 1 00:23:09.037569 kernel: 8regs : 9299 MB/sec Nov 1 00:23:09.039426 kernel: 32regs : 11100 MB/sec Nov 1 00:23:09.041416 kernel: arm64_neon : 9210 MB/sec Nov 1 00:23:09.041489 kernel: xor: using function: 32regs (11100 MB/sec) Nov 1 00:23:09.141343 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Nov 1 00:23:09.158437 systemd[1]: Finished dracut-pre-udev.service. Nov 1 00:23:09.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:09.170000 audit: BPF prog-id=7 op=LOAD Nov 1 00:23:09.170000 audit: BPF prog-id=8 op=LOAD Nov 1 00:23:09.172348 kernel: audit: type=1130 audit(1761956589.163:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:09.173032 systemd[1]: Starting systemd-udevd.service... Nov 1 00:23:09.203120 systemd-udevd[510]: Using default interface naming scheme 'v252'. Nov 1 00:23:09.212436 systemd[1]: Started systemd-udevd.service. Nov 1 00:23:09.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:09.225773 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 00:23:09.259786 dracut-pre-trigger[524]: rd.md=0: removing MD RAID activation Nov 1 00:23:09.322775 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 00:23:09.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:09.328293 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:23:09.435876 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:23:09.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:09.576785 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 1 00:23:09.576859 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Nov 1 00:23:09.600847 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Nov 1 00:23:09.600879 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 1 00:23:09.601134 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 1 00:23:09.601388 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 1 00:23:09.601628 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 1 00:23:09.601842 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:bf:6a:75:1b:b3 Nov 1 00:23:09.602051 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:23:09.603478 kernel: GPT:9289727 != 33554431 Nov 1 00:23:09.605772 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:23:09.607163 kernel: GPT:9289727 != 33554431 Nov 1 00:23:09.609189 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:23:09.610854 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 1 00:23:09.615555 (udev-worker)[574]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:23:09.791528 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:23:09.809340 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (568) Nov 1 00:23:09.858489 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:23:09.913792 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:23:09.933370 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:23:09.940178 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:23:09.956854 systemd[1]: Starting disk-uuid.service... Nov 1 00:23:09.969193 disk-uuid[673]: Primary Header is updated. Nov 1 00:23:09.969193 disk-uuid[673]: Secondary Entries is updated. 
Nov 1 00:23:09.969193 disk-uuid[673]: Secondary Header is updated. Nov 1 00:23:09.977521 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 1 00:23:11.008337 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 1 00:23:11.009026 disk-uuid[674]: The operation has completed successfully. Nov 1 00:23:11.175255 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:23:11.175848 systemd[1]: Finished disk-uuid.service. Nov 1 00:23:11.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:11.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:11.203205 systemd[1]: Starting verity-setup.service... Nov 1 00:23:11.236343 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Nov 1 00:23:11.317155 systemd[1]: Found device dev-mapper-usr.device. Nov 1 00:23:11.321976 systemd[1]: Mounting sysusr-usr.mount... Nov 1 00:23:11.329002 systemd[1]: Finished verity-setup.service. Nov 1 00:23:11.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:11.422342 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 00:23:11.423662 systemd[1]: Mounted sysusr-usr.mount. Nov 1 00:23:11.424000 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 00:23:11.425329 systemd[1]: Starting ignition-setup.service... Nov 1 00:23:11.429682 systemd[1]: Starting parse-ip-for-networkd.service... 
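The earlier GPT warnings ("GPT:9289727 != 33554431") mean the backup GPT header was found at LBA 9289727 while the last sector of the (grown) disk is 33554431; the disk-uuid entries above then rewrite the headers, which is why the secondary header reports "updated". A small sketch of the invariant being checked — the two numbers come from the log, but the check itself is illustrative (on a real disk a tool such as sgdisk or GNU Parted relocates the backup header):

```python
# A valid GPT keeps its backup header in the disk's very last LBA.
# Values from the boot log: backup header found at 9289727, but the
# resized disk's last LBA is 33554431.
backup_header_lba = 9289727
last_lba = 33554431  # total_sectors - 1

def gpt_backup_ok(backup_lba: int, disk_last_lba: int) -> bool:
    return backup_lba == disk_last_lba

print(gpt_backup_ok(backup_header_lba, last_lba))  # False: header must move
print(gpt_backup_ok(last_lba, last_lba))           # True after relocation
```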
Nov 1 00:23:11.468195 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 1 00:23:11.468275 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 1 00:23:11.470961 kernel: BTRFS info (device nvme0n1p6): has skinny extents Nov 1 00:23:11.564961 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:23:11.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:11.571000 audit: BPF prog-id=9 op=LOAD Nov 1 00:23:11.574252 systemd[1]: Starting systemd-networkd.service... Nov 1 00:23:11.619955 systemd-networkd[1021]: lo: Link UP Nov 1 00:23:11.619976 systemd-networkd[1021]: lo: Gained carrier Nov 1 00:23:11.620972 systemd-networkd[1021]: Enumeration completed Nov 1 00:23:11.621486 systemd-networkd[1021]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:23:11.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:11.640388 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 1 00:23:11.622704 systemd[1]: Started systemd-networkd.service. Nov 1 00:23:11.629937 systemd[1]: Reached target network.target. Nov 1 00:23:11.631519 systemd[1]: Starting iscsiuio.service... Nov 1 00:23:11.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:11.645349 systemd-networkd[1021]: eth0: Link UP Nov 1 00:23:11.680569 iscsid[1029]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:23:11.680569 iscsid[1029]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Nov 1 00:23:11.680569 iscsid[1029]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Nov 1 00:23:11.680569 iscsid[1029]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:23:11.680569 iscsid[1029]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 00:23:11.680569 iscsid[1029]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:23:11.680569 iscsid[1029]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:23:11.763498 kernel: kauditd_printk_skb: 12 callbacks suppressed Nov 1 00:23:11.763550 kernel: audit: type=1130 audit(1761956591.686:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:11.763578 kernel: audit: type=1130 audit(1761956591.734:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:11.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:11.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:11.645358 systemd-networkd[1021]: eth0: Gained carrier Nov 1 00:23:11.652689 systemd[1]: Started iscsiuio.service. Nov 1 00:23:11.661737 systemd[1]: Starting iscsid.service... Nov 1 00:23:11.680495 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:23:11.680995 systemd[1]: Started iscsid.service. Nov 1 00:23:11.682952 systemd-networkd[1021]: eth0: DHCPv4 address 172.31.20.188/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 1 00:23:11.705714 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:23:11.733544 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:23:11.735897 systemd[1]: Reached target remote-fs-pre.target. Nov 1 00:23:11.745435 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:23:11.747748 systemd[1]: Reached target remote-fs.target. Nov 1 00:23:11.760774 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:23:11.792527 systemd[1]: Finished ignition-setup.service. Nov 1 00:23:11.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:11.807246 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 00:23:11.809848 kernel: audit: type=1130 audit(1761956591.793:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:11.819978 systemd[1]: Finished dracut-pre-mount.service. Nov 1 00:23:11.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:11.832355 kernel: audit: type=1130 audit(1761956591.822:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:12.971229 ignition[1044]: Ignition 2.14.0 Nov 1 00:23:12.973187 ignition[1044]: Stage: fetch-offline Nov 1 00:23:12.973771 ignition[1044]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:23:12.973833 ignition[1044]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:23:12.995676 ignition[1044]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:23:12.998910 ignition[1044]: Ignition finished successfully Nov 1 00:23:13.002723 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:23:13.019650 kernel: audit: type=1130 audit(1761956593.003:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:13.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:13.006575 systemd[1]: Starting ignition-fetch.service... 
Nov 1 00:23:13.033849 ignition[1054]: Ignition 2.14.0 Nov 1 00:23:13.033878 ignition[1054]: Stage: fetch Nov 1 00:23:13.034166 ignition[1054]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:23:13.034219 ignition[1054]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:23:13.050640 ignition[1054]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:23:13.053769 ignition[1054]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:23:13.067382 ignition[1054]: INFO : PUT result: OK Nov 1 00:23:13.073515 ignition[1054]: DEBUG : parsed url from cmdline: "" Nov 1 00:23:13.075649 ignition[1054]: INFO : no config URL provided Nov 1 00:23:13.075649 ignition[1054]: INFO : reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:23:13.075649 ignition[1054]: INFO : no config at "/usr/lib/ignition/user.ign" Nov 1 00:23:13.075649 ignition[1054]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:23:13.087071 ignition[1054]: INFO : PUT result: OK Nov 1 00:23:13.089130 ignition[1054]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 1 00:23:13.092738 ignition[1054]: INFO : GET result: OK Nov 1 00:23:13.095089 ignition[1054]: DEBUG : parsing config with SHA512: 48a0d460b7cc34418da174e9672aa710f5142baa9a47a26fac7288211268d169e5012494d4e79f7888727578a8eed9a9848d969e94d663f15c4873754e568f0c Nov 1 00:23:13.106411 unknown[1054]: fetched base config from "system" Nov 1 00:23:13.106655 unknown[1054]: fetched base config from "system" Nov 1 00:23:13.107878 ignition[1054]: fetch: fetch complete Nov 1 00:23:13.106671 unknown[1054]: fetched user config from "aws" Nov 1 00:23:13.107892 ignition[1054]: fetch: fetch passed Nov 1 00:23:13.107979 ignition[1054]: Ignition finished successfully Nov 1 00:23:13.120342 systemd[1]: Finished ignition-fetch.service. 
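The "PUT http://169.254.169.254/latest/api/token" followed by "GET …/user-data" sequence above is the IMDSv2 session flow: Ignition first fetches a short-lived session token with a PUT, then presents it on every metadata GET. A sketch of how the two requests are shaped — they are constructed but deliberately not sent here, and the header names are the documented IMDSv2 ones:

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: request a session token (PUT with a TTL header).
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)

# Step 2: present the token on subsequent metadata reads.
def metadata_request(path: str, token: str) -> urllib.request.Request:
    return urllib.request.Request(
        f"{IMDS}{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )

user_data_req = metadata_request("/2019-10-01/user-data", "example-token")
print(token_req.get_method())      # PUT
print(user_data_req.get_method())  # GET
```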
Nov 1 00:23:13.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:13.125700 systemd[1]: Starting ignition-kargs.service... Nov 1 00:23:13.141762 kernel: audit: type=1130 audit(1761956593.122:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:13.148822 ignition[1060]: Ignition 2.14.0 Nov 1 00:23:13.148850 ignition[1060]: Stage: kargs Nov 1 00:23:13.149155 ignition[1060]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:23:13.149219 ignition[1060]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:23:13.166908 ignition[1060]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:23:13.170415 ignition[1060]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:23:13.174187 ignition[1060]: INFO : PUT result: OK Nov 1 00:23:13.179096 ignition[1060]: kargs: kargs passed Nov 1 00:23:13.179209 ignition[1060]: Ignition finished successfully Nov 1 00:23:13.184460 systemd[1]: Finished ignition-kargs.service. Nov 1 00:23:13.190003 systemd[1]: Starting ignition-disks.service... Nov 1 00:23:13.204812 kernel: audit: type=1130 audit(1761956593.183:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:13.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:13.212405 ignition[1066]: Ignition 2.14.0 Nov 1 00:23:13.212433 ignition[1066]: Stage: disks Nov 1 00:23:13.212730 ignition[1066]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:23:13.212783 ignition[1066]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:23:13.227779 ignition[1066]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:23:13.230963 ignition[1066]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:23:13.234711 ignition[1066]: INFO : PUT result: OK Nov 1 00:23:13.240028 ignition[1066]: disks: disks passed Nov 1 00:23:13.240160 ignition[1066]: Ignition finished successfully Nov 1 00:23:13.245214 systemd[1]: Finished ignition-disks.service. Nov 1 00:23:13.273554 kernel: audit: type=1130 audit(1761956593.246:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:13.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:13.247621 systemd[1]: Reached target initrd-root-device.target. Nov 1 00:23:13.250031 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:23:13.252261 systemd[1]: Reached target local-fs.target. Nov 1 00:23:13.254363 systemd[1]: Reached target sysinit.target. Nov 1 00:23:13.256472 systemd[1]: Reached target basic.target. Nov 1 00:23:13.269196 systemd[1]: Starting systemd-fsck-root.service... Nov 1 00:23:13.375723 systemd-fsck[1074]: ROOT: clean, 637/553520 files, 56031/553472 blocks Nov 1 00:23:13.381003 systemd[1]: Finished systemd-fsck-root.service. 
Nov 1 00:23:13.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:13.384774 systemd[1]: Mounting sysroot.mount... Nov 1 00:23:13.400696 kernel: audit: type=1130 audit(1761956593.382:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:13.423343 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 00:23:13.424515 systemd[1]: Mounted sysroot.mount. Nov 1 00:23:13.428652 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:23:13.477855 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:23:13.480530 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Nov 1 00:23:13.480628 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:23:13.480689 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:23:13.501932 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:23:13.543486 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:23:13.549175 systemd[1]: Starting initrd-setup-root.service... 
Nov 1 00:23:13.578327 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1091) Nov 1 00:23:13.584701 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 1 00:23:13.584749 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 1 00:23:13.584913 systemd-networkd[1021]: eth0: Gained IPv6LL Nov 1 00:23:13.589688 kernel: BTRFS info (device nvme0n1p6): has skinny extents Nov 1 00:23:13.601323 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 1 00:23:13.604102 initrd-setup-root[1096]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:23:13.621988 initrd-setup-root[1122]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:23:13.630628 initrd-setup-root[1130]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:23:13.639990 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:23:13.647844 initrd-setup-root[1138]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:23:14.217587 systemd[1]: Finished initrd-setup-root.service. Nov 1 00:23:14.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:14.234343 systemd[1]: Starting ignition-mount.service... Nov 1 00:23:14.240746 kernel: audit: type=1130 audit(1761956594.216:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:14.242697 systemd[1]: Starting sysroot-boot.service... Nov 1 00:23:14.249840 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Nov 1 00:23:14.250002 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
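The "cut: /sysroot/etc/passwd: No such file or directory" lines above are initrd-setup-root probing account databases that do not yet exist on the fresh root; the tool in play is plain `cut -d:` over colon-separated passwd-style records. Splitting on ":" in Python shows the same fields — the sample record below is made up for illustration:

```python
# A passwd record is colon-separated: name:pw:uid:gid:gecos:home:shell.
# initrd-setup-root slices such records with cut -d:; splitting on ":"
# extracts the same fields. The record itself is a stand-in.
record = "core:x:500:500:CoreOS Admin:/home/core:/bin/bash"
fields = record.split(":")

print(fields[0])              # cut -d: -f1   -> user name
print(":".join(fields[2:4]))  # cut -d: -f3,4 -> uid:gid
```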
Nov 1 00:23:14.278680 ignition[1156]: INFO : Ignition 2.14.0 Nov 1 00:23:14.283515 ignition[1156]: INFO : Stage: mount Nov 1 00:23:14.283515 ignition[1156]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:23:14.283515 ignition[1156]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:23:14.313234 ignition[1156]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:23:14.316724 ignition[1156]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:23:14.323029 systemd[1]: Finished sysroot-boot.service. Nov 1 00:23:14.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:14.327256 ignition[1156]: INFO : PUT result: OK Nov 1 00:23:14.333011 ignition[1156]: INFO : mount: mount passed Nov 1 00:23:14.335423 ignition[1156]: INFO : Ignition finished successfully Nov 1 00:23:14.337878 systemd[1]: Finished ignition-mount.service. Nov 1 00:23:14.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:14.346435 systemd[1]: Starting ignition-files.service... Nov 1 00:23:14.359747 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
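Each Ignition stage above logs "parsing config with SHA512: 6629…" — the digest of the base config it just read, so an identical config is recognizable across stages at a glance. A sketch of producing such a digest with the standard library; the payload here is a stand-in, not the real base.ign:

```python
import hashlib

# Stand-in for the bytes of a config file such as base.ign.
config_bytes = b'{"ignition": {"version": "2.14.0"}}\n'

digest = hashlib.sha512(config_bytes).hexdigest()
print(len(digest))  # 128 hex characters, like the digests in the log
print(digest[:16])  # a short prefix is enough for eyeball comparison
```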
Nov 1 00:23:14.385329 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1166) Nov 1 00:23:14.391686 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 1 00:23:14.391737 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 1 00:23:14.393938 kernel: BTRFS info (device nvme0n1p6): has skinny extents Nov 1 00:23:14.425341 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 1 00:23:14.432440 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:23:14.448912 ignition[1185]: INFO : Ignition 2.14.0 Nov 1 00:23:14.448912 ignition[1185]: INFO : Stage: files Nov 1 00:23:14.455023 ignition[1185]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:23:14.455023 ignition[1185]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:23:14.471807 ignition[1185]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:23:14.475407 ignition[1185]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:23:14.479474 ignition[1185]: INFO : PUT result: OK Nov 1 00:23:14.485441 ignition[1185]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:23:14.489604 ignition[1185]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:23:14.489604 ignition[1185]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:23:14.690936 ignition[1185]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:23:14.697792 ignition[1185]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:23:14.702752 unknown[1185]: wrote ssh authorized keys file for user: core Nov 1 00:23:14.705630 ignition[1185]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" 
Nov 1 00:23:14.710437 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 1 00:23:14.715469 ignition[1185]: INFO : GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 1 00:23:14.826391 ignition[1185]: INFO : GET result: OK Nov 1 00:23:14.984164 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 1 00:23:14.988724 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:23:14.988724 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:23:14.988724 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Nov 1 00:23:14.988724 ignition[1185]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:23:15.014448 ignition[1185]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1231733091" Nov 1 00:23:15.018023 ignition[1185]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1231733091": device or resource busy Nov 1 00:23:15.018023 ignition[1185]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1231733091", trying btrfs: device or resource busy Nov 1 00:23:15.018023 ignition[1185]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1231733091" Nov 1 00:23:15.030951 ignition[1185]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1231733091" Nov 1 00:23:15.101096 ignition[1185]: INFO : op(3): [started] unmounting "/mnt/oem1231733091" Nov 1 00:23:15.105530 ignition[1185]: INFO : op(3): [finished] unmounting "/mnt/oem1231733091" Nov 1 00:23:15.108556 ignition[1185]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Nov 1 00:23:15.108556 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:23:15.108556 ignition[1185]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Nov 1 00:23:15.124406 systemd[1]: mnt-oem1231733091.mount: Deactivated successfully. Nov 1 00:23:15.314405 ignition[1185]: INFO : GET result: OK Nov 1 00:23:15.489522 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:23:15.494156 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:23:15.498969 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:23:15.503493 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:23:15.507876 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:23:15.507876 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:23:15.507876 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:23:15.507876 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:23:15.507876 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:23:15.507876 ignition[1185]: INFO : files: createFilesystemsFiles: 
createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 1 00:23:15.507876 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 1 00:23:15.543220 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Nov 1 00:23:15.543220 ignition[1185]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:23:15.560095 ignition[1185]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4012254430" Nov 1 00:23:15.564027 ignition[1185]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4012254430": device or resource busy Nov 1 00:23:15.564027 ignition[1185]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4012254430", trying btrfs: device or resource busy Nov 1 00:23:15.564027 ignition[1185]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4012254430" Nov 1 00:23:15.564027 ignition[1185]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4012254430" Nov 1 00:23:15.582415 ignition[1185]: INFO : op(6): [started] unmounting "/mnt/oem4012254430" Nov 1 00:23:15.582415 ignition[1185]: INFO : op(6): [finished] unmounting "/mnt/oem4012254430" Nov 1 00:23:15.590229 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Nov 1 00:23:15.583999 systemd[1]: mnt-oem4012254430.mount: Deactivated successfully. 
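The op(4)/op(5) pairs above show Ignition's mount fallback: an ext4 mount of the OEM partition fails with "device or resource busy", it retries as btrfs, succeeds, writes the file, then unmounts. A sketch of that try-in-order pattern with stand-in mount functions — the candidate list and helpers are illustrative, not Ignition's real API:

```python
# Try mounting a device with each candidate filesystem in order and
# return the first type that works, mirroring the
# "failed ... trying btrfs" sequence in the log.
def mount_first(device, candidates, mount):
    errors = []
    for fstype in candidates:
        try:
            mount(device, fstype)
            return fstype
        except OSError as err:
            errors.append((fstype, err))
    raise OSError(f"all mounts failed for {device}: {errors}")

# Stand-in mounter: ext4 is "busy", btrfs succeeds (as in the log).
def fake_mount(device, fstype):
    if fstype == "ext4":
        raise OSError("device or resource busy")

result = mount_first("/dev/disk/by-label/OEM", ["ext4", "btrfs"], fake_mount)
print(result)  # btrfs
```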
Nov 1 00:23:15.599436 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Nov 1 00:23:15.604406 ignition[1185]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:23:15.617827 ignition[1185]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3742724046" Nov 1 00:23:15.621894 ignition[1185]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3742724046": device or resource busy Nov 1 00:23:15.621894 ignition[1185]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3742724046", trying btrfs: device or resource busy Nov 1 00:23:15.621894 ignition[1185]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3742724046" Nov 1 00:23:15.641661 ignition[1185]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3742724046" Nov 1 00:23:15.651870 ignition[1185]: INFO : op(9): [started] unmounting "/mnt/oem3742724046" Nov 1 00:23:15.651870 ignition[1185]: INFO : op(9): [finished] unmounting "/mnt/oem3742724046" Nov 1 00:23:15.651870 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Nov 1 00:23:15.651870 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 1 00:23:15.651870 ignition[1185]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Nov 1 00:23:15.647944 systemd[1]: mnt-oem3742724046.mount: Deactivated successfully. 
Nov 1 00:23:16.019674 ignition[1185]: INFO : GET result: OK Nov 1 00:23:16.575862 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 1 00:23:16.580919 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Nov 1 00:23:16.580919 ignition[1185]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:23:16.597478 ignition[1185]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1181333687" Nov 1 00:23:16.600839 ignition[1185]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1181333687": device or resource busy Nov 1 00:23:16.600839 ignition[1185]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1181333687", trying btrfs: device or resource busy Nov 1 00:23:16.600839 ignition[1185]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1181333687" Nov 1 00:23:16.615354 ignition[1185]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1181333687" Nov 1 00:23:16.615354 ignition[1185]: INFO : op(c): [started] unmounting "/mnt/oem1181333687" Nov 1 00:23:16.615354 ignition[1185]: INFO : op(c): [finished] unmounting "/mnt/oem1181333687" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(11): op(12): [started] writing unit 
"amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(13): [started] processing unit "nvidia.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(13): [finished] processing unit "nvidia.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(16): [started] setting preset to enabled for "nvidia.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(16): [finished] setting preset to enabled for "nvidia.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(17): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:23:16.615354 ignition[1185]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 00:23:16.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:16.733169 ignition[1185]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 00:23:16.733169 ignition[1185]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service" Nov 1 00:23:16.733169 ignition[1185]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service" Nov 1 00:23:16.733169 ignition[1185]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:23:16.733169 ignition[1185]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:23:16.733169 ignition[1185]: INFO : files: files passed Nov 1 00:23:16.733169 ignition[1185]: INFO : Ignition finished successfully Nov 1 00:23:16.629000 systemd[1]: mnt-oem1181333687.mount: Deactivated successfully. Nov 1 00:23:16.664920 systemd[1]: Finished ignition-files.service. Nov 1 00:23:16.694566 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 00:23:16.763225 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 00:23:16.764754 systemd[1]: Starting ignition-quench.service... Nov 1 00:23:16.778434 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:23:16.778742 systemd[1]: Finished ignition-quench.service. Nov 1 00:23:16.794394 kernel: kauditd_printk_skb: 3 callbacks suppressed Nov 1 00:23:16.794455 kernel: audit: type=1130 audit(1761956596.785:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:16.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:16.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:16.805046 kernel: audit: type=1131 audit(1761956596.788:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:16.808186 initrd-setup-root-after-ignition[1210]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:23:16.813014 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 00:23:16.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:16.817779 systemd[1]: Reached target ignition-complete.target. Nov 1 00:23:16.826378 kernel: audit: type=1130 audit(1761956596.816:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:16.831421 systemd[1]: Starting initrd-parse-etc.service... Nov 1 00:23:16.859248 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:23:16.862021 systemd[1]: Finished initrd-parse-etc.service. Nov 1 00:23:16.888163 kernel: audit: type=1130 audit(1761956596.862:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:16.888201 kernel: audit: type=1131 audit(1761956596.862:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:16.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:16.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:16.864468 systemd[1]: Reached target initrd-fs.target. Nov 1 00:23:16.881636 systemd[1]: Reached target initrd.target. Nov 1 00:23:16.883512 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 00:23:16.885823 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 00:23:16.921838 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 00:23:16.923482 systemd[1]: Starting initrd-cleanup.service... Nov 1 00:23:16.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:16.942350 kernel: audit: type=1130 audit(1761956596.920:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:16.951777 systemd[1]: Stopped target nss-lookup.target. Nov 1 00:23:16.956168 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 00:23:16.960888 systemd[1]: Stopped target timers.target. Nov 1 00:23:16.964934 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:23:16.967730 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 00:23:16.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:16.972369 systemd[1]: Stopped target initrd.target. 
Nov 1 00:23:16.983711 kernel: audit: type=1131 audit(1761956596.970:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:16.983780 systemd[1]: Stopped target basic.target. Nov 1 00:23:16.987757 systemd[1]: Stopped target ignition-complete.target. Nov 1 00:23:16.992366 systemd[1]: Stopped target ignition-diskful.target. Nov 1 00:23:16.996996 systemd[1]: Stopped target initrd-root-device.target. Nov 1 00:23:17.001617 systemd[1]: Stopped target remote-fs.target. Nov 1 00:23:17.005654 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 00:23:17.009946 systemd[1]: Stopped target sysinit.target. Nov 1 00:23:17.014032 systemd[1]: Stopped target local-fs.target. Nov 1 00:23:17.018072 systemd[1]: Stopped target local-fs-pre.target. Nov 1 00:23:17.022421 systemd[1]: Stopped target swap.target. Nov 1 00:23:17.026230 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:23:17.028980 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 00:23:17.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.033416 systemd[1]: Stopped target cryptsetup.target. Nov 1 00:23:17.044705 kernel: audit: type=1131 audit(1761956597.032:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.044944 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:23:17.047652 systemd[1]: Stopped dracut-initqueue.service. Nov 1 00:23:17.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:17.052090 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:23:17.052367 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 00:23:17.074938 kernel: audit: type=1131 audit(1761956597.050:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.074978 kernel: audit: type=1131 audit(1761956597.063:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.065880 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:23:17.066109 systemd[1]: Stopped ignition-files.service. Nov 1 00:23:17.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.083201 systemd[1]: Stopping ignition-mount.service... Nov 1 00:23:17.095708 iscsid[1029]: iscsid shutting down. Nov 1 00:23:17.088814 systemd[1]: Stopping iscsid.service... Nov 1 00:23:17.094013 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:23:17.095506 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 00:23:17.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.111427 systemd[1]: Stopping sysroot-boot.service... 
Nov 1 00:23:17.127335 ignition[1223]: INFO : Ignition 2.14.0 Nov 1 00:23:17.127335 ignition[1223]: INFO : Stage: umount Nov 1 00:23:17.127335 ignition[1223]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:23:17.127335 ignition[1223]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:23:17.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.115281 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:23:17.115629 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 00:23:17.126855 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:23:17.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.127899 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 00:23:17.158480 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 00:23:17.158693 systemd[1]: Stopped iscsid.service. Nov 1 00:23:17.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.164224 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:23:17.164447 systemd[1]: Finished initrd-cleanup.service. Nov 1 00:23:17.174443 systemd[1]: Stopping iscsiuio.service... Nov 1 00:23:17.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:17.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.185858 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:23:17.189757 ignition[1223]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:23:17.193656 ignition[1223]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:23:17.195368 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 00:23:17.195589 systemd[1]: Stopped iscsiuio.service. Nov 1 00:23:17.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.204735 ignition[1223]: INFO : PUT result: OK Nov 1 00:23:17.215927 ignition[1223]: INFO : umount: umount passed Nov 1 00:23:17.218171 ignition[1223]: INFO : Ignition finished successfully Nov 1 00:23:17.221962 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:23:17.222216 systemd[1]: Stopped ignition-mount.service. Nov 1 00:23:17.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:17.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.227879 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:23:17.227969 systemd[1]: Stopped ignition-disks.service. Nov 1 00:23:17.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.230171 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:23:17.230255 systemd[1]: Stopped ignition-kargs.service. Nov 1 00:23:17.232449 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:23:17.232525 systemd[1]: Stopped ignition-fetch.service. Nov 1 00:23:17.234697 systemd[1]: Stopped target network.target. Nov 1 00:23:17.236766 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:23:17.236852 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 00:23:17.241074 systemd[1]: Stopped target paths.target. Nov 1 00:23:17.243037 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:23:17.247122 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 00:23:17.273124 systemd[1]: Stopped target slices.target. Nov 1 00:23:17.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.275132 systemd[1]: Stopped target sockets.target. Nov 1 00:23:17.277270 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:23:17.277386 systemd[1]: Closed iscsid.socket. Nov 1 00:23:17.279339 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:23:17.279416 systemd[1]: Closed iscsiuio.socket. 
Nov 1 00:23:17.281202 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:23:17.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.281313 systemd[1]: Stopped ignition-setup.service. Nov 1 00:23:17.285909 systemd[1]: Stopping systemd-networkd.service... Nov 1 00:23:17.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.293705 systemd[1]: Stopping systemd-resolved.service... Nov 1 00:23:17.296437 systemd-networkd[1021]: eth0: DHCPv6 lease lost Nov 1 00:23:17.341000 audit: BPF prog-id=9 op=UNLOAD Nov 1 00:23:17.303859 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:23:17.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.304060 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:23:17.306692 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:23:17.306760 systemd[1]: Closed systemd-networkd.socket. Nov 1 00:23:17.310072 systemd[1]: Stopping network-cleanup.service... Nov 1 00:23:17.313366 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Nov 1 00:23:17.313556 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 00:23:17.318235 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:23:17.318356 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:23:17.324096 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:23:17.324202 systemd[1]: Stopped systemd-modules-load.service. Nov 1 00:23:17.328918 systemd[1]: Stopping systemd-udevd.service... Nov 1 00:23:17.339481 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:23:17.343086 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:23:17.344752 systemd[1]: Stopped systemd-resolved.service. Nov 1 00:23:17.348734 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:23:17.348912 systemd[1]: Stopped sysroot-boot.service. Nov 1 00:23:17.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.389981 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:23:17.389000 audit: BPF prog-id=6 op=UNLOAD Nov 1 00:23:17.390270 systemd[1]: Stopped initrd-setup-root.service. Nov 1 00:23:17.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.399991 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:23:17.400366 systemd[1]: Stopped network-cleanup.service. Nov 1 00:23:17.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.408794 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 1 00:23:17.409234 systemd[1]: Stopped systemd-udevd.service. Nov 1 00:23:17.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.415745 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:23:17.415982 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 00:23:17.422799 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:23:17.422897 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 00:23:17.427474 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:23:17.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.429431 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 00:23:17.435944 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:23:17.436067 systemd[1]: Stopped dracut-cmdline.service. Nov 1 00:23:17.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.442622 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:23:17.442717 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 00:23:17.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.451018 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 00:23:17.460671 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:23:17.460781 systemd[1]: Stopped systemd-vconsole-setup.service. 
Nov 1 00:23:17.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.473681 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:23:17.476640 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 00:23:17.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:17.481590 systemd[1]: Reached target initrd-switch-root.target. Nov 1 00:23:17.487383 systemd[1]: Starting initrd-switch-root.service... Nov 1 00:23:17.501259 systemd[1]: Switching root. Nov 1 00:23:17.527626 systemd-journald[310]: Journal stopped Nov 1 00:23:36.770185 systemd-journald[310]: Received SIGTERM from PID 1 (systemd). Nov 1 00:23:36.770327 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 00:23:36.770376 kernel: SELinux: Class anon_inode not defined in policy. 
Nov 1 00:23:36.770423 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 00:23:36.770456 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:23:36.770490 kernel: SELinux: policy capability open_perms=1 Nov 1 00:23:36.770522 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:23:36.770553 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:23:36.770587 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:23:36.770615 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:23:36.770646 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:23:36.770675 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:23:36.770706 systemd[1]: Successfully loaded SELinux policy in 353.865ms. Nov 1 00:23:36.770759 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.978ms. Nov 1 00:23:36.770795 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:23:36.770825 systemd[1]: Detected virtualization amazon. Nov 1 00:23:36.770860 systemd[1]: Detected architecture arm64. Nov 1 00:23:36.770891 systemd[1]: Detected first boot. Nov 1 00:23:36.770921 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:23:36.770952 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Nov 1 00:23:36.770982 kernel: kauditd_printk_skb: 38 callbacks suppressed Nov 1 00:23:36.771018 kernel: audit: type=1400 audit(1761956602.516:84): avc: denied { associate } for pid=1257 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 00:23:36.771060 kernel: audit: type=1300 audit(1761956602.516:84): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1240 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:36.771095 kernel: audit: type=1327 audit(1761956602.516:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:23:36.771125 kernel: audit: type=1400 audit(1761956602.520:85): avc: denied { associate } for pid=1257 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 00:23:36.771157 kernel: audit: type=1300 audit(1761956602.520:85): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145979 a2=1ed a3=0 items=2 ppid=1240 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:36.771187 kernel: audit: type=1307 audit(1761956602.520:85): cwd="/" Nov 1 00:23:36.771221 kernel: audit: type=1302 audit(1761956602.520:85): item=0 
name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:23:36.771250 kernel: audit: type=1302 audit(1761956602.520:85): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:23:36.771285 kernel: audit: type=1327 audit(1761956602.520:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:23:36.771554 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:23:36.771600 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:23:36.771643 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:23:36.771678 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:23:36.771709 kernel: audit: type=1334 audit(1761956616.248:86): prog-id=12 op=LOAD Nov 1 00:23:36.771739 kernel: audit: type=1334 audit(1761956616.248:87): prog-id=3 op=UNLOAD Nov 1 00:23:36.771772 kernel: audit: type=1334 audit(1761956616.251:88): prog-id=13 op=LOAD Nov 1 00:23:36.771802 kernel: audit: type=1334 audit(1761956616.253:89): prog-id=14 op=LOAD Nov 1 00:23:36.771831 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Nov 1 00:23:36.771862 kernel: audit: type=1334 audit(1761956616.253:90): prog-id=4 op=UNLOAD Nov 1 00:23:36.771927 systemd[1]: Stopped initrd-switch-root.service. Nov 1 00:23:36.775030 kernel: audit: type=1334 audit(1761956616.253:91): prog-id=5 op=UNLOAD Nov 1 00:23:36.775067 kernel: audit: type=1131 audit(1761956616.256:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.775103 kernel: audit: type=1334 audit(1761956616.267:93): prog-id=12 op=UNLOAD Nov 1 00:23:36.775135 kernel: audit: type=1130 audit(1761956616.282:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.775166 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:23:36.775200 kernel: audit: type=1131 audit(1761956616.282:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.775231 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 00:23:36.775264 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 00:23:36.775296 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Nov 1 00:23:36.775405 systemd[1]: Created slice system-getty.slice. Nov 1 00:23:36.775441 systemd[1]: Created slice system-modprobe.slice. Nov 1 00:23:36.775472 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 00:23:36.775504 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 00:23:36.775535 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 00:23:36.775564 systemd[1]: Created slice user.slice. 
Nov 1 00:23:36.775593 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:23:36.775623 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 00:23:36.775654 systemd[1]: Set up automount boot.automount. Nov 1 00:23:36.775690 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 00:23:36.775720 systemd[1]: Stopped target initrd-switch-root.target. Nov 1 00:23:36.775756 systemd[1]: Stopped target initrd-fs.target. Nov 1 00:23:36.775786 systemd[1]: Stopped target initrd-root-fs.target. Nov 1 00:23:36.775826 systemd[1]: Reached target integritysetup.target. Nov 1 00:23:36.775859 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:23:36.775890 systemd[1]: Reached target remote-fs.target. Nov 1 00:23:36.775921 systemd[1]: Reached target slices.target. Nov 1 00:23:36.775953 systemd[1]: Reached target swap.target. Nov 1 00:23:36.775983 systemd[1]: Reached target torcx.target. Nov 1 00:23:36.776014 systemd[1]: Reached target veritysetup.target. Nov 1 00:23:36.776048 systemd[1]: Listening on systemd-coredump.socket. Nov 1 00:23:36.776081 systemd[1]: Listening on systemd-initctl.socket. Nov 1 00:23:36.776110 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:23:36.776139 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:23:36.776168 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:23:36.776199 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 00:23:36.776230 systemd[1]: Mounting dev-hugepages.mount... Nov 1 00:23:36.776263 systemd[1]: Mounting dev-mqueue.mount... Nov 1 00:23:36.776292 systemd[1]: Mounting media.mount... Nov 1 00:23:36.776347 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 00:23:36.776382 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 00:23:36.776411 systemd[1]: Mounting tmp.mount... Nov 1 00:23:36.776442 systemd[1]: Starting flatcar-tmpfiles.service... 
Nov 1 00:23:36.776471 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:23:36.776504 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:23:36.776536 systemd[1]: Starting modprobe@configfs.service... Nov 1 00:23:36.776589 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:23:36.776628 systemd[1]: Starting modprobe@drm.service... Nov 1 00:23:36.776661 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:23:36.776695 systemd[1]: Starting modprobe@fuse.service... Nov 1 00:23:36.776808 systemd[1]: Starting modprobe@loop.service... Nov 1 00:23:36.777287 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:23:36.778798 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:23:36.778835 systemd[1]: Stopped systemd-fsck-root.service. Nov 1 00:23:36.778914 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:23:36.778950 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:23:36.778983 systemd[1]: Stopped systemd-journald.service. Nov 1 00:23:36.779018 systemd[1]: Starting systemd-journald.service... Nov 1 00:23:36.779048 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:23:36.779077 systemd[1]: Starting systemd-network-generator.service... Nov 1 00:23:36.779106 systemd[1]: Starting systemd-remount-fs.service... Nov 1 00:23:36.779140 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:23:36.779172 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:23:36.779201 systemd[1]: Stopped verity-setup.service. Nov 1 00:23:36.779233 systemd[1]: Mounted dev-hugepages.mount. Nov 1 00:23:36.779263 systemd[1]: Mounted dev-mqueue.mount. Nov 1 00:23:36.779292 systemd[1]: Mounted media.mount. Nov 1 00:23:36.779346 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 00:23:36.779378 systemd[1]: Mounted sys-kernel-tracing.mount. 
Nov 1 00:23:36.779407 systemd[1]: Mounted tmp.mount. Nov 1 00:23:36.779437 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:23:36.779466 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:23:36.779495 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:23:36.779524 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:23:36.779553 systemd[1]: Finished modprobe@drm.service. Nov 1 00:23:36.779583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:23:36.779616 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:23:36.779646 systemd[1]: Finished systemd-network-generator.service. Nov 1 00:23:36.779678 systemd[1]: Reached target network-pre.target. Nov 1 00:23:36.779711 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:23:36.779740 systemd[1]: Finished modprobe@configfs.service. Nov 1 00:23:36.779771 systemd[1]: Finished systemd-remount-fs.service. Nov 1 00:23:36.779805 kernel: loop: module loaded Nov 1 00:23:36.779837 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 00:23:36.779867 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:23:36.779903 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 00:23:36.779944 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:23:36.779982 systemd-journald[1327]: Journal started Nov 1 00:23:36.780087 systemd-journald[1327]: Runtime Journal (/run/log/journal/ec2530a4887e05b386cdac81e78a659f) is 8.0M, max 75.4M, 67.4M free. 
Nov 1 00:23:20.552000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:23:21.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:23:21.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:23:21.444000 audit: BPF prog-id=10 op=LOAD Nov 1 00:23:21.444000 audit: BPF prog-id=10 op=UNLOAD Nov 1 00:23:21.444000 audit: BPF prog-id=11 op=LOAD Nov 1 00:23:21.444000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:23:22.516000 audit[1257]: AVC avc: denied { associate } for pid=1257 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 00:23:22.516000 audit[1257]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1240 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:22.516000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:23:22.520000 audit[1257]: AVC avc: denied { associate } for pid=1257 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 00:23:22.520000 audit[1257]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145979 a2=1ed a3=0 items=2 ppid=1240 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:22.520000 audit: CWD cwd="/" Nov 1 00:23:22.520000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:23:22.520000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:23:22.520000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:23:36.248000 audit: BPF prog-id=12 op=LOAD Nov 1 00:23:36.248000 audit: BPF prog-id=3 op=UNLOAD Nov 1 00:23:36.251000 audit: BPF prog-id=13 op=LOAD Nov 1 00:23:36.253000 audit: BPF prog-id=14 op=LOAD Nov 1 00:23:36.253000 audit: BPF prog-id=4 op=UNLOAD Nov 1 00:23:36.253000 audit: BPF prog-id=5 op=UNLOAD Nov 1 00:23:36.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.267000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:23:36.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:36.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.587000 audit: BPF prog-id=15 op=LOAD Nov 1 00:23:36.587000 audit: BPF prog-id=16 op=LOAD Nov 1 00:23:36.587000 audit: BPF prog-id=17 op=LOAD Nov 1 00:23:36.587000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:23:36.587000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:23:36.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:36.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:36.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.766000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 00:23:36.766000 audit[1327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffcbd2a460 a2=4000 a3=1 items=0 ppid=1 pid=1327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:36.766000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 00:23:22.400480 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:23:36.245246 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:23:22.401490 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:23:36.245268 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. 
Nov 1 00:23:22.401540 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:23:36.257615 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 00:23:22.401609 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Nov 1 00:23:22.401636 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=debug msg="skipped missing lower profile" missing profile=oem Nov 1 00:23:22.401704 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Nov 1 00:23:22.401735 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Nov 1 00:23:22.402184 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Nov 1 00:23:22.402275 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:23:22.402343 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:23:22.403884 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Nov 1 00:23:22.403968 /usr/lib/systemd/system-generators/torcx-generator[1257]: 
time="2025-11-01T00:23:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Nov 1 00:23:22.404017 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Nov 1 00:23:22.404059 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Nov 1 00:23:22.404107 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Nov 1 00:23:22.404145 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Nov 1 00:23:34.495501 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:34Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:23:34.496026 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:34Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:23:34.496258 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:34Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network 
/lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:23:34.496721 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:34Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:23:34.496829 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:34Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Nov 1 00:23:34.496966 /usr/lib/systemd/system-generators/torcx-generator[1257]: time="2025-11-01T00:23:34Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Nov 1 00:23:36.802351 systemd[1]: Starting systemd-random-seed.service... Nov 1 00:23:36.802489 systemd[1]: Started systemd-journald.service. Nov 1 00:23:36.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.807485 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:23:36.807926 systemd[1]: Finished modprobe@loop.service. Nov 1 00:23:36.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:36.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.810754 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 00:23:36.820753 systemd[1]: Starting systemd-journal-flush.service... Nov 1 00:23:36.823429 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:23:36.834321 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:23:36.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.840278 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:23:36.863437 kernel: fuse: init (API version 7.34) Nov 1 00:23:36.863940 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:23:36.864387 systemd[1]: Finished modprobe@fuse.service. Nov 1 00:23:36.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.873561 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 00:23:36.885898 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 00:23:36.914776 systemd-journald[1327]: Time spent on flushing to /var/log/journal/ec2530a4887e05b386cdac81e78a659f is 60.508ms for 1152 entries. 
Nov 1 00:23:36.914776 systemd-journald[1327]: System Journal (/var/log/journal/ec2530a4887e05b386cdac81e78a659f) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:23:37.034980 systemd-journald[1327]: Received client request to flush runtime journal. Nov 1 00:23:36.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:37.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:36.919577 systemd[1]: Finished systemd-random-seed.service. Nov 1 00:23:37.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:37.041736 udevadm[1348]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 1 00:23:36.922399 systemd[1]: Reached target first-boot-complete.target. Nov 1 00:23:36.933004 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:23:36.937524 systemd[1]: Starting systemd-udev-settle.service... Nov 1 00:23:36.968698 systemd[1]: Finished systemd-sysctl.service. 
Nov 1 00:23:37.018150 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 00:23:37.022738 systemd[1]: Starting systemd-sysusers.service... Nov 1 00:23:37.036657 systemd[1]: Finished systemd-journal-flush.service. Nov 1 00:23:37.818055 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 00:23:37.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:38.437440 systemd[1]: Finished systemd-sysusers.service. Nov 1 00:23:38.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:38.439000 audit: BPF prog-id=18 op=LOAD Nov 1 00:23:38.439000 audit: BPF prog-id=19 op=LOAD Nov 1 00:23:38.439000 audit: BPF prog-id=7 op=UNLOAD Nov 1 00:23:38.439000 audit: BPF prog-id=8 op=UNLOAD Nov 1 00:23:38.442211 systemd[1]: Starting systemd-udevd.service... Nov 1 00:23:38.483848 systemd-udevd[1376]: Using default interface naming scheme 'v252'. Nov 1 00:23:38.606717 systemd[1]: Started systemd-udevd.service. Nov 1 00:23:38.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:38.614000 audit: BPF prog-id=20 op=LOAD Nov 1 00:23:38.616878 systemd[1]: Starting systemd-networkd.service... Nov 1 00:23:38.625000 audit: BPF prog-id=21 op=LOAD Nov 1 00:23:38.625000 audit: BPF prog-id=22 op=LOAD Nov 1 00:23:38.625000 audit: BPF prog-id=23 op=LOAD Nov 1 00:23:38.628555 systemd[1]: Starting systemd-userdbd.service... Nov 1 00:23:38.700851 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
Nov 1 00:23:38.721682 systemd[1]: Started systemd-userdbd.service.
Nov 1 00:23:38.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:38.734376 (udev-worker)[1396]: Network interface NamePolicy= disabled on kernel command line.
Nov 1 00:23:38.874645 systemd-networkd[1386]: lo: Link UP
Nov 1 00:23:38.875106 systemd-networkd[1386]: lo: Gained carrier
Nov 1 00:23:38.876212 systemd-networkd[1386]: Enumeration completed
Nov 1 00:23:38.876633 systemd[1]: Started systemd-networkd.service.
Nov 1 00:23:38.877070 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:23:38.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:38.883334 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Nov 1 00:23:38.883241 systemd[1]: Starting systemd-networkd-wait-online.service...
Nov 1 00:23:38.885277 systemd-networkd[1386]: eth0: Link UP
Nov 1 00:23:38.885790 systemd-networkd[1386]: eth0: Gained carrier
Nov 1 00:23:38.898621 systemd-networkd[1386]: eth0: DHCPv4 address 172.31.20.188/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 1 00:23:39.491471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:23:39.498337 systemd[1]: Finished systemd-udev-settle.service.
Nov 1 00:23:39.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:39.506520 systemd[1]: Starting lvm2-activation-early.service...
Nov 1 00:23:39.780039 lvm[1495]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:23:39.817810 systemd[1]: Finished lvm2-activation-early.service.
Nov 1 00:23:39.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:39.820221 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:23:39.824728 systemd[1]: Starting lvm2-activation.service...
Nov 1 00:23:39.833884 lvm[1496]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:23:39.872194 systemd[1]: Finished lvm2-activation.service.
Nov 1 00:23:39.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:39.874752 systemd[1]: Reached target local-fs-pre.target.
Nov 1 00:23:39.876749 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 00:23:39.876799 systemd[1]: Reached target local-fs.target.
Nov 1 00:23:39.879597 systemd[1]: Reached target machines.target.
Nov 1 00:23:39.883928 systemd[1]: Starting ldconfig.service...
Nov 1 00:23:39.887809 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:23:39.888022 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:23:39.890625 systemd[1]: Starting systemd-boot-update.service...
Nov 1 00:23:39.894944 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Nov 1 00:23:39.900111 systemd[1]: Starting systemd-machine-id-commit.service...
Nov 1 00:23:39.905818 systemd[1]: Starting systemd-sysext.service...
Nov 1 00:23:39.932149 systemd[1]: Unmounting usr-share-oem.mount...
Nov 1 00:23:39.944023 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Nov 1 00:23:39.944412 systemd[1]: Unmounted usr-share-oem.mount.
Nov 1 00:23:39.952797 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1498 (bootctl)
Nov 1 00:23:39.955766 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Nov 1 00:23:39.974336 kernel: loop0: detected capacity change from 0 to 211168
Nov 1 00:23:40.083556 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Nov 1 00:23:40.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.107400 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 00:23:40.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.110045 systemd[1]: Finished systemd-machine-id-commit.service.
Nov 1 00:23:40.120445 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 00:23:40.141361 kernel: loop1: detected capacity change from 0 to 211168
Nov 1 00:23:40.156568 (sd-sysext)[1511]: Using extensions 'kubernetes'.
Nov 1 00:23:40.157369 (sd-sysext)[1511]: Merged extensions into '/usr'.
Nov 1 00:23:40.193524 systemd[1]: Mounting usr-share-oem.mount...
Nov 1 00:23:40.197464 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:23:40.201593 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:23:40.205914 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:23:40.209906 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:23:40.210186 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:23:40.210863 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:23:40.218667 systemd[1]: Mounted usr-share-oem.mount.
Nov 1 00:23:40.219894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:23:40.220504 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:23:40.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.222101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:23:40.222607 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:23:40.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.225959 systemd[1]: Finished systemd-sysext.service.
Nov 1 00:23:40.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.230616 systemd[1]: Starting ensure-sysext.service...
Nov 1 00:23:40.230840 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:23:40.234130 systemd[1]: Starting systemd-tmpfiles-setup.service...
Nov 1 00:23:40.246283 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:23:40.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.246975 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:23:40.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.250897 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:23:40.253479 systemd[1]: Reloading.
Nov 1 00:23:40.325916 systemd-fsck[1508]: fsck.fat 4.2 (2021-01-31)
Nov 1 00:23:40.325916 systemd-fsck[1508]: /dev/nvme0n1p1: 236 files, 117310/258078 clusters
Nov 1 00:23:40.398391 /usr/lib/systemd/system-generators/torcx-generator[1546]: time="2025-11-01T00:23:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Nov 1 00:23:40.398466 /usr/lib/systemd/system-generators/torcx-generator[1546]: time="2025-11-01T00:23:40Z" level=info msg="torcx already run"
Nov 1 00:23:40.399745 systemd-networkd[1386]: eth0: Gained IPv6LL
Nov 1 00:23:40.432011 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Nov 1 00:23:40.577287 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:23:40.577351 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:23:40.616741 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:23:40.718770 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 00:23:40.763000 audit: BPF prog-id=24 op=LOAD
Nov 1 00:23:40.763000 audit: BPF prog-id=20 op=UNLOAD
Nov 1 00:23:40.766000 audit: BPF prog-id=25 op=LOAD
Nov 1 00:23:40.766000 audit: BPF prog-id=21 op=UNLOAD
Nov 1 00:23:40.766000 audit: BPF prog-id=26 op=LOAD
Nov 1 00:23:40.767000 audit: BPF prog-id=27 op=LOAD
Nov 1 00:23:40.767000 audit: BPF prog-id=22 op=UNLOAD
Nov 1 00:23:40.767000 audit: BPF prog-id=23 op=UNLOAD
Nov 1 00:23:40.768000 audit: BPF prog-id=28 op=LOAD
Nov 1 00:23:40.768000 audit: BPF prog-id=29 op=LOAD
Nov 1 00:23:40.768000 audit: BPF prog-id=18 op=UNLOAD
Nov 1 00:23:40.768000 audit: BPF prog-id=19 op=UNLOAD
Nov 1 00:23:40.776000 audit: BPF prog-id=30 op=LOAD
Nov 1 00:23:40.776000 audit: BPF prog-id=15 op=UNLOAD
Nov 1 00:23:40.776000 audit: BPF prog-id=31 op=LOAD
Nov 1 00:23:40.776000 audit: BPF prog-id=32 op=LOAD
Nov 1 00:23:40.776000 audit: BPF prog-id=16 op=UNLOAD
Nov 1 00:23:40.776000 audit: BPF prog-id=17 op=UNLOAD
Nov 1 00:23:40.796711 systemd[1]: Finished systemd-networkd-wait-online.service.
Nov 1 00:23:40.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.800174 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Nov 1 00:23:40.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.816116 systemd[1]: Mounting boot.mount...
Nov 1 00:23:40.828072 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:23:40.831102 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:23:40.837403 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:23:40.843957 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:23:40.846187 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:23:40.846569 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:23:40.851736 systemd[1]: Mounted boot.mount.
Nov 1 00:23:40.852256 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 00:23:40.856205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:23:40.856528 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:23:40.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.863201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:23:40.863564 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:23:40.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.867100 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:23:40.867444 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:23:40.870684 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:23:40.870881 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:23:40.877406 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:23:40.879968 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:23:40.884461 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:23:40.889981 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:23:40.892255 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:23:40.892600 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:23:40.895208 systemd[1]: Finished systemd-boot-update.service.
Nov 1 00:23:40.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.905026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:23:40.905356 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:23:40.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.908637 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:23:40.908897 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:23:40.912011 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:23:40.912265 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:23:40.915812 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:23:40.918449 systemd[1]: Starting modprobe@drm.service...
Nov 1 00:23:40.922211 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:23:40.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:40.922535 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:23:40.922760 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:23:40.922967 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:23:40.925290 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:23:40.925674 systemd[1]: Finished modprobe@drm.service.
Nov 1 00:23:40.932103 systemd[1]: Finished ensure-sysext.service.
Nov 1 00:23:40.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:41.455393 systemd[1]: Finished systemd-tmpfiles-setup.service.
Nov 1 00:23:41.461603 kernel: kauditd_printk_skb: 96 callbacks suppressed
Nov 1 00:23:41.461696 kernel: audit: type=1130 audit(1761956621.454:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:41.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:41.471034 systemd[1]: Starting audit-rules.service...
Nov 1 00:23:41.475405 systemd[1]: Starting clean-ca-certificates.service...
Nov 1 00:23:41.482370 systemd[1]: Starting systemd-journal-catalog-update.service...
Nov 1 00:23:41.492000 audit: BPF prog-id=33 op=LOAD
Nov 1 00:23:41.496773 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:23:41.499227 kernel: audit: type=1334 audit(1761956621.492:191): prog-id=33 op=LOAD
Nov 1 00:23:41.500000 audit: BPF prog-id=34 op=LOAD
Nov 1 00:23:41.507089 kernel: audit: type=1334 audit(1761956621.500:192): prog-id=34 op=LOAD
Nov 1 00:23:41.504671 systemd[1]: Starting systemd-timesyncd.service...
Nov 1 00:23:41.512349 systemd[1]: Starting systemd-update-utmp.service...
Nov 1 00:23:41.534000 audit[1619]: SYSTEM_BOOT pid=1619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:41.539378 systemd[1]: Finished systemd-update-utmp.service.
Nov 1 00:23:41.550081 kernel: audit: type=1127 audit(1761956621.534:193): pid=1619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:41.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:41.563485 kernel: audit: type=1130 audit(1761956621.543:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:41.570596 systemd[1]: Finished clean-ca-certificates.service.
Nov 1 00:23:41.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:41.575023 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:23:41.583349 kernel: audit: type=1130 audit(1761956621.573:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:41.606269 systemd[1]: Finished systemd-journal-catalog-update.service.
Nov 1 00:23:41.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:41.622414 kernel: audit: type=1130 audit(1761956621.610:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:41.664820 systemd-resolved[1617]: Positive Trust Anchors:
Nov 1 00:23:41.665398 systemd-resolved[1617]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:23:41.665554 systemd-resolved[1617]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:23:41.686875 systemd[1]: Started systemd-timesyncd.service.
Nov 1 00:23:41.689535 systemd[1]: Reached target time-set.target.
Nov 1 00:23:41.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:41.700349 kernel: audit: type=1130 audit(1761956621.685:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:23:41.434109 systemd-timesyncd[1618]: Contacted time server 69.48.203.162:123 (0.flatcar.pool.ntp.org).
Nov 1 00:23:41.470139 systemd-journald[1327]: Time jumped backwards, rotating.
Nov 1 00:23:41.470264 kernel: audit: type=1305 audit(1761956621.447:198): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Nov 1 00:23:41.470339 kernel: audit: type=1300 audit(1761956621.447:198): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffec013770 a2=420 a3=0 items=0 ppid=1613 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:23:41.447000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Nov 1 00:23:41.447000 audit[1634]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffec013770 a2=420 a3=0 items=0 ppid=1613 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:23:41.447000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Nov 1 00:23:41.470621 augenrules[1634]: No rules
Nov 1 00:23:41.437645 systemd-timesyncd[1618]: Initial clock synchronization to Sat 2025-11-01 00:23:41.433792 UTC.
Nov 1 00:23:41.456211 systemd[1]: Finished audit-rules.service.
Nov 1 00:23:41.508954 systemd-resolved[1617]: Defaulting to hostname 'linux'.
Nov 1 00:23:41.512072 systemd[1]: Started systemd-resolved.service.
Nov 1 00:23:41.514674 systemd[1]: Reached target network.target.
Nov 1 00:23:41.516583 systemd[1]: Reached target network-online.target.
Nov 1 00:23:41.518544 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:23:45.656602 ldconfig[1497]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 00:23:45.665466 systemd[1]: Finished ldconfig.service.
Nov 1 00:23:45.670176 systemd[1]: Starting systemd-update-done.service...
Nov 1 00:23:45.686093 systemd[1]: Finished systemd-update-done.service.
Nov 1 00:23:45.688565 systemd[1]: Reached target sysinit.target.
Nov 1 00:23:45.690885 systemd[1]: Started motdgen.path.
Nov 1 00:23:45.692662 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Nov 1 00:23:45.695555 systemd[1]: Started logrotate.timer.
Nov 1 00:23:45.697996 systemd[1]: Started mdadm.timer.
Nov 1 00:23:45.699781 systemd[1]: Started systemd-tmpfiles-clean.timer.
Nov 1 00:23:45.702053 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 00:23:45.702104 systemd[1]: Reached target paths.target.
Nov 1 00:23:45.704082 systemd[1]: Reached target timers.target.
Nov 1 00:23:45.706732 systemd[1]: Listening on dbus.socket.
Nov 1 00:23:45.710693 systemd[1]: Starting docker.socket...
Nov 1 00:23:45.718437 systemd[1]: Listening on sshd.socket.
Nov 1 00:23:45.720658 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:23:45.721665 systemd[1]: Listening on docker.socket.
Nov 1 00:23:45.723702 systemd[1]: Reached target sockets.target.
Nov 1 00:23:45.726148 systemd[1]: Reached target basic.target.
Nov 1 00:23:45.728218 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Nov 1 00:23:45.728415 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Nov 1 00:23:45.732583 systemd[1]: Started amazon-ssm-agent.service.
Nov 1 00:23:45.738518 systemd[1]: Starting containerd.service...
Nov 1 00:23:45.744524 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Nov 1 00:23:45.749973 systemd[1]: Starting dbus.service...
Nov 1 00:23:45.754723 systemd[1]: Starting enable-oem-cloudinit.service...
Nov 1 00:23:45.760059 systemd[1]: Starting extend-filesystems.service...
Nov 1 00:23:45.762255 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Nov 1 00:23:45.765170 systemd[1]: Starting kubelet.service...
Nov 1 00:23:45.769640 systemd[1]: Starting motdgen.service...
Nov 1 00:23:45.774082 systemd[1]: Started nvidia.service.
Nov 1 00:23:45.778641 systemd[1]: Starting prepare-helm.service...
Nov 1 00:23:45.782813 systemd[1]: Starting ssh-key-proc-cmdline.service...
Nov 1 00:23:45.802261 jq[1647]: false
Nov 1 00:23:45.787753 systemd[1]: Starting sshd-keygen.service...
Nov 1 00:23:45.797743 systemd[1]: Starting systemd-logind.service...
Nov 1 00:23:45.799934 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:23:45.800127 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 00:23:45.806742 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 00:23:45.808382 systemd[1]: Starting update-engine.service...
Nov 1 00:23:45.812777 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Nov 1 00:23:45.822660 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 00:23:45.823092 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Nov 1 00:23:45.836414 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 00:23:45.836843 systemd[1]: Finished ssh-key-proc-cmdline.service.
Nov 1 00:23:45.845700 jq[1659]: true
Nov 1 00:23:45.889807 extend-filesystems[1648]: Found loop1
Nov 1 00:23:45.892566 jq[1665]: true
Nov 1 00:23:45.893153 extend-filesystems[1648]: Found nvme0n1
Nov 1 00:23:45.902317 extend-filesystems[1648]: Found nvme0n1p1
Nov 1 00:23:45.908921 extend-filesystems[1648]: Found nvme0n1p2
Nov 1 00:23:45.917482 extend-filesystems[1648]: Found nvme0n1p3
Nov 1 00:23:45.919740 extend-filesystems[1648]: Found usr
Nov 1 00:23:45.929420 extend-filesystems[1648]: Found nvme0n1p4
Nov 1 00:23:45.932989 extend-filesystems[1648]: Found nvme0n1p6
Nov 1 00:23:45.936146 extend-filesystems[1648]: Found nvme0n1p7
Nov 1 00:23:45.941250 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 00:23:45.941680 systemd[1]: Finished motdgen.service.
Nov 1 00:23:45.946425 tar[1662]: linux-arm64/LICENSE
Nov 1 00:23:45.947076 tar[1662]: linux-arm64/helm
Nov 1 00:23:45.950010 extend-filesystems[1648]: Found nvme0n1p9
Nov 1 00:23:45.952961 extend-filesystems[1648]: Checking size of /dev/nvme0n1p9
Nov 1 00:23:46.036653 bash[1697]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 00:23:46.038044 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Nov 1 00:23:46.086103 extend-filesystems[1648]: Resized partition /dev/nvme0n1p9
Nov 1 00:23:46.156963 dbus-daemon[1646]: [system] SELinux support is enabled
Nov 1 00:23:46.157782 systemd[1]: Started dbus.service.
Nov 1 00:23:46.164576 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 00:23:46.164635 systemd[1]: Reached target system-config.target.
Nov 1 00:23:46.168782 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 00:23:46.168818 systemd[1]: Reached target user-config.target.
Nov 1 00:23:46.186922 extend-filesystems[1701]: resize2fs 1.46.5 (30-Dec-2021)
Nov 1 00:23:46.201613 dbus-daemon[1646]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1386 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 1 00:23:46.205320 systemd-logind[1655]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 1 00:23:46.205376 systemd-logind[1655]: Watching system buttons on /dev/input/event1 (Sleep Button)
Nov 1 00:23:46.208580 systemd-logind[1655]: New seat seat0.
Nov 1 00:23:46.248438 systemd[1]: Started systemd-logind.service.
Nov 1 00:23:46.259672 systemd[1]: Starting systemd-hostnamed.service...
Nov 1 00:23:46.309318 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Nov 1 00:23:46.309428 env[1666]: time="2025-11-01T00:23:46.306844915Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Nov 1 00:23:46.371629 env[1666]: time="2025-11-01T00:23:46.371385068Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 00:23:46.372003 env[1666]: time="2025-11-01T00:23:46.371966948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:23:46.375398 env[1666]: time="2025-11-01T00:23:46.374544848Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:23:46.375398 env[1666]: time="2025-11-01T00:23:46.374612816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:23:46.375398 env[1666]: time="2025-11-01T00:23:46.374971064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:23:46.375398 env[1666]: time="2025-11-01T00:23:46.375006608Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 00:23:46.375398 env[1666]: time="2025-11-01T00:23:46.375037016Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Nov 1 00:23:46.375398 env[1666]: time="2025-11-01T00:23:46.375061436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 00:23:46.566016 env[1666]: time="2025-11-01T00:23:46.410272112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:23:46.566016 env[1666]: time="2025-11-01T00:23:46.412635704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:23:46.566016 env[1666]: time="2025-11-01T00:23:46.413811572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:23:46.566016 env[1666]: time="2025-11-01T00:23:46.413849600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 00:23:46.566016 env[1666]: time="2025-11-01T00:23:46.425398052Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Nov 1 00:23:46.566016 env[1666]: time="2025-11-01T00:23:46.425448068Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 00:23:46.566452 update_engine[1658]: I1101 00:23:46.559736 1658 main.cc:92] Flatcar Update Engine starting
Nov 1 00:23:46.594600 env[1666]: time="2025-11-01T00:23:46.594520737Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 00:23:46.594752 env[1666]: time="2025-11-01T00:23:46.594601353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 00:23:46.594752 env[1666]: time="2025-11-01T00:23:46.594636933Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 00:23:46.594752 env[1666]: time="2025-11-01T00:23:46.594709377Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 00:23:46.594752 env[1666]: time="2025-11-01T00:23:46.594744969Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 00:23:46.594970 env[1666]: time="2025-11-01T00:23:46.594777441Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 00:23:46.594970 env[1666]: time="2025-11-01T00:23:46.594812757Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 1 00:23:46.595497 env[1666]: time="2025-11-01T00:23:46.595365357Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..."
type=io.containerd.service.v1 Nov 1 00:23:46.595497 env[1666]: time="2025-11-01T00:23:46.595446561Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 00:23:46.595497 env[1666]: time="2025-11-01T00:23:46.595488789Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:23:46.595729 env[1666]: time="2025-11-01T00:23:46.595520349Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:23:46.595729 env[1666]: time="2025-11-01T00:23:46.595552857Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:23:46.595836 env[1666]: time="2025-11-01T00:23:46.595786449Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:23:46.596012 env[1666]: time="2025-11-01T00:23:46.595958721Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:23:46.596482 env[1666]: time="2025-11-01T00:23:46.596405145Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:23:46.596578 env[1666]: time="2025-11-01T00:23:46.596477649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:23:46.596578 env[1666]: time="2025-11-01T00:23:46.596513853Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:23:46.596908 env[1666]: time="2025-11-01T00:23:46.596737845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:23:46.596908 env[1666]: time="2025-11-01T00:23:46.596786085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Nov 1 00:23:46.596908 env[1666]: time="2025-11-01T00:23:46.596819361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:23:46.596908 env[1666]: time="2025-11-01T00:23:46.596852637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:23:46.596908 env[1666]: time="2025-11-01T00:23:46.596882637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:23:46.597196 env[1666]: time="2025-11-01T00:23:46.596915109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:23:46.597196 env[1666]: time="2025-11-01T00:23:46.596957349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:23:46.597196 env[1666]: time="2025-11-01T00:23:46.596987217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:23:46.597196 env[1666]: time="2025-11-01T00:23:46.597020457Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:23:46.598412 env[1666]: time="2025-11-01T00:23:46.597287925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:23:46.598412 env[1666]: time="2025-11-01T00:23:46.598400841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:23:46.598671 env[1666]: time="2025-11-01T00:23:46.598434177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:23:46.598671 env[1666]: time="2025-11-01T00:23:46.598463373Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Nov 1 00:23:46.598671 env[1666]: time="2025-11-01T00:23:46.598550589Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:23:46.598671 env[1666]: time="2025-11-01T00:23:46.598579209Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:23:46.598858 env[1666]: time="2025-11-01T00:23:46.598616445Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:23:46.598858 env[1666]: time="2025-11-01T00:23:46.598739145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:23:46.600829 env[1666]: time="2025-11-01T00:23:46.600421893Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:23:46.603071 env[1666]: time="2025-11-01T00:23:46.600762021Z" level=info msg="Connect containerd service" Nov 1 00:23:46.603071 env[1666]: time="2025-11-01T00:23:46.601226913Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:23:46.610948 env[1666]: time="2025-11-01T00:23:46.610860717Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:23:46.612375 env[1666]: time="2025-11-01T00:23:46.611458929Z" level=info msg="Start subscribing containerd event" Nov 1 00:23:46.612375 env[1666]: time="2025-11-01T00:23:46.611546481Z" level=info msg="Start recovering state" Nov 1 00:23:46.612375 env[1666]: time="2025-11-01T00:23:46.611665029Z" level=info msg="Start event monitor" Nov 1 00:23:46.612375 env[1666]: time="2025-11-01T00:23:46.611703057Z" level=info msg="Start snapshots syncer" Nov 1 
00:23:46.612375 env[1666]: time="2025-11-01T00:23:46.611725905Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:23:46.612375 env[1666]: time="2025-11-01T00:23:46.611744529Z" level=info msg="Start streaming server" Nov 1 00:23:46.613740 env[1666]: time="2025-11-01T00:23:46.613657773Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:23:46.621364 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 1 00:23:46.622144 env[1666]: time="2025-11-01T00:23:46.617288157Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:23:46.622404 systemd[1]: Started containerd.service. Nov 1 00:23:46.628487 env[1666]: time="2025-11-01T00:23:46.624738825Z" level=info msg="containerd successfully booted in 0.325905s" Nov 1 00:23:46.640412 extend-filesystems[1701]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 1 00:23:46.640412 extend-filesystems[1701]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 1 00:23:46.640412 extend-filesystems[1701]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 1 00:23:46.663214 extend-filesystems[1648]: Resized filesystem in /dev/nvme0n1p9 Nov 1 00:23:46.641864 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:23:46.642238 systemd[1]: Finished extend-filesystems.service. Nov 1 00:23:46.670286 systemd[1]: Started update-engine.service. Nov 1 00:23:46.674150 update_engine[1658]: I1101 00:23:46.670622 1658 update_check_scheduler.cc:74] Next update check in 8m25s Nov 1 00:23:46.677823 systemd[1]: Started locksmithd.service. Nov 1 00:23:46.723711 amazon-ssm-agent[1643]: 2025/11/01 00:23:46 Failed to load instance info from vault. RegistrationKey does not exist. 
Nov 1 00:23:46.743168 amazon-ssm-agent[1643]: Initializing new seelog logger Nov 1 00:23:46.749613 amazon-ssm-agent[1643]: New Seelog Logger Creation Complete Nov 1 00:23:46.749908 amazon-ssm-agent[1643]: 2025/11/01 00:23:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 1 00:23:46.750029 amazon-ssm-agent[1643]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 1 00:23:46.750612 amazon-ssm-agent[1643]: 2025/11/01 00:23:46 processing appconfig overrides Nov 1 00:23:46.792655 systemd[1]: nvidia.service: Deactivated successfully. Nov 1 00:23:46.843765 dbus-daemon[1646]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 1 00:23:46.844039 systemd[1]: Started systemd-hostnamed.service. Nov 1 00:23:46.849272 dbus-daemon[1646]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1725 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 1 00:23:46.854429 systemd[1]: Starting polkit.service... Nov 1 00:23:46.892392 polkitd[1802]: Started polkitd version 121 Nov 1 00:23:46.937701 polkitd[1802]: Loading rules from directory /etc/polkit-1/rules.d Nov 1 00:23:46.946325 polkitd[1802]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 1 00:23:47.032513 polkitd[1802]: Finished loading, compiling and executing 2 rules Nov 1 00:23:47.037222 dbus-daemon[1646]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 1 00:23:47.038134 systemd[1]: Started polkit.service. Nov 1 00:23:47.042249 polkitd[1802]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 1 00:23:47.082907 systemd-hostnamed[1725]: Hostname set to (transient) Nov 1 00:23:47.083082 systemd-resolved[1617]: System hostname changed to 'ip-172-31-20-188'. 
Nov 1 00:23:47.328402 coreos-metadata[1645]: Nov 01 00:23:47.328 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 1 00:23:47.331839 coreos-metadata[1645]: Nov 01 00:23:47.331 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Nov 1 00:23:47.338545 coreos-metadata[1645]: Nov 01 00:23:47.338 INFO Fetch successful Nov 1 00:23:47.338545 coreos-metadata[1645]: Nov 01 00:23:47.338 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 1 00:23:47.344209 coreos-metadata[1645]: Nov 01 00:23:47.344 INFO Fetch successful Nov 1 00:23:47.348326 unknown[1645]: wrote ssh authorized keys file for user: core Nov 1 00:23:47.374898 update-ssh-keys[1847]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:23:47.375676 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Nov 1 00:23:47.555671 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO Create new startup processor Nov 1 00:23:47.557536 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [LongRunningPluginsManager] registered plugins: {} Nov 1 00:23:47.561402 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO Initializing bookkeeping folders Nov 1 00:23:47.562385 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO removing the completed state files Nov 1 00:23:47.562531 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO Initializing bookkeeping folders for long running plugins Nov 1 00:23:47.562687 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Nov 1 00:23:47.562817 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO Initializing healthcheck folders for long running plugins Nov 1 00:23:47.562946 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO Initializing locations for inventory plugin Nov 1 00:23:47.563088 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO Initializing default location for custom inventory Nov 1 00:23:47.563218 amazon-ssm-agent[1643]: 
2025-11-01 00:23:47 INFO Initializing default location for file inventory Nov 1 00:23:47.563364 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO Initializing default location for role inventory Nov 1 00:23:47.563493 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO Init the cloudwatchlogs publisher Nov 1 00:23:47.563636 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [instanceID=i-05bbf5088495c407b] Successfully loaded platform independent plugin aws:runDocument Nov 1 00:23:47.563766 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [instanceID=i-05bbf5088495c407b] Successfully loaded platform independent plugin aws:configureDocker Nov 1 00:23:47.563896 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [instanceID=i-05bbf5088495c407b] Successfully loaded platform independent plugin aws:runDockerAction Nov 1 00:23:47.564343 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [instanceID=i-05bbf5088495c407b] Successfully loaded platform independent plugin aws:refreshAssociation Nov 1 00:23:47.564484 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [instanceID=i-05bbf5088495c407b] Successfully loaded platform independent plugin aws:configurePackage Nov 1 00:23:47.564615 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [instanceID=i-05bbf5088495c407b] Successfully loaded platform independent plugin aws:softwareInventory Nov 1 00:23:47.564809 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [instanceID=i-05bbf5088495c407b] Successfully loaded platform independent plugin aws:runPowerShellScript Nov 1 00:23:47.567049 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [instanceID=i-05bbf5088495c407b] Successfully loaded platform independent plugin aws:updateSsmAgent Nov 1 00:23:47.567275 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [instanceID=i-05bbf5088495c407b] Successfully loaded platform independent plugin aws:downloadContent Nov 1 00:23:47.567458 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [instanceID=i-05bbf5088495c407b] Successfully loaded platform dependent 
plugin aws:runShellScript Nov 1 00:23:47.567629 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Nov 1 00:23:47.567775 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO OS: linux, Arch: arm64 Nov 1 00:23:47.569376 amazon-ssm-agent[1643]: datastore file /var/lib/amazon/ssm/i-05bbf5088495c407b/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Nov 1 00:23:47.576579 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessagingDeliveryService] Starting document processing engine... Nov 1 00:23:47.673396 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessagingDeliveryService] [EngineProcessor] Starting Nov 1 00:23:47.768958 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Nov 1 00:23:47.863513 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessagingDeliveryService] Starting message polling Nov 1 00:23:47.958329 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessagingDeliveryService] Starting send replies to MDS Nov 1 00:23:48.053226 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [instanceID=i-05bbf5088495c407b] Starting association polling Nov 1 00:23:48.121185 tar[1662]: linux-arm64/README.md Nov 1 00:23:48.132415 systemd[1]: Finished prepare-helm.service. 
Nov 1 00:23:48.148452 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Nov 1 00:23:48.243848 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessagingDeliveryService] [Association] Launching response handler Nov 1 00:23:48.339369 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Nov 1 00:23:48.435096 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Nov 1 00:23:48.531101 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Nov 1 00:23:48.627142 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessageGatewayService] Starting session document processing engine... Nov 1 00:23:48.723480 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessageGatewayService] [EngineProcessor] Starting Nov 1 00:23:48.819984 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Nov 1 00:23:48.916714 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-05bbf5088495c407b, requestId: bc15ed84-9411-4e29-993f-51bb77937e91 Nov 1 00:23:49.013618 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [OfflineService] Starting document processing engine... Nov 1 00:23:49.110772 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [OfflineService] [EngineProcessor] Starting Nov 1 00:23:49.168968 systemd[1]: Started kubelet.service. 
Nov 1 00:23:49.208920 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [OfflineService] [EngineProcessor] Initial processing Nov 1 00:23:49.256335 locksmithd[1790]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:23:49.306429 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [OfflineService] Starting message polling Nov 1 00:23:49.404177 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [OfflineService] Starting send replies to MDS Nov 1 00:23:49.484822 systemd[1]: Created slice system-sshd.slice. Nov 1 00:23:49.502024 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [LongRunningPluginsManager] starting long running plugin manager Nov 1 00:23:49.600105 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Nov 1 00:23:49.698520 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [HealthCheck] HealthCheck reporting agent health. Nov 1 00:23:49.796904 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessageGatewayService] listening reply. 
Nov 1 00:23:49.895545 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Nov 1 00:23:49.994499 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [StartupProcessor] Executing startup processor tasks Nov 1 00:23:50.093512 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Nov 1 00:23:50.132495 kubelet[1858]: E1101 00:23:50.132390 1858 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:23:50.136230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:23:50.136596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:23:50.137031 systemd[1]: kubelet.service: Consumed 1.596s CPU time. 
Nov 1 00:23:50.192799 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Nov 1 00:23:50.292309 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8 Nov 1 00:23:50.391928 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-05bbf5088495c407b?role=subscribe&stream=input Nov 1 00:23:50.491867 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-05bbf5088495c407b?role=subscribe&stream=input Nov 1 00:23:50.591861 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessageGatewayService] Starting receiving message from control channel Nov 1 00:23:50.692153 amazon-ssm-agent[1643]: 2025-11-01 00:23:47 INFO [MessageGatewayService] [EngineProcessor] Initial processing Nov 1 00:23:53.029142 sshd_keygen[1676]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:23:53.066909 systemd[1]: Finished sshd-keygen.service. Nov 1 00:23:53.072727 systemd[1]: Starting issuegen.service... Nov 1 00:23:53.077559 systemd[1]: Started sshd@0-172.31.20.188:22-147.75.109.163:58126.service. Nov 1 00:23:53.090820 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:23:53.091195 systemd[1]: Finished issuegen.service. Nov 1 00:23:53.096130 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:23:53.112919 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:23:53.118817 systemd[1]: Started getty@tty1.service. Nov 1 00:23:53.124944 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 00:23:53.127972 systemd[1]: Reached target getty.target. Nov 1 00:23:53.131046 systemd[1]: Reached target multi-user.target. 
Nov 1 00:23:53.136838 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 00:23:53.153476 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 00:23:53.153845 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:23:53.156855 systemd[1]: Startup finished in 1.218s (kernel) + 12.584s (initrd) + 33.406s (userspace) = 47.209s. Nov 1 00:23:54.710970 sshd[1873]: Accepted publickey for core from 147.75.109.163 port 58126 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:23:54.716886 sshd[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:54.737359 systemd[1]: Created slice user-500.slice. Nov 1 00:23:54.739854 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:23:54.749444 systemd-logind[1655]: New session 1 of user core. Nov 1 00:23:54.762822 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 00:23:54.766766 systemd[1]: Starting user@500.service... Nov 1 00:23:54.774523 (systemd)[1882]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:54.987152 systemd[1882]: Queued start job for default target default.target. Nov 1 00:23:54.988248 systemd[1882]: Reached target paths.target. Nov 1 00:23:54.988337 systemd[1882]: Reached target sockets.target. Nov 1 00:23:54.988374 systemd[1882]: Reached target timers.target. Nov 1 00:23:54.988404 systemd[1882]: Reached target basic.target. Nov 1 00:23:54.988501 systemd[1882]: Reached target default.target. Nov 1 00:23:54.988567 systemd[1882]: Startup finished in 201ms. Nov 1 00:23:54.989627 systemd[1]: Started user@500.service. Nov 1 00:23:54.991632 systemd[1]: Started session-1.scope. Nov 1 00:23:55.145628 systemd[1]: Started sshd@1-172.31.20.188:22-147.75.109.163:34936.service. 
Nov 1 00:23:55.311816 sshd[1891]: Accepted publickey for core from 147.75.109.163 port 34936 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:23:55.314331 sshd[1891]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:55.323397 systemd-logind[1655]: New session 2 of user core. Nov 1 00:23:55.324331 systemd[1]: Started session-2.scope. Nov 1 00:23:55.455748 sshd[1891]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:55.461173 systemd-logind[1655]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:23:55.461673 systemd[1]: sshd@1-172.31.20.188:22-147.75.109.163:34936.service: Deactivated successfully. Nov 1 00:23:55.462949 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:23:55.464388 systemd-logind[1655]: Removed session 2. Nov 1 00:23:55.484662 systemd[1]: Started sshd@2-172.31.20.188:22-147.75.109.163:34952.service. Nov 1 00:23:55.657745 sshd[1897]: Accepted publickey for core from 147.75.109.163 port 34952 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:23:55.660794 sshd[1897]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:55.669151 systemd[1]: Started session-3.scope. Nov 1 00:23:55.670093 systemd-logind[1655]: New session 3 of user core. Nov 1 00:23:55.793668 sshd[1897]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:55.798643 systemd[1]: sshd@2-172.31.20.188:22-147.75.109.163:34952.service: Deactivated successfully. Nov 1 00:23:55.799959 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:23:55.801146 systemd-logind[1655]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:23:55.803494 systemd-logind[1655]: Removed session 3. Nov 1 00:23:55.821049 systemd[1]: Started sshd@3-172.31.20.188:22-147.75.109.163:34954.service. 
Nov 1 00:23:55.990968 sshd[1903]: Accepted publickey for core from 147.75.109.163 port 34954 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:23:55.994065 sshd[1903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:56.002231 systemd-logind[1655]: New session 4 of user core. Nov 1 00:23:56.002994 systemd[1]: Started session-4.scope. Nov 1 00:23:56.135506 sshd[1903]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:56.140592 systemd[1]: sshd@3-172.31.20.188:22-147.75.109.163:34954.service: Deactivated successfully. Nov 1 00:23:56.141861 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:23:56.143537 systemd-logind[1655]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:23:56.145730 systemd-logind[1655]: Removed session 4. Nov 1 00:23:56.164742 systemd[1]: Started sshd@4-172.31.20.188:22-147.75.109.163:34966.service. Nov 1 00:23:56.330119 sshd[1909]: Accepted publickey for core from 147.75.109.163 port 34966 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:23:56.332785 sshd[1909]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:56.340394 systemd-logind[1655]: New session 5 of user core. Nov 1 00:23:56.341870 systemd[1]: Started session-5.scope. Nov 1 00:23:56.787911 sudo[1912]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:23:56.789202 sudo[1912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:23:56.874265 systemd[1]: Starting docker.service... 
Nov 1 00:23:57.005912 env[1922]: time="2025-11-01T00:23:57.005814485Z" level=info msg="Starting up" Nov 1 00:23:57.010769 env[1922]: time="2025-11-01T00:23:57.010700237Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:23:57.010769 env[1922]: time="2025-11-01T00:23:57.010747145Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:23:57.010954 env[1922]: time="2025-11-01T00:23:57.010793489Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:23:57.010954 env[1922]: time="2025-11-01T00:23:57.010818989Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:23:57.014917 env[1922]: time="2025-11-01T00:23:57.014834609Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:23:57.014917 env[1922]: time="2025-11-01T00:23:57.014887985Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:23:57.015103 env[1922]: time="2025-11-01T00:23:57.014935613Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:23:57.015103 env[1922]: time="2025-11-01T00:23:57.014958281Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:23:57.028827 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2901543092-merged.mount: Deactivated successfully. Nov 1 00:23:57.068954 env[1922]: time="2025-11-01T00:23:57.068883629Z" level=info msg="Loading containers: start." Nov 1 00:23:57.889340 kernel: Initializing XFRM netlink socket Nov 1 00:23:58.233988 env[1922]: time="2025-11-01T00:23:58.233608495Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Nov 1 00:23:58.237067 (udev-worker)[1933]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:23:58.419405 systemd-networkd[1386]: docker0: Link UP Nov 1 00:23:58.455003 env[1922]: time="2025-11-01T00:23:58.454929788Z" level=info msg="Loading containers: done." Nov 1 00:23:58.487383 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1671911435-merged.mount: Deactivated successfully. Nov 1 00:23:58.511726 env[1922]: time="2025-11-01T00:23:58.511669760Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:23:58.512281 env[1922]: time="2025-11-01T00:23:58.512247776Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:23:58.512624 env[1922]: time="2025-11-01T00:23:58.512593784Z" level=info msg="Daemon has completed initialization" Nov 1 00:23:58.570520 systemd[1]: Started docker.service. Nov 1 00:23:58.587317 env[1922]: time="2025-11-01T00:23:58.587050820Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:23:59.764047 env[1666]: time="2025-11-01T00:23:59.763991626Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 1 00:24:00.299324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:24:00.299680 systemd[1]: Stopped kubelet.service. Nov 1 00:24:00.299762 systemd[1]: kubelet.service: Consumed 1.596s CPU time. Nov 1 00:24:00.302706 systemd[1]: Starting kubelet.service... Nov 1 00:24:00.429796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount840921756.mount: Deactivated successfully. Nov 1 00:24:00.738257 systemd[1]: Started kubelet.service. 
Nov 1 00:24:00.853401 kubelet[2048]: E1101 00:24:00.853341 2048 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:24:00.861204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:24:00.861573 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:24:02.614728 env[1666]: time="2025-11-01T00:24:02.614656584Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:02.617324 env[1666]: time="2025-11-01T00:24:02.617244504Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:02.620841 env[1666]: time="2025-11-01T00:24:02.620785320Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:02.624229 env[1666]: time="2025-11-01T00:24:02.624160008Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:02.625969 env[1666]: time="2025-11-01T00:24:02.625919412Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Nov 1 00:24:02.629118 env[1666]: time="2025-11-01T00:24:02.629070468Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 1 00:24:04.929679 env[1666]: time="2025-11-01T00:24:04.929596708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:04.936069 amazon-ssm-agent[1643]: 2025-11-01 00:24:04 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Nov 1 00:24:04.941770 env[1666]: time="2025-11-01T00:24:04.941707084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:04.945874 env[1666]: time="2025-11-01T00:24:04.945806200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:04.949645 env[1666]: time="2025-11-01T00:24:04.949563352Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:04.951694 env[1666]: time="2025-11-01T00:24:04.951621280Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Nov 1 00:24:04.952718 env[1666]: time="2025-11-01T00:24:04.952666252Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 1 00:24:06.996843 env[1666]: time="2025-11-01T00:24:06.996760518Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:06.999783 
env[1666]: time="2025-11-01T00:24:06.999715602Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:07.003258 env[1666]: time="2025-11-01T00:24:07.003206342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:07.006776 env[1666]: time="2025-11-01T00:24:07.006689318Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:07.010355 env[1666]: time="2025-11-01T00:24:07.010264418Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Nov 1 00:24:07.011396 env[1666]: time="2025-11-01T00:24:07.011354126Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 1 00:24:08.383433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1539078248.mount: Deactivated successfully. 
Nov 1 00:24:09.331565 env[1666]: time="2025-11-01T00:24:09.331496226Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:09.334104 env[1666]: time="2025-11-01T00:24:09.334030182Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:09.336570 env[1666]: time="2025-11-01T00:24:09.336511914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:09.338856 env[1666]: time="2025-11-01T00:24:09.338795358Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:09.339923 env[1666]: time="2025-11-01T00:24:09.339848298Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Nov 1 00:24:09.341837 env[1666]: time="2025-11-01T00:24:09.341785902Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 1 00:24:09.964465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534748103.mount: Deactivated successfully. Nov 1 00:24:11.049322 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:24:11.049648 systemd[1]: Stopped kubelet.service. Nov 1 00:24:11.052250 systemd[1]: Starting kubelet.service... Nov 1 00:24:11.395420 systemd[1]: Started kubelet.service. 
Nov 1 00:24:11.477050 kubelet[2058]: E1101 00:24:11.476988 2058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:24:11.481867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:24:11.482200 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:24:11.617148 env[1666]: time="2025-11-01T00:24:11.617064477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:11.621751 env[1666]: time="2025-11-01T00:24:11.621683433Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:11.626032 env[1666]: time="2025-11-01T00:24:11.625967085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:11.630128 env[1666]: time="2025-11-01T00:24:11.630062301Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:11.631938 env[1666]: time="2025-11-01T00:24:11.631883433Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Nov 1 00:24:11.638001 env[1666]: time="2025-11-01T00:24:11.635078217Z" level=info msg="PullImage 
\"registry.k8s.io/pause:3.10\"" Nov 1 00:24:12.126237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2634541059.mount: Deactivated successfully. Nov 1 00:24:12.143351 env[1666]: time="2025-11-01T00:24:12.143263364Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:12.149517 env[1666]: time="2025-11-01T00:24:12.149463260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:12.154614 env[1666]: time="2025-11-01T00:24:12.154562648Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:12.159389 env[1666]: time="2025-11-01T00:24:12.159338612Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:12.160920 env[1666]: time="2025-11-01T00:24:12.160866200Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 1 00:24:12.162334 env[1666]: time="2025-11-01T00:24:12.162258992Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 1 00:24:12.752249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1199892032.mount: Deactivated successfully. 
Nov 1 00:24:16.145882 env[1666]: time="2025-11-01T00:24:16.145812440Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:16.148792 env[1666]: time="2025-11-01T00:24:16.148718186Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:16.153011 env[1666]: time="2025-11-01T00:24:16.152943264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:16.157037 env[1666]: time="2025-11-01T00:24:16.156970449Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:16.159175 env[1666]: time="2025-11-01T00:24:16.159094081Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Nov 1 00:24:17.116624 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 1 00:24:21.549324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 00:24:21.549676 systemd[1]: Stopped kubelet.service. Nov 1 00:24:21.552484 systemd[1]: Starting kubelet.service... Nov 1 00:24:22.014659 systemd[1]: Started kubelet.service. 
Nov 1 00:24:22.097601 kubelet[2090]: E1101 00:24:22.097524 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:24:22.100957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:24:22.101338 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:24:24.053963 systemd[1]: Stopped kubelet.service. Nov 1 00:24:24.058773 systemd[1]: Starting kubelet.service... Nov 1 00:24:24.135471 systemd[1]: Reloading. Nov 1 00:24:24.300939 /usr/lib/systemd/system-generators/torcx-generator[2128]: time="2025-11-01T00:24:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:24:24.301011 /usr/lib/systemd/system-generators/torcx-generator[2128]: time="2025-11-01T00:24:24Z" level=info msg="torcx already run" Nov 1 00:24:24.508412 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:24:24.508453 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:24:24.548605 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:24:24.774232 systemd[1]: Started kubelet.service. Nov 1 00:24:24.777523 systemd[1]: Stopping kubelet.service... 
Nov 1 00:24:24.778262 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:24:24.778697 systemd[1]: Stopped kubelet.service. Nov 1 00:24:24.782698 systemd[1]: Starting kubelet.service... Nov 1 00:24:25.101231 systemd[1]: Started kubelet.service. Nov 1 00:24:25.194377 kubelet[2185]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:24:25.194897 kubelet[2185]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:24:25.195002 kubelet[2185]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:24:25.195222 kubelet[2185]: I1101 00:24:25.195176 2185 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:24:26.387551 kubelet[2185]: I1101 00:24:26.387486 2185 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 00:24:26.387551 kubelet[2185]: I1101 00:24:26.387540 2185 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:24:26.388417 kubelet[2185]: I1101 00:24:26.388371 2185 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:24:26.477642 kubelet[2185]: E1101 00:24:26.477592 2185 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.188:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:24:26.480994 kubelet[2185]: I1101 00:24:26.480926 2185 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:24:26.498363 kubelet[2185]: E1101 00:24:26.498277 2185 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:24:26.498580 kubelet[2185]: I1101 00:24:26.498554 2185 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:24:26.505080 kubelet[2185]: I1101 00:24:26.505037 2185 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:24:26.506096 kubelet[2185]: I1101 00:24:26.506045 2185 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:24:26.506496 kubelet[2185]: I1101 00:24:26.506210 2185 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-20-188","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:24:26.506893 kubelet[2185]: I1101 00:24:26.506867 2185 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:24:26.507019 kubelet[2185]: I1101 00:24:26.506995 2185 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 00:24:26.507498 kubelet[2185]: I1101 00:24:26.507474 2185 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:24:26.515509 kubelet[2185]: I1101 00:24:26.515472 2185 kubelet.go:480] "Attempting to sync node with API 
server" Nov 1 00:24:26.515718 kubelet[2185]: I1101 00:24:26.515695 2185 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:24:26.515867 kubelet[2185]: I1101 00:24:26.515846 2185 kubelet.go:386] "Adding apiserver pod source" Nov 1 00:24:26.524765 kubelet[2185]: E1101 00:24:26.524705 2185 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-188&limit=500&resourceVersion=0\": dial tcp 172.31.20.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:24:26.526352 kubelet[2185]: I1101 00:24:26.526311 2185 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:24:26.528724 kubelet[2185]: E1101 00:24:26.528660 2185 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:24:26.529099 kubelet[2185]: I1101 00:24:26.529071 2185 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:24:26.530503 kubelet[2185]: I1101 00:24:26.530464 2185 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:24:26.530885 kubelet[2185]: W1101 00:24:26.530862 2185 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 1 00:24:26.535807 kubelet[2185]: I1101 00:24:26.535774 2185 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:24:26.536013 kubelet[2185]: I1101 00:24:26.535992 2185 server.go:1289] "Started kubelet" Nov 1 00:24:26.555255 kubelet[2185]: I1101 00:24:26.555165 2185 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:24:26.558764 kubelet[2185]: I1101 00:24:26.558724 2185 server.go:317] "Adding debug handlers to kubelet server" Nov 1 00:24:26.565186 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Nov 1 00:24:26.566634 kubelet[2185]: I1101 00:24:26.566566 2185 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:24:26.574517 kubelet[2185]: I1101 00:24:26.574472 2185 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:24:26.575404 kubelet[2185]: I1101 00:24:26.575312 2185 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:24:26.575671 kubelet[2185]: E1101 00:24:26.575639 2185 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-188\" not found" Nov 1 00:24:26.575856 kubelet[2185]: I1101 00:24:26.575811 2185 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:24:26.577241 kubelet[2185]: I1101 00:24:26.577206 2185 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:24:26.577632 kubelet[2185]: I1101 00:24:26.577607 2185 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:24:26.577834 kubelet[2185]: I1101 00:24:26.577815 2185 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:24:26.579121 kubelet[2185]: E1101 00:24:26.579067 2185 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://172.31.20.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:24:26.579524 kubelet[2185]: E1101 00:24:26.579483 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-188?timeout=10s\": dial tcp 172.31.20.188:6443: connect: connection refused" interval="200ms" Nov 1 00:24:26.580445 kubelet[2185]: I1101 00:24:26.580409 2185 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:24:26.580688 kubelet[2185]: E1101 00:24:26.576857 2185 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.188:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.188:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-188.1873ba40c281ad3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-188,UID:ip-172-31-20-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-188,},FirstTimestamp:2025-11-01 00:24:26.535947583 +0000 UTC m=+1.424970821,LastTimestamp:2025-11-01 00:24:26.535947583 +0000 UTC m=+1.424970821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-188,}" Nov 1 00:24:26.581013 kubelet[2185]: I1101 00:24:26.580981 2185 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:24:26.587544 kubelet[2185]: I1101 00:24:26.587490 2185 factory.go:223] Registration of the 
containerd container factory successfully Nov 1 00:24:26.615791 kubelet[2185]: E1101 00:24:26.615732 2185 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:24:26.621343 kubelet[2185]: I1101 00:24:26.621277 2185 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:24:26.621568 kubelet[2185]: I1101 00:24:26.621541 2185 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:24:26.621693 kubelet[2185]: I1101 00:24:26.621674 2185 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:24:26.625850 kubelet[2185]: I1101 00:24:26.625811 2185 policy_none.go:49] "None policy: Start" Nov 1 00:24:26.626056 kubelet[2185]: I1101 00:24:26.626033 2185 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:24:26.626168 kubelet[2185]: I1101 00:24:26.626149 2185 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:24:26.636684 systemd[1]: Created slice kubepods.slice. Nov 1 00:24:26.650579 kubelet[2185]: I1101 00:24:26.650469 2185 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 00:24:26.652619 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 00:24:26.655080 kubelet[2185]: I1101 00:24:26.655008 2185 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 1 00:24:26.655080 kubelet[2185]: I1101 00:24:26.655058 2185 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 00:24:26.655279 kubelet[2185]: I1101 00:24:26.655111 2185 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:24:26.655279 kubelet[2185]: I1101 00:24:26.655128 2185 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 00:24:26.655279 kubelet[2185]: E1101 00:24:26.655222 2185 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:24:26.668534 kubelet[2185]: E1101 00:24:26.668475 2185 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:24:26.679251 kubelet[2185]: E1101 00:24:26.678003 2185 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-188\" not found" Nov 1 00:24:26.678645 systemd[1]: Created slice kubepods-besteffort.slice. Nov 1 00:24:26.686217 kubelet[2185]: E1101 00:24:26.686180 2185 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:24:26.686675 kubelet[2185]: I1101 00:24:26.686653 2185 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:24:26.686840 kubelet[2185]: I1101 00:24:26.686788 2185 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:24:26.690656 kubelet[2185]: I1101 00:24:26.689407 2185 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:24:26.693551 kubelet[2185]: E1101 00:24:26.693494 2185 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:24:26.693823 kubelet[2185]: E1101 00:24:26.693569 2185 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-188\" not found" Nov 1 00:24:26.776643 systemd[1]: Created slice kubepods-burstable-pod433f088130c6808a19dc4d689baa9450.slice. Nov 1 00:24:26.782517 kubelet[2185]: E1101 00:24:26.782471 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-188?timeout=10s\": dial tcp 172.31.20.188:6443: connect: connection refused" interval="400ms" Nov 1 00:24:26.790801 kubelet[2185]: E1101 00:24:26.790760 2185 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-188\" not found" node="ip-172-31-20-188" Nov 1 00:24:26.793568 kubelet[2185]: I1101 00:24:26.792679 2185 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-188" Nov 1 00:24:26.793568 kubelet[2185]: E1101 00:24:26.793394 2185 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.188:6443/api/v1/nodes\": dial tcp 172.31.20.188:6443: connect: connection refused" node="ip-172-31-20-188" Nov 1 00:24:26.796059 systemd[1]: Created slice kubepods-burstable-poddbc629de219a70480b3d7795dbb5c3bc.slice. Nov 1 00:24:26.800977 kubelet[2185]: E1101 00:24:26.800940 2185 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-188\" not found" node="ip-172-31-20-188" Nov 1 00:24:26.813817 systemd[1]: Created slice kubepods-burstable-pod772c2a211013aa09ae2c0ff2c0a6cac4.slice. 
Nov 1 00:24:26.818275 kubelet[2185]: E1101 00:24:26.818217 2185 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-188\" not found" node="ip-172-31-20-188" Nov 1 00:24:26.880319 kubelet[2185]: I1101 00:24:26.880243 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/772c2a211013aa09ae2c0ff2c0a6cac4-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-188\" (UID: \"772c2a211013aa09ae2c0ff2c0a6cac4\") " pod="kube-system/kube-scheduler-ip-172-31-20-188" Nov 1 00:24:26.880605 kubelet[2185]: I1101 00:24:26.880569 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/433f088130c6808a19dc4d689baa9450-ca-certs\") pod \"kube-apiserver-ip-172-31-20-188\" (UID: \"433f088130c6808a19dc4d689baa9450\") " pod="kube-system/kube-apiserver-ip-172-31-20-188" Nov 1 00:24:26.880771 kubelet[2185]: I1101 00:24:26.880743 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/433f088130c6808a19dc4d689baa9450-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-188\" (UID: \"433f088130c6808a19dc4d689baa9450\") " pod="kube-system/kube-apiserver-ip-172-31-20-188" Nov 1 00:24:26.880929 kubelet[2185]: I1101 00:24:26.880899 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/433f088130c6808a19dc4d689baa9450-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-188\" (UID: \"433f088130c6808a19dc4d689baa9450\") " pod="kube-system/kube-apiserver-ip-172-31-20-188" Nov 1 00:24:26.881078 kubelet[2185]: I1101 00:24:26.881052 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dbc629de219a70480b3d7795dbb5c3bc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-188\" (UID: \"dbc629de219a70480b3d7795dbb5c3bc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:26.881238 kubelet[2185]: I1101 00:24:26.881212 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbc629de219a70480b3d7795dbb5c3bc-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-188\" (UID: \"dbc629de219a70480b3d7795dbb5c3bc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:26.881415 kubelet[2185]: I1101 00:24:26.881389 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbc629de219a70480b3d7795dbb5c3bc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-188\" (UID: \"dbc629de219a70480b3d7795dbb5c3bc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:26.881570 kubelet[2185]: I1101 00:24:26.881544 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dbc629de219a70480b3d7795dbb5c3bc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-188\" (UID: \"dbc629de219a70480b3d7795dbb5c3bc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:26.881735 kubelet[2185]: I1101 00:24:26.881708 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbc629de219a70480b3d7795dbb5c3bc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-188\" (UID: \"dbc629de219a70480b3d7795dbb5c3bc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:26.996171 kubelet[2185]: I1101 
00:24:26.996043 2185 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-188" Nov 1 00:24:26.997095 kubelet[2185]: E1101 00:24:26.997054 2185 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.188:6443/api/v1/nodes\": dial tcp 172.31.20.188:6443: connect: connection refused" node="ip-172-31-20-188" Nov 1 00:24:27.093374 env[1666]: time="2025-11-01T00:24:27.092848542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-188,Uid:433f088130c6808a19dc4d689baa9450,Namespace:kube-system,Attempt:0,}" Nov 1 00:24:27.103666 env[1666]: time="2025-11-01T00:24:27.103609494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-188,Uid:dbc629de219a70480b3d7795dbb5c3bc,Namespace:kube-system,Attempt:0,}" Nov 1 00:24:27.121197 env[1666]: time="2025-11-01T00:24:27.121143985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-188,Uid:772c2a211013aa09ae2c0ff2c0a6cac4,Namespace:kube-system,Attempt:0,}" Nov 1 00:24:27.183891 kubelet[2185]: E1101 00:24:27.183780 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-188?timeout=10s\": dial tcp 172.31.20.188:6443: connect: connection refused" interval="800ms" Nov 1 00:24:27.397975 kubelet[2185]: E1101 00:24:27.397827 2185 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:24:27.399904 kubelet[2185]: I1101 00:24:27.399847 2185 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-188" Nov 1 00:24:27.400460 
kubelet[2185]: E1101 00:24:27.400401 2185 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.188:6443/api/v1/nodes\": dial tcp 172.31.20.188:6443: connect: connection refused" node="ip-172-31-20-188" Nov 1 00:24:27.434189 kubelet[2185]: E1101 00:24:27.434121 2185 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:24:27.609380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3182279222.mount: Deactivated successfully. Nov 1 00:24:27.625574 env[1666]: time="2025-11-01T00:24:27.625495941Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:27.628083 env[1666]: time="2025-11-01T00:24:27.627990480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:27.634621 env[1666]: time="2025-11-01T00:24:27.634563055Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:27.637458 env[1666]: time="2025-11-01T00:24:27.637385074Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:27.639360 env[1666]: time="2025-11-01T00:24:27.639263472Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Nov 1 00:24:27.643198 env[1666]: time="2025-11-01T00:24:27.643145944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:27.646977 env[1666]: time="2025-11-01T00:24:27.646911740Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:27.650559 env[1666]: time="2025-11-01T00:24:27.650425056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:27.654486 env[1666]: time="2025-11-01T00:24:27.654429340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:27.661882 env[1666]: time="2025-11-01T00:24:27.661805617Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:27.665625 env[1666]: time="2025-11-01T00:24:27.665547809Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:27.667032 env[1666]: time="2025-11-01T00:24:27.666962142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:27.745948 env[1666]: time="2025-11-01T00:24:27.745802588Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:24:27.746115 env[1666]: time="2025-11-01T00:24:27.745969316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:24:27.746115 env[1666]: time="2025-11-01T00:24:27.746037704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:24:27.746616 env[1666]: time="2025-11-01T00:24:27.746514981Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/776cd904f2c20cc19d688aed77c4a24060b7e1f764bd6f76dc1c81205bbee492 pid=2230 runtime=io.containerd.runc.v2 Nov 1 00:24:27.752139 env[1666]: time="2025-11-01T00:24:27.751975623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:24:27.752468 env[1666]: time="2025-11-01T00:24:27.752069835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:24:27.752662 env[1666]: time="2025-11-01T00:24:27.752419335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:24:27.753143 env[1666]: time="2025-11-01T00:24:27.753023632Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/edcd7427acb9e276a9a724bfd642f8bcc0113c5df09cd60b89c8d3959262fcbd pid=2240 runtime=io.containerd.runc.v2 Nov 1 00:24:27.768054 env[1666]: time="2025-11-01T00:24:27.767874452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:24:27.768461 env[1666]: time="2025-11-01T00:24:27.768018284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:24:27.768769 env[1666]: time="2025-11-01T00:24:27.768413684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:24:27.769519 env[1666]: time="2025-11-01T00:24:27.769399101Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec97c64ef3f69e4bc17b0cf0883a27ac04058a10b3d4ca18a0faa6036c6b53b2 pid=2256 runtime=io.containerd.runc.v2 Nov 1 00:24:27.789544 systemd[1]: Started cri-containerd-776cd904f2c20cc19d688aed77c4a24060b7e1f764bd6f76dc1c81205bbee492.scope. Nov 1 00:24:27.819742 systemd[1]: Started cri-containerd-edcd7427acb9e276a9a724bfd642f8bcc0113c5df09cd60b89c8d3959262fcbd.scope. Nov 1 00:24:27.851913 systemd[1]: Started cri-containerd-ec97c64ef3f69e4bc17b0cf0883a27ac04058a10b3d4ca18a0faa6036c6b53b2.scope. 
Nov 1 00:24:27.859452 kubelet[2185]: E1101 00:24:27.859375 2185 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:24:27.969023 env[1666]: time="2025-11-01T00:24:27.968866054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-188,Uid:dbc629de219a70480b3d7795dbb5c3bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"edcd7427acb9e276a9a724bfd642f8bcc0113c5df09cd60b89c8d3959262fcbd\"" Nov 1 00:24:27.984621 kubelet[2185]: E1101 00:24:27.984556 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-188?timeout=10s\": dial tcp 172.31.20.188:6443: connect: connection refused" interval="1.6s" Nov 1 00:24:27.989836 env[1666]: time="2025-11-01T00:24:27.989765121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-188,Uid:433f088130c6808a19dc4d689baa9450,Namespace:kube-system,Attempt:0,} returns sandbox id \"776cd904f2c20cc19d688aed77c4a24060b7e1f764bd6f76dc1c81205bbee492\"" Nov 1 00:24:27.997756 env[1666]: time="2025-11-01T00:24:27.997701534Z" level=info msg="CreateContainer within sandbox \"edcd7427acb9e276a9a724bfd642f8bcc0113c5df09cd60b89c8d3959262fcbd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:24:28.004140 env[1666]: time="2025-11-01T00:24:28.004066952Z" level=info msg="CreateContainer within sandbox \"776cd904f2c20cc19d688aed77c4a24060b7e1f764bd6f76dc1c81205bbee492\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:24:28.024131 env[1666]: time="2025-11-01T00:24:28.024044201Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-188,Uid:772c2a211013aa09ae2c0ff2c0a6cac4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec97c64ef3f69e4bc17b0cf0883a27ac04058a10b3d4ca18a0faa6036c6b53b2\"" Nov 1 00:24:28.034145 env[1666]: time="2025-11-01T00:24:28.034080003Z" level=info msg="CreateContainer within sandbox \"edcd7427acb9e276a9a724bfd642f8bcc0113c5df09cd60b89c8d3959262fcbd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"15d01210b6fd6aa1f3cc1a9563fd0e5a53c84eb8effa08ca8282b21699ead1ff\"" Nov 1 00:24:28.035851 env[1666]: time="2025-11-01T00:24:28.035798453Z" level=info msg="StartContainer for \"15d01210b6fd6aa1f3cc1a9563fd0e5a53c84eb8effa08ca8282b21699ead1ff\"" Nov 1 00:24:28.037950 env[1666]: time="2025-11-01T00:24:28.037858459Z" level=info msg="CreateContainer within sandbox \"ec97c64ef3f69e4bc17b0cf0883a27ac04058a10b3d4ca18a0faa6036c6b53b2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:24:28.046135 env[1666]: time="2025-11-01T00:24:28.046074687Z" level=info msg="CreateContainer within sandbox \"776cd904f2c20cc19d688aed77c4a24060b7e1f764bd6f76dc1c81205bbee492\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f007e469251ed618803e958f02124753327e697bb059a45631c8b359db9b1b0c\"" Nov 1 00:24:28.048744 env[1666]: time="2025-11-01T00:24:28.048692886Z" level=info msg="StartContainer for \"f007e469251ed618803e958f02124753327e697bb059a45631c8b359db9b1b0c\"" Nov 1 00:24:28.069270 env[1666]: time="2025-11-01T00:24:28.069189543Z" level=info msg="CreateContainer within sandbox \"ec97c64ef3f69e4bc17b0cf0883a27ac04058a10b3d4ca18a0faa6036c6b53b2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d34ccbb596a9938e803c96a906b904dff31f56e5d695714ac167b8c5224dc8b8\"" Nov 1 00:24:28.070211 env[1666]: time="2025-11-01T00:24:28.070151692Z" level=info msg="StartContainer for 
\"d34ccbb596a9938e803c96a906b904dff31f56e5d695714ac167b8c5224dc8b8\"" Nov 1 00:24:28.077811 systemd[1]: Started cri-containerd-15d01210b6fd6aa1f3cc1a9563fd0e5a53c84eb8effa08ca8282b21699ead1ff.scope. Nov 1 00:24:28.090957 kubelet[2185]: E1101 00:24:28.090882 2185 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-188&limit=500&resourceVersion=0\": dial tcp 172.31.20.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:24:28.136598 systemd[1]: Started cri-containerd-f007e469251ed618803e958f02124753327e697bb059a45631c8b359db9b1b0c.scope. Nov 1 00:24:28.146993 systemd[1]: Started cri-containerd-d34ccbb596a9938e803c96a906b904dff31f56e5d695714ac167b8c5224dc8b8.scope. Nov 1 00:24:28.204164 kubelet[2185]: I1101 00:24:28.204097 2185 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-188" Nov 1 00:24:28.206822 kubelet[2185]: E1101 00:24:28.206711 2185 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.188:6443/api/v1/nodes\": dial tcp 172.31.20.188:6443: connect: connection refused" node="ip-172-31-20-188" Nov 1 00:24:28.222839 env[1666]: time="2025-11-01T00:24:28.222712287Z" level=info msg="StartContainer for \"15d01210b6fd6aa1f3cc1a9563fd0e5a53c84eb8effa08ca8282b21699ead1ff\" returns successfully" Nov 1 00:24:28.323380 env[1666]: time="2025-11-01T00:24:28.323306962Z" level=info msg="StartContainer for \"f007e469251ed618803e958f02124753327e697bb059a45631c8b359db9b1b0c\" returns successfully" Nov 1 00:24:28.330850 env[1666]: time="2025-11-01T00:24:28.330785045Z" level=info msg="StartContainer for \"d34ccbb596a9938e803c96a906b904dff31f56e5d695714ac167b8c5224dc8b8\" returns successfully" Nov 1 00:24:28.646882 kubelet[2185]: E1101 00:24:28.646836 2185 certificate_manager.go:596] "Failed while requesting a signed 
certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.188:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:24:28.681277 kubelet[2185]: E1101 00:24:28.681236 2185 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-188\" not found" node="ip-172-31-20-188" Nov 1 00:24:28.688872 kubelet[2185]: E1101 00:24:28.688815 2185 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-188\" not found" node="ip-172-31-20-188" Nov 1 00:24:28.691724 kubelet[2185]: E1101 00:24:28.691690 2185 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-188\" not found" node="ip-172-31-20-188" Nov 1 00:24:29.693996 kubelet[2185]: E1101 00:24:29.693958 2185 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-188\" not found" node="ip-172-31-20-188" Nov 1 00:24:29.695242 kubelet[2185]: E1101 00:24:29.695205 2185 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-188\" not found" node="ip-172-31-20-188" Nov 1 00:24:29.809355 kubelet[2185]: I1101 00:24:29.809318 2185 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-188" Nov 1 00:24:30.695579 kubelet[2185]: E1101 00:24:30.695539 2185 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-188\" not found" node="ip-172-31-20-188" Nov 1 00:24:31.335425 kubelet[2185]: E1101 00:24:31.335377 2185 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-20-188\" not found" node="ip-172-31-20-188" Nov 1 00:24:31.951174 update_engine[1658]: I1101 00:24:31.950371 1658 update_attempter.cc:509] Updating boot flags... Nov 1 00:24:33.912284 kubelet[2185]: E1101 00:24:33.912204 2185 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-188\" not found" node="ip-172-31-20-188" Nov 1 00:24:34.095924 kubelet[2185]: I1101 00:24:34.095875 2185 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-188" Nov 1 00:24:34.096146 kubelet[2185]: E1101 00:24:34.096120 2185 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-20-188\": node \"ip-172-31-20-188\" not found" Nov 1 00:24:34.133464 kubelet[2185]: E1101 00:24:34.133206 2185 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-188.1873ba40c281ad3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-188,UID:ip-172-31-20-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-188,},FirstTimestamp:2025-11-01 00:24:26.535947583 +0000 UTC m=+1.424970821,LastTimestamp:2025-11-01 00:24:26.535947583 +0000 UTC m=+1.424970821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-188,}" Nov 1 00:24:34.177105 kubelet[2185]: I1101 00:24:34.176980 2185 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:34.189572 kubelet[2185]: E1101 00:24:34.189509 2185 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-188\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:34.189754 kubelet[2185]: I1101 00:24:34.189583 2185 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-188" Nov 1 00:24:34.192592 kubelet[2185]: E1101 00:24:34.192533 2185 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-188\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-20-188" Nov 1 00:24:34.192746 kubelet[2185]: I1101 00:24:34.192600 2185 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-188" Nov 1 00:24:34.196344 kubelet[2185]: E1101 00:24:34.196265 2185 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-188\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-188" Nov 1 00:24:34.532627 kubelet[2185]: I1101 00:24:34.532481 2185 apiserver.go:52] "Watching apiserver" Nov 1 00:24:34.578202 kubelet[2185]: I1101 00:24:34.578148 2185 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:24:34.966714 amazon-ssm-agent[1643]: 2025-11-01 00:24:34 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Nov 1 00:24:36.130861 systemd[1]: Reloading. 
Nov 1 00:24:36.297918 /usr/lib/systemd/system-generators/torcx-generator[2591]: time="2025-11-01T00:24:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:24:36.297983 /usr/lib/systemd/system-generators/torcx-generator[2591]: time="2025-11-01T00:24:36Z" level=info msg="torcx already run" Nov 1 00:24:36.490826 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:24:36.491404 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:24:36.533923 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:24:36.799162 kubelet[2185]: I1101 00:24:36.794521 2185 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:36.819137 systemd[1]: Stopping kubelet.service... Nov 1 00:24:36.844245 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:24:36.844672 systemd[1]: Stopped kubelet.service. Nov 1 00:24:36.844759 systemd[1]: kubelet.service: Consumed 2.205s CPU time. Nov 1 00:24:36.848173 systemd[1]: Starting kubelet.service... Nov 1 00:24:37.186831 systemd[1]: Started kubelet.service. Nov 1 00:24:37.305380 kubelet[2651]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:24:37.305911 kubelet[2651]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:24:37.305911 kubelet[2651]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:24:37.305911 kubelet[2651]: I1101 00:24:37.305585 2651 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:24:37.326044 kubelet[2651]: I1101 00:24:37.325969 2651 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 00:24:37.326044 kubelet[2651]: I1101 00:24:37.326025 2651 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:24:37.326624 kubelet[2651]: I1101 00:24:37.326573 2651 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:24:37.329997 kubelet[2651]: I1101 00:24:37.329938 2651 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:24:37.335574 kubelet[2651]: I1101 00:24:37.335508 2651 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:24:37.350087 kubelet[2651]: E1101 00:24:37.349939 2651 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:24:37.350389 kubelet[2651]: I1101 00:24:37.350357 2651 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Nov 1 00:24:37.356114 kubelet[2651]: I1101 00:24:37.356051 2651 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:24:37.356554 sudo[2666]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 00:24:37.357162 sudo[2666]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 00:24:37.357520 kubelet[2651]: I1101 00:24:37.357466 2651 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:24:37.358485 kubelet[2651]: I1101 00:24:37.358142 2651 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-188","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":1000000000
0,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:24:37.361014 kubelet[2651]: I1101 00:24:37.360924 2651 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:24:37.361224 kubelet[2651]: I1101 00:24:37.361198 2651 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 00:24:37.361529 kubelet[2651]: I1101 00:24:37.361498 2651 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:24:37.361929 kubelet[2651]: I1101 00:24:37.361904 2651 kubelet.go:480] "Attempting to sync node with API server" Nov 1 00:24:37.362314 kubelet[2651]: I1101 00:24:37.362265 2651 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:24:37.362526 kubelet[2651]: I1101 00:24:37.362504 2651 kubelet.go:386] "Adding apiserver pod source" Nov 1 00:24:37.362679 kubelet[2651]: I1101 00:24:37.362657 2651 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:24:37.366039 kubelet[2651]: I1101 00:24:37.365999 2651 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:24:37.367268 kubelet[2651]: I1101 00:24:37.367226 2651 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:24:37.374558 kubelet[2651]: I1101 00:24:37.374520 2651 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:24:37.374797 kubelet[2651]: I1101 00:24:37.374776 2651 server.go:1289] "Started kubelet" Nov 1 00:24:37.389210 kubelet[2651]: I1101 00:24:37.389173 2651 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:24:37.417501 kubelet[2651]: I1101 00:24:37.417387 2651 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 
00:24:37.419081 kubelet[2651]: I1101 00:24:37.419008 2651 server.go:317] "Adding debug handlers to kubelet server" Nov 1 00:24:37.432909 kubelet[2651]: I1101 00:24:37.432855 2651 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 1 00:24:37.435874 kubelet[2651]: I1101 00:24:37.435775 2651 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:24:37.436189 kubelet[2651]: I1101 00:24:37.436144 2651 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:24:37.440335 kubelet[2651]: I1101 00:24:37.437203 2651 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:24:37.459820 kubelet[2651]: I1101 00:24:37.459706 2651 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:24:37.463837 kubelet[2651]: E1101 00:24:37.463696 2651 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-188\" not found" Nov 1 00:24:37.468127 kubelet[2651]: I1101 00:24:37.467052 2651 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:24:37.471593 kubelet[2651]: I1101 00:24:37.471563 2651 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:24:37.497424 kubelet[2651]: I1101 00:24:37.497378 2651 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:24:37.497858 kubelet[2651]: I1101 00:24:37.497812 2651 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:24:37.507436 kubelet[2651]: E1101 00:24:37.507377 2651 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:24:37.513962 kubelet[2651]: I1101 00:24:37.513136 2651 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:24:37.558453 kubelet[2651]: I1101 00:24:37.558411 2651 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 00:24:37.558644 kubelet[2651]: I1101 00:24:37.558622 2651 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 00:24:37.558791 kubelet[2651]: I1101 00:24:37.558766 2651 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:24:37.558909 kubelet[2651]: I1101 00:24:37.558890 2651 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 00:24:37.559093 kubelet[2651]: E1101 00:24:37.559063 2651 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:24:37.647118 kubelet[2651]: I1101 00:24:37.647085 2651 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:24:37.647371 kubelet[2651]: I1101 00:24:37.647343 2651 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:24:37.647493 kubelet[2651]: I1101 00:24:37.647473 2651 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:24:37.647859 kubelet[2651]: I1101 00:24:37.647832 2651 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:24:37.648000 kubelet[2651]: I1101 00:24:37.647962 2651 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:24:37.648139 kubelet[2651]: I1101 00:24:37.648119 2651 policy_none.go:49] "None policy: Start" Nov 1 00:24:37.648249 kubelet[2651]: I1101 00:24:37.648229 2651 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:24:37.648412 kubelet[2651]: I1101 00:24:37.648393 2651 state_mem.go:35] "Initializing new in-memory state store" Nov 
1 00:24:37.648709 kubelet[2651]: I1101 00:24:37.648687 2651 state_mem.go:75] "Updated machine memory state" Nov 1 00:24:37.664327 kubelet[2651]: E1101 00:24:37.664268 2651 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:24:37.664783 kubelet[2651]: I1101 00:24:37.664751 2651 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:24:37.664976 kubelet[2651]: I1101 00:24:37.664922 2651 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:24:37.670707 kubelet[2651]: E1101 00:24:37.667064 2651 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:24:37.670926 kubelet[2651]: E1101 00:24:37.670132 2651 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:24:37.673499 kubelet[2651]: I1101 00:24:37.673456 2651 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:24:37.786885 kubelet[2651]: I1101 00:24:37.786758 2651 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-188" Nov 1 00:24:37.805320 kubelet[2651]: I1101 00:24:37.805255 2651 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-188" Nov 1 00:24:37.805468 kubelet[2651]: I1101 00:24:37.805439 2651 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-188" Nov 1 00:24:37.872100 kubelet[2651]: I1101 00:24:37.872040 2651 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-188" Nov 1 00:24:37.879362 kubelet[2651]: I1101 00:24:37.872265 2651 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-188" Nov 1 00:24:37.879362 kubelet[2651]: I1101 00:24:37.872624 2651 kubelet.go:3309] "Creating a mirror pod 
for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:37.879362 kubelet[2651]: I1101 00:24:37.873856 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/433f088130c6808a19dc4d689baa9450-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-188\" (UID: \"433f088130c6808a19dc4d689baa9450\") " pod="kube-system/kube-apiserver-ip-172-31-20-188" Nov 1 00:24:37.879362 kubelet[2651]: I1101 00:24:37.873913 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dbc629de219a70480b3d7795dbb5c3bc-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-188\" (UID: \"dbc629de219a70480b3d7795dbb5c3bc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:37.879362 kubelet[2651]: I1101 00:24:37.874094 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dbc629de219a70480b3d7795dbb5c3bc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-188\" (UID: \"dbc629de219a70480b3d7795dbb5c3bc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:37.879362 kubelet[2651]: I1101 00:24:37.874280 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dbc629de219a70480b3d7795dbb5c3bc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-188\" (UID: \"dbc629de219a70480b3d7795dbb5c3bc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:37.879862 kubelet[2651]: I1101 00:24:37.874470 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/dbc629de219a70480b3d7795dbb5c3bc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-188\" (UID: \"dbc629de219a70480b3d7795dbb5c3bc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:37.879862 kubelet[2651]: I1101 00:24:37.874847 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dbc629de219a70480b3d7795dbb5c3bc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-188\" (UID: \"dbc629de219a70480b3d7795dbb5c3bc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:37.879862 kubelet[2651]: I1101 00:24:37.875095 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/433f088130c6808a19dc4d689baa9450-ca-certs\") pod \"kube-apiserver-ip-172-31-20-188\" (UID: \"433f088130c6808a19dc4d689baa9450\") " pod="kube-system/kube-apiserver-ip-172-31-20-188" Nov 1 00:24:37.879862 kubelet[2651]: I1101 00:24:37.875175 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/433f088130c6808a19dc4d689baa9450-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-188\" (UID: \"433f088130c6808a19dc4d689baa9450\") " pod="kube-system/kube-apiserver-ip-172-31-20-188" Nov 1 00:24:37.890677 kubelet[2651]: E1101 00:24:37.890602 2651 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-188\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-20-188" Nov 1 00:24:37.976276 kubelet[2651]: I1101 00:24:37.976165 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/772c2a211013aa09ae2c0ff2c0a6cac4-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-188\" 
(UID: \"772c2a211013aa09ae2c0ff2c0a6cac4\") " pod="kube-system/kube-scheduler-ip-172-31-20-188" Nov 1 00:24:38.357563 sudo[2666]: pam_unix(sudo:session): session closed for user root Nov 1 00:24:38.364129 kubelet[2651]: I1101 00:24:38.363967 2651 apiserver.go:52] "Watching apiserver" Nov 1 00:24:38.372424 kubelet[2651]: I1101 00:24:38.372333 2651 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:24:38.526710 kubelet[2651]: I1101 00:24:38.526611 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-188" podStartSLOduration=1.52656529 podStartE2EDuration="1.52656529s" podCreationTimestamp="2025-11-01 00:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:24:38.50878872 +0000 UTC m=+1.309025844" watchObservedRunningTime="2025-11-01 00:24:38.52656529 +0000 UTC m=+1.326802414" Nov 1 00:24:38.542338 kubelet[2651]: I1101 00:24:38.542238 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-188" podStartSLOduration=1.542192346 podStartE2EDuration="1.542192346s" podCreationTimestamp="2025-11-01 00:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:24:38.527207098 +0000 UTC m=+1.327444246" watchObservedRunningTime="2025-11-01 00:24:38.542192346 +0000 UTC m=+1.342429458" Nov 1 00:24:38.569967 kubelet[2651]: I1101 00:24:38.569869 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-188" podStartSLOduration=2.569844285 podStartE2EDuration="2.569844285s" podCreationTimestamp="2025-11-01 00:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-01 00:24:38.544542452 +0000 UTC m=+1.344779600" watchObservedRunningTime="2025-11-01 00:24:38.569844285 +0000 UTC m=+1.370081421" Nov 1 00:24:38.615101 kubelet[2651]: I1101 00:24:38.614949 2651 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-188" Nov 1 00:24:38.616161 kubelet[2651]: I1101 00:24:38.616117 2651 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-188" Nov 1 00:24:38.638371 kubelet[2651]: E1101 00:24:38.638324 2651 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-188\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-188" Nov 1 00:24:38.641034 kubelet[2651]: E1101 00:24:38.640970 2651 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-188\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-188" Nov 1 00:24:41.580896 sudo[1912]: pam_unix(sudo:session): session closed for user root Nov 1 00:24:41.603923 sshd[1909]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:41.610086 systemd-logind[1655]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:24:41.612227 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:24:41.612682 systemd[1]: session-5.scope: Consumed 11.693s CPU time. Nov 1 00:24:41.614787 systemd-logind[1655]: Removed session 5. Nov 1 00:24:41.616415 systemd[1]: sshd@4-172.31.20.188:22-147.75.109.163:34966.service: Deactivated successfully. Nov 1 00:24:43.010890 kubelet[2651]: I1101 00:24:43.010854 2651 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:24:43.012462 env[1666]: time="2025-11-01T00:24:43.012391112Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 1 00:24:43.013491 kubelet[2651]: I1101 00:24:43.013445 2651 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:24:44.068094 systemd[1]: Created slice kubepods-besteffort-poda2480938_fc92_4651_8b57_185c541eaa29.slice. Nov 1 00:24:44.074112 kubelet[2651]: I1101 00:24:44.074027 2651 status_manager.go:895] "Failed to get status for pod" podUID="a2480938-fc92-4651-8b57-185c541eaa29" pod="kube-system/kube-proxy-d9pqz" err="pods \"kube-proxy-d9pqz\" is forbidden: User \"system:node:ip-172-31-20-188\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-20-188' and this object" Nov 1 00:24:44.091094 systemd[1]: Created slice kubepods-burstable-podcedf0374_a112_47e2_b08a_c070801d1920.slice. Nov 1 00:24:44.115850 kubelet[2651]: I1101 00:24:44.115801 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-bpf-maps\") pod \"cilium-lc6rb\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.116162 kubelet[2651]: I1101 00:24:44.116118 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-cni-path\") pod \"cilium-lc6rb\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.116373 kubelet[2651]: I1101 00:24:44.116343 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-lib-modules\") pod \"cilium-lc6rb\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.116566 kubelet[2651]: I1101 00:24:44.116529 2651 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2480938-fc92-4651-8b57-185c541eaa29-xtables-lock\") pod \"kube-proxy-d9pqz\" (UID: \"a2480938-fc92-4651-8b57-185c541eaa29\") " pod="kube-system/kube-proxy-d9pqz" Nov 1 00:24:44.116710 kubelet[2651]: I1101 00:24:44.116682 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt987\" (UniqueName: \"kubernetes.io/projected/a2480938-fc92-4651-8b57-185c541eaa29-kube-api-access-gt987\") pod \"kube-proxy-d9pqz\" (UID: \"a2480938-fc92-4651-8b57-185c541eaa29\") " pod="kube-system/kube-proxy-d9pqz" Nov 1 00:24:44.116879 kubelet[2651]: I1101 00:24:44.116854 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-hostproc\") pod \"cilium-lc6rb\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.117037 kubelet[2651]: I1101 00:24:44.117012 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-etc-cni-netd\") pod \"cilium-lc6rb\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.117206 kubelet[2651]: I1101 00:24:44.117179 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cedf0374-a112-47e2-b08a-c070801d1920-cilium-config-path\") pod \"cilium-lc6rb\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.117381 kubelet[2651]: I1101 00:24:44.117356 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-cilium-run\") pod \"cilium-lc6rb\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.117565 kubelet[2651]: I1101 00:24:44.117539 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-cilium-cgroup\") pod \"cilium-lc6rb\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.117719 kubelet[2651]: I1101 00:24:44.117694 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-xtables-lock\") pod \"cilium-lc6rb\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.117878 kubelet[2651]: I1101 00:24:44.117853 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cedf0374-a112-47e2-b08a-c070801d1920-clustermesh-secrets\") pod \"cilium-lc6rb\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.118032 kubelet[2651]: I1101 00:24:44.118007 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-host-proc-sys-kernel\") pod \"cilium-lc6rb\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.118192 kubelet[2651]: I1101 00:24:44.118167 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7czq\" (UniqueName: \"kubernetes.io/projected/cedf0374-a112-47e2-b08a-c070801d1920-kube-api-access-n7czq\") pod \"cilium-lc6rb\" 
(UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.118401 kubelet[2651]: I1101 00:24:44.118345 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a2480938-fc92-4651-8b57-185c541eaa29-kube-proxy\") pod \"kube-proxy-d9pqz\" (UID: \"a2480938-fc92-4651-8b57-185c541eaa29\") " pod="kube-system/kube-proxy-d9pqz" Nov 1 00:24:44.118583 kubelet[2651]: I1101 00:24:44.118540 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2480938-fc92-4651-8b57-185c541eaa29-lib-modules\") pod \"kube-proxy-d9pqz\" (UID: \"a2480938-fc92-4651-8b57-185c541eaa29\") " pod="kube-system/kube-proxy-d9pqz" Nov 1 00:24:44.118757 kubelet[2651]: I1101 00:24:44.118730 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-host-proc-sys-net\") pod \"cilium-lc6rb\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.118976 kubelet[2651]: I1101 00:24:44.118934 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cedf0374-a112-47e2-b08a-c070801d1920-hubble-tls\") pod \"cilium-lc6rb\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") " pod="kube-system/cilium-lc6rb" Nov 1 00:24:44.210230 systemd[1]: Created slice kubepods-besteffort-pod93664fa6_31bd_4e04_9a4c_d8954a2e0702.slice. 
Nov 1 00:24:44.220624 kubelet[2651]: I1101 00:24:44.220557 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncczl\" (UniqueName: \"kubernetes.io/projected/93664fa6-31bd-4e04-9a4c-d8954a2e0702-kube-api-access-ncczl\") pod \"cilium-operator-6c4d7847fc-f5s89\" (UID: \"93664fa6-31bd-4e04-9a4c-d8954a2e0702\") " pod="kube-system/cilium-operator-6c4d7847fc-f5s89" Nov 1 00:24:44.221156 kubelet[2651]: I1101 00:24:44.221116 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93664fa6-31bd-4e04-9a4c-d8954a2e0702-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-f5s89\" (UID: \"93664fa6-31bd-4e04-9a4c-d8954a2e0702\") " pod="kube-system/cilium-operator-6c4d7847fc-f5s89" Nov 1 00:24:44.224761 kubelet[2651]: I1101 00:24:44.224692 2651 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:24:44.380856 env[1666]: time="2025-11-01T00:24:44.380683537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9pqz,Uid:a2480938-fc92-4651-8b57-185c541eaa29,Namespace:kube-system,Attempt:0,}" Nov 1 00:24:44.398786 env[1666]: time="2025-11-01T00:24:44.398724296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lc6rb,Uid:cedf0374-a112-47e2-b08a-c070801d1920,Namespace:kube-system,Attempt:0,}" Nov 1 00:24:44.416431 env[1666]: time="2025-11-01T00:24:44.416272946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:24:44.416431 env[1666]: time="2025-11-01T00:24:44.416382614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:24:44.416763 env[1666]: time="2025-11-01T00:24:44.416691194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:24:44.417492 env[1666]: time="2025-11-01T00:24:44.417380271Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9455f75e46e8d5c5ada08f06a5b50c188c2ea0754233749124adcd3b891b264b pid=2742 runtime=io.containerd.runc.v2 Nov 1 00:24:44.437110 env[1666]: time="2025-11-01T00:24:44.436948582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:24:44.437110 env[1666]: time="2025-11-01T00:24:44.437035330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:24:44.437517 env[1666]: time="2025-11-01T00:24:44.437076826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:24:44.437925 env[1666]: time="2025-11-01T00:24:44.437831302Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5 pid=2761 runtime=io.containerd.runc.v2 Nov 1 00:24:44.454237 systemd[1]: Started cri-containerd-9455f75e46e8d5c5ada08f06a5b50c188c2ea0754233749124adcd3b891b264b.scope. Nov 1 00:24:44.478421 systemd[1]: Started cri-containerd-c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5.scope. 
Nov 1 00:24:44.518363 env[1666]: time="2025-11-01T00:24:44.518269900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f5s89,Uid:93664fa6-31bd-4e04-9a4c-d8954a2e0702,Namespace:kube-system,Attempt:0,}" Nov 1 00:24:44.588245 env[1666]: time="2025-11-01T00:24:44.588186581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9pqz,Uid:a2480938-fc92-4651-8b57-185c541eaa29,Namespace:kube-system,Attempt:0,} returns sandbox id \"9455f75e46e8d5c5ada08f06a5b50c188c2ea0754233749124adcd3b891b264b\"" Nov 1 00:24:44.601177 env[1666]: time="2025-11-01T00:24:44.601115194Z" level=info msg="CreateContainer within sandbox \"9455f75e46e8d5c5ada08f06a5b50c188c2ea0754233749124adcd3b891b264b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:24:44.607479 env[1666]: time="2025-11-01T00:24:44.607408056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lc6rb,Uid:cedf0374-a112-47e2-b08a-c070801d1920,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\"" Nov 1 00:24:44.612979 env[1666]: time="2025-11-01T00:24:44.612919166Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 00:24:44.622287 env[1666]: time="2025-11-01T00:24:44.622123734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:24:44.622621 env[1666]: time="2025-11-01T00:24:44.622360002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:24:44.622621 env[1666]: time="2025-11-01T00:24:44.622428270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:24:44.623054 env[1666]: time="2025-11-01T00:24:44.622972830Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8 pid=2819 runtime=io.containerd.runc.v2 Nov 1 00:24:44.661790 env[1666]: time="2025-11-01T00:24:44.659712811Z" level=info msg="CreateContainer within sandbox \"9455f75e46e8d5c5ada08f06a5b50c188c2ea0754233749124adcd3b891b264b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ca5ce2295a3a6981455f133c099e423ac9601ec8a45e1ecef4217b874581b537\"" Nov 1 00:24:44.664529 env[1666]: time="2025-11-01T00:24:44.664371273Z" level=info msg="StartContainer for \"ca5ce2295a3a6981455f133c099e423ac9601ec8a45e1ecef4217b874581b537\"" Nov 1 00:24:44.672470 systemd[1]: Started cri-containerd-7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8.scope. Nov 1 00:24:44.716004 systemd[1]: Started cri-containerd-ca5ce2295a3a6981455f133c099e423ac9601ec8a45e1ecef4217b874581b537.scope. 
Nov 1 00:24:44.805736 env[1666]: time="2025-11-01T00:24:44.802598988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f5s89,Uid:93664fa6-31bd-4e04-9a4c-d8954a2e0702,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8\"" Nov 1 00:24:44.837042 env[1666]: time="2025-11-01T00:24:44.836976576Z" level=info msg="StartContainer for \"ca5ce2295a3a6981455f133c099e423ac9601ec8a45e1ecef4217b874581b537\" returns successfully" Nov 1 00:24:45.658741 kubelet[2651]: I1101 00:24:45.658642 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d9pqz" podStartSLOduration=1.658622898 podStartE2EDuration="1.658622898s" podCreationTimestamp="2025-11-01 00:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:24:45.65860455 +0000 UTC m=+8.458841662" watchObservedRunningTime="2025-11-01 00:24:45.658622898 +0000 UTC m=+8.458860022" Nov 1 00:24:51.445919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2194570224.mount: Deactivated successfully. 
Nov 1 00:24:55.544703 env[1666]: time="2025-11-01T00:24:55.544637705Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:55.548991 env[1666]: time="2025-11-01T00:24:55.548935806Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:55.552736 env[1666]: time="2025-11-01T00:24:55.552660811Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:55.554318 env[1666]: time="2025-11-01T00:24:55.554218699Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 1 00:24:55.559847 env[1666]: time="2025-11-01T00:24:55.559248344Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 00:24:55.567141 env[1666]: time="2025-11-01T00:24:55.567085389Z" level=info msg="CreateContainer within sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:24:55.591907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3679005963.mount: Deactivated successfully. 
Nov 1 00:24:55.610939 env[1666]: time="2025-11-01T00:24:55.610876409Z" level=info msg="CreateContainer within sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a\"" Nov 1 00:24:55.612229 env[1666]: time="2025-11-01T00:24:55.612173430Z" level=info msg="StartContainer for \"d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a\"" Nov 1 00:24:55.651228 systemd[1]: Started cri-containerd-d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a.scope. Nov 1 00:24:55.745746 env[1666]: time="2025-11-01T00:24:55.745682346Z" level=info msg="StartContainer for \"d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a\" returns successfully" Nov 1 00:24:55.770925 systemd[1]: cri-containerd-d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a.scope: Deactivated successfully. Nov 1 00:24:56.586551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a-rootfs.mount: Deactivated successfully. 
Nov 1 00:24:56.770927 env[1666]: time="2025-11-01T00:24:56.770855700Z" level=info msg="shim disconnected" id=d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a Nov 1 00:24:56.771558 env[1666]: time="2025-11-01T00:24:56.770927508Z" level=warning msg="cleaning up after shim disconnected" id=d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a namespace=k8s.io Nov 1 00:24:56.771558 env[1666]: time="2025-11-01T00:24:56.770950428Z" level=info msg="cleaning up dead shim" Nov 1 00:24:56.787391 env[1666]: time="2025-11-01T00:24:56.787260855Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3083 runtime=io.containerd.runc.v2\n" Nov 1 00:24:57.648540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2700692813.mount: Deactivated successfully. Nov 1 00:24:57.698930 env[1666]: time="2025-11-01T00:24:57.698846764Z" level=info msg="CreateContainer within sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:24:57.729760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151240785.mount: Deactivated successfully. Nov 1 00:24:57.744816 env[1666]: time="2025-11-01T00:24:57.744731688Z" level=info msg="CreateContainer within sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85\"" Nov 1 00:24:57.763943 env[1666]: time="2025-11-01T00:24:57.763022775Z" level=info msg="StartContainer for \"da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85\"" Nov 1 00:24:57.843129 systemd[1]: Started cri-containerd-da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85.scope. 
Nov 1 00:24:57.917941 env[1666]: time="2025-11-01T00:24:57.917202639Z" level=info msg="StartContainer for \"da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85\" returns successfully" Nov 1 00:24:57.951244 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:24:57.953034 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:24:57.953364 systemd[1]: Stopping systemd-sysctl.service... Nov 1 00:24:57.961535 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:24:57.962392 systemd[1]: cri-containerd-da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85.scope: Deactivated successfully. Nov 1 00:24:57.981551 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:24:58.023259 env[1666]: time="2025-11-01T00:24:58.023183344Z" level=info msg="shim disconnected" id=da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85 Nov 1 00:24:58.023259 env[1666]: time="2025-11-01T00:24:58.023255224Z" level=warning msg="cleaning up after shim disconnected" id=da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85 namespace=k8s.io Nov 1 00:24:58.023719 env[1666]: time="2025-11-01T00:24:58.023277304Z" level=info msg="cleaning up dead shim" Nov 1 00:24:58.053944 env[1666]: time="2025-11-01T00:24:58.053864829Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3148 runtime=io.containerd.runc.v2\n" Nov 1 00:24:58.713172 env[1666]: time="2025-11-01T00:24:58.713073181Z" level=info msg="CreateContainer within sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:24:58.752472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1599445188.mount: Deactivated successfully. 
Nov 1 00:24:58.770600 env[1666]: time="2025-11-01T00:24:58.770516565Z" level=info msg="CreateContainer within sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001\"" Nov 1 00:24:58.772502 env[1666]: time="2025-11-01T00:24:58.772275766Z" level=info msg="StartContainer for \"d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001\"" Nov 1 00:24:58.809777 env[1666]: time="2025-11-01T00:24:58.809717403Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:58.812541 env[1666]: time="2025-11-01T00:24:58.812486968Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:58.816088 env[1666]: time="2025-11-01T00:24:58.816031504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:24:58.820257 env[1666]: time="2025-11-01T00:24:58.820171637Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 1 00:24:58.828106 env[1666]: time="2025-11-01T00:24:58.828042234Z" level=info msg="CreateContainer within sandbox \"7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Nov 1 00:24:58.834837 systemd[1]: Started cri-containerd-d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001.scope. Nov 1 00:24:58.885893 env[1666]: time="2025-11-01T00:24:58.885756855Z" level=info msg="CreateContainer within sandbox \"7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\"" Nov 1 00:24:58.888075 env[1666]: time="2025-11-01T00:24:58.887978055Z" level=info msg="StartContainer for \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\"" Nov 1 00:24:58.930723 env[1666]: time="2025-11-01T00:24:58.930642922Z" level=info msg="StartContainer for \"d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001\" returns successfully" Nov 1 00:24:58.941622 systemd[1]: cri-containerd-d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001.scope: Deactivated successfully. Nov 1 00:24:58.948927 systemd[1]: Started cri-containerd-b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317.scope.
Nov 1 00:24:59.051714 env[1666]: time="2025-11-01T00:24:59.051638591Z" level=info msg="StartContainer for \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\" returns successfully" Nov 1 00:24:59.065820 env[1666]: time="2025-11-01T00:24:59.065756833Z" level=info msg="shim disconnected" id=d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001 Nov 1 00:24:59.066136 env[1666]: time="2025-11-01T00:24:59.066103033Z" level=warning msg="cleaning up after shim disconnected" id=d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001 namespace=k8s.io Nov 1 00:24:59.066252 env[1666]: time="2025-11-01T00:24:59.066224473Z" level=info msg="cleaning up dead shim" Nov 1 00:24:59.082470 env[1666]: time="2025-11-01T00:24:59.082389292Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3242 runtime=io.containerd.runc.v2\n" Nov 1 00:24:59.631005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001-rootfs.mount: Deactivated successfully. 
Nov 1 00:24:59.709961 env[1666]: time="2025-11-01T00:24:59.709861065Z" level=info msg="CreateContainer within sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:24:59.769011 env[1666]: time="2025-11-01T00:24:59.768940589Z" level=info msg="CreateContainer within sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e\"" Nov 1 00:24:59.770221 env[1666]: time="2025-11-01T00:24:59.770162046Z" level=info msg="StartContainer for \"c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e\"" Nov 1 00:24:59.840857 systemd[1]: Started cri-containerd-c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e.scope. Nov 1 00:25:00.057896 env[1666]: time="2025-11-01T00:25:00.057811290Z" level=info msg="StartContainer for \"c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e\" returns successfully" Nov 1 00:25:00.072175 systemd[1]: cri-containerd-c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e.scope: Deactivated successfully. 
Nov 1 00:25:00.134955 env[1666]: time="2025-11-01T00:25:00.134872468Z" level=info msg="shim disconnected" id=c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e Nov 1 00:25:00.134955 env[1666]: time="2025-11-01T00:25:00.134948848Z" level=warning msg="cleaning up after shim disconnected" id=c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e namespace=k8s.io Nov 1 00:25:00.135408 env[1666]: time="2025-11-01T00:25:00.134971360Z" level=info msg="cleaning up dead shim" Nov 1 00:25:00.163122 env[1666]: time="2025-11-01T00:25:00.161670176Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:25:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3302 runtime=io.containerd.runc.v2\n" Nov 1 00:25:00.629427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e-rootfs.mount: Deactivated successfully. Nov 1 00:25:00.727825 env[1666]: time="2025-11-01T00:25:00.727750535Z" level=info msg="CreateContainer within sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:25:00.770191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3981958660.mount: Deactivated successfully. 
Nov 1 00:25:00.777494 env[1666]: time="2025-11-01T00:25:00.777421110Z" level=info msg="CreateContainer within sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\"" Nov 1 00:25:00.786415 env[1666]: time="2025-11-01T00:25:00.786351547Z" level=info msg="StartContainer for \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\"" Nov 1 00:25:00.796804 kubelet[2651]: I1101 00:25:00.796682 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-f5s89" podStartSLOduration=2.781979609 podStartE2EDuration="16.796658349s" podCreationTimestamp="2025-11-01 00:24:44 +0000 UTC" firstStartedPulling="2025-11-01 00:24:44.806845849 +0000 UTC m=+7.607082961" lastFinishedPulling="2025-11-01 00:24:58.821524589 +0000 UTC m=+21.621761701" observedRunningTime="2025-11-01 00:25:00.053855561 +0000 UTC m=+22.854092685" watchObservedRunningTime="2025-11-01 00:25:00.796658349 +0000 UTC m=+23.596895461" Nov 1 00:25:00.833812 systemd[1]: Started cri-containerd-d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1.scope. Nov 1 00:25:00.934704 env[1666]: time="2025-11-01T00:25:00.934580451Z" level=info msg="StartContainer for \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\" returns successfully" Nov 1 00:25:01.172104 kubelet[2651]: I1101 00:25:01.171830 2651 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:25:01.227048 systemd[1]: Created slice kubepods-burstable-pod0f3f69dd_e50d_45e5_ac7e_7a7b53f244c4.slice. Nov 1 00:25:01.239439 systemd[1]: Created slice kubepods-burstable-pode40362fe_e7ba_48f1_a194_4359eea8c9a9.slice. 
Nov 1 00:25:01.261538 kubelet[2651]: I1101 00:25:01.261456 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e40362fe-e7ba-48f1-a194-4359eea8c9a9-config-volume\") pod \"coredns-674b8bbfcf-wczgv\" (UID: \"e40362fe-e7ba-48f1-a194-4359eea8c9a9\") " pod="kube-system/coredns-674b8bbfcf-wczgv" Nov 1 00:25:01.261538 kubelet[2651]: I1101 00:25:01.261532 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f3f69dd-e50d-45e5-ac7e-7a7b53f244c4-config-volume\") pod \"coredns-674b8bbfcf-f9gzf\" (UID: \"0f3f69dd-e50d-45e5-ac7e-7a7b53f244c4\") " pod="kube-system/coredns-674b8bbfcf-f9gzf" Nov 1 00:25:01.261818 kubelet[2651]: I1101 00:25:01.261578 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkjr9\" (UniqueName: \"kubernetes.io/projected/e40362fe-e7ba-48f1-a194-4359eea8c9a9-kube-api-access-kkjr9\") pod \"coredns-674b8bbfcf-wczgv\" (UID: \"e40362fe-e7ba-48f1-a194-4359eea8c9a9\") " pod="kube-system/coredns-674b8bbfcf-wczgv" Nov 1 00:25:01.261818 kubelet[2651]: I1101 00:25:01.261617 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5hcb\" (UniqueName: \"kubernetes.io/projected/0f3f69dd-e50d-45e5-ac7e-7a7b53f244c4-kube-api-access-x5hcb\") pod \"coredns-674b8bbfcf-f9gzf\" (UID: \"0f3f69dd-e50d-45e5-ac7e-7a7b53f244c4\") " pod="kube-system/coredns-674b8bbfcf-f9gzf" Nov 1 00:25:01.316347 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Nov 1 00:25:01.537107 env[1666]: time="2025-11-01T00:25:01.536937735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f9gzf,Uid:0f3f69dd-e50d-45e5-ac7e-7a7b53f244c4,Namespace:kube-system,Attempt:0,}" Nov 1 00:25:01.547786 env[1666]: time="2025-11-01T00:25:01.547720625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wczgv,Uid:e40362fe-e7ba-48f1-a194-4359eea8c9a9,Namespace:kube-system,Attempt:0,}" Nov 1 00:25:02.219348 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Nov 1 00:25:04.041198 (udev-worker)[3480]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:25:04.044074 systemd-networkd[1386]: cilium_host: Link UP Nov 1 00:25:04.047217 (udev-worker)[3440]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:25:04.055794 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Nov 1 00:25:04.055927 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Nov 1 00:25:04.057361 systemd-networkd[1386]: cilium_net: Link UP Nov 1 00:25:04.058936 systemd-networkd[1386]: cilium_net: Gained carrier Nov 1 00:25:04.060969 systemd-networkd[1386]: cilium_host: Gained carrier Nov 1 00:25:04.223464 (udev-worker)[3490]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:25:04.233454 systemd-networkd[1386]: cilium_vxlan: Link UP Nov 1 00:25:04.233475 systemd-networkd[1386]: cilium_vxlan: Gained carrier Nov 1 00:25:04.394600 systemd-networkd[1386]: cilium_net: Gained IPv6LL Nov 1 00:25:04.426551 systemd-networkd[1386]: cilium_host: Gained IPv6LL Nov 1 00:25:04.824333 kernel: NET: Registered PF_ALG protocol family Nov 1 00:25:05.338486 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL Nov 1 00:25:06.220064 (udev-worker)[3491]: Network interface NamePolicy= disabled on kernel command line. 
Nov 1 00:25:06.231552 systemd-networkd[1386]: lxc_health: Link UP Nov 1 00:25:06.248352 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:25:06.252423 systemd-networkd[1386]: lxc_health: Gained carrier Nov 1 00:25:06.445617 kubelet[2651]: I1101 00:25:06.445351 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lc6rb" podStartSLOduration=11.49789558 podStartE2EDuration="22.445328314s" podCreationTimestamp="2025-11-01 00:24:44 +0000 UTC" firstStartedPulling="2025-11-01 00:24:44.609652465 +0000 UTC m=+7.409889577" lastFinishedPulling="2025-11-01 00:24:55.557085199 +0000 UTC m=+18.357322311" observedRunningTime="2025-11-01 00:25:01.825001071 +0000 UTC m=+24.625238183" watchObservedRunningTime="2025-11-01 00:25:06.445328314 +0000 UTC m=+29.245565462" Nov 1 00:25:06.700731 systemd-networkd[1386]: lxc47bc439eea8e: Link UP Nov 1 00:25:06.702825 systemd-networkd[1386]: lxcaeb7a3f47bc4: Link UP Nov 1 00:25:06.705402 kernel: eth0: renamed from tmp2608c Nov 1 00:25:06.730335 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc47bc439eea8e: link becomes ready Nov 1 00:25:06.727698 systemd-networkd[1386]: lxc47bc439eea8e: Gained carrier Nov 1 00:25:06.738415 kernel: eth0: renamed from tmp52af7 Nov 1 00:25:06.742490 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcaeb7a3f47bc4: link becomes ready Nov 1 00:25:06.742550 systemd-networkd[1386]: lxcaeb7a3f47bc4: Gained carrier Nov 1 00:25:07.642591 systemd-networkd[1386]: lxc_health: Gained IPv6LL Nov 1 00:25:08.091409 systemd-networkd[1386]: lxc47bc439eea8e: Gained IPv6LL Nov 1 00:25:08.218459 systemd-networkd[1386]: lxcaeb7a3f47bc4: Gained IPv6LL
Nov 1 00:25:15.286617 env[1666]: time="2025-11-01T00:25:15.285677828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:15.286617 env[1666]: time="2025-11-01T00:25:15.285763772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:15.286617 env[1666]: time="2025-11-01T00:25:15.285791456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:25:15.287363 env[1666]: time="2025-11-01T00:25:15.286786748Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2608c21ad478ae3a3273831aa20b2a990c2e80f5d92008e37de10ccb5d7f58fd pid=3855 runtime=io.containerd.runc.v2 Nov 1 00:25:15.365045 systemd[1]: run-containerd-runc-k8s.io-2608c21ad478ae3a3273831aa20b2a990c2e80f5d92008e37de10ccb5d7f58fd-runc.pHuixb.mount: Deactivated successfully. Nov 1 00:25:15.376218 systemd[1]: Started cri-containerd-2608c21ad478ae3a3273831aa20b2a990c2e80f5d92008e37de10ccb5d7f58fd.scope. Nov 1 00:25:15.400398 env[1666]: time="2025-11-01T00:25:15.398429966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:25:15.400398 env[1666]: time="2025-11-01T00:25:15.398517602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:25:15.400398 env[1666]: time="2025-11-01T00:25:15.398545430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:25:15.400398 env[1666]: time="2025-11-01T00:25:15.399081278Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52af7a3ee117d10d55461969f7ead35e17c1e06f782575377101ef2afa38837c pid=3880 runtime=io.containerd.runc.v2 Nov 1 00:25:15.438343 systemd[1]: Started cri-containerd-52af7a3ee117d10d55461969f7ead35e17c1e06f782575377101ef2afa38837c.scope. Nov 1 00:25:15.573685 env[1666]: time="2025-11-01T00:25:15.573602220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wczgv,Uid:e40362fe-e7ba-48f1-a194-4359eea8c9a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2608c21ad478ae3a3273831aa20b2a990c2e80f5d92008e37de10ccb5d7f58fd\"" Nov 1 00:25:15.585277 env[1666]: time="2025-11-01T00:25:15.585208440Z" level=info msg="CreateContainer within sandbox \"2608c21ad478ae3a3273831aa20b2a990c2e80f5d92008e37de10ccb5d7f58fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:25:15.619015 env[1666]: time="2025-11-01T00:25:15.618934154Z" level=info msg="CreateContainer within sandbox \"2608c21ad478ae3a3273831aa20b2a990c2e80f5d92008e37de10ccb5d7f58fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c62fe688c5a4a6014b4d15b7dad2fbd6ebcdaf69792c27f6d06c7b07d0df9074\"" Nov 1 00:25:15.621085 env[1666]: time="2025-11-01T00:25:15.619862414Z" level=info msg="StartContainer for \"c62fe688c5a4a6014b4d15b7dad2fbd6ebcdaf69792c27f6d06c7b07d0df9074\"" Nov 1 00:25:15.631456 env[1666]: time="2025-11-01T00:25:15.631375683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f9gzf,Uid:0f3f69dd-e50d-45e5-ac7e-7a7b53f244c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"52af7a3ee117d10d55461969f7ead35e17c1e06f782575377101ef2afa38837c\"" Nov 1 00:25:15.645666 env[1666]: time="2025-11-01T00:25:15.645597976Z" level=info msg="CreateContainer within sandbox \"52af7a3ee117d10d55461969f7ead35e17c1e06f782575377101ef2afa38837c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 1 00:25:15.693740 systemd[1]: Started cri-containerd-c62fe688c5a4a6014b4d15b7dad2fbd6ebcdaf69792c27f6d06c7b07d0df9074.scope. Nov 1 00:25:15.696799 env[1666]: time="2025-11-01T00:25:15.693624066Z" level=info msg="CreateContainer within sandbox \"52af7a3ee117d10d55461969f7ead35e17c1e06f782575377101ef2afa38837c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b9b24ef72f6a49938e59754395e6258fe6dffc622848db5b9dd35487d27c333\"" Nov 1 00:25:15.696894 env[1666]: time="2025-11-01T00:25:15.696801210Z" level=info msg="StartContainer for \"6b9b24ef72f6a49938e59754395e6258fe6dffc622848db5b9dd35487d27c333\"" Nov 1 00:25:15.746946 systemd[1]: Started cri-containerd-6b9b24ef72f6a49938e59754395e6258fe6dffc622848db5b9dd35487d27c333.scope. Nov 1 00:25:15.820420 env[1666]: time="2025-11-01T00:25:15.820238617Z" level=info msg="StartContainer for \"c62fe688c5a4a6014b4d15b7dad2fbd6ebcdaf69792c27f6d06c7b07d0df9074\" returns successfully" Nov 1 00:25:15.848139 env[1666]: time="2025-11-01T00:25:15.847965194Z" level=info msg="StartContainer for \"6b9b24ef72f6a49938e59754395e6258fe6dffc622848db5b9dd35487d27c333\" returns successfully" Nov 1 00:25:16.295685 systemd[1]: run-containerd-runc-k8s.io-52af7a3ee117d10d55461969f7ead35e17c1e06f782575377101ef2afa38837c-runc.oqwi6c.mount: Deactivated successfully.
Nov 1 00:25:16.858174 kubelet[2651]: I1101 00:25:16.858097 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wczgv" podStartSLOduration=32.85807721 podStartE2EDuration="32.85807721s" podCreationTimestamp="2025-11-01 00:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:25:16.826545853 +0000 UTC m=+39.626782989" watchObservedRunningTime="2025-11-01 00:25:16.85807721 +0000 UTC m=+39.658314322" Nov 1 00:25:16.859383 kubelet[2651]: I1101 00:25:16.859323 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f9gzf" podStartSLOduration=32.85928567 podStartE2EDuration="32.85928567s" podCreationTimestamp="2025-11-01 00:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:25:16.85354445 +0000 UTC m=+39.653781574" watchObservedRunningTime="2025-11-01 00:25:16.85928567 +0000 UTC m=+39.659522794" Nov 1 00:25:23.521035 systemd[1]: Started sshd@5-172.31.20.188:22-147.75.109.163:40394.service. Nov 1 00:25:23.691262 sshd[4010]: Accepted publickey for core from 147.75.109.163 port 40394 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:25:23.694546 sshd[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:25:23.703234 systemd-logind[1655]: New session 6 of user core. Nov 1 00:25:23.703647 systemd[1]: Started session-6.scope. Nov 1 00:25:23.976668 sshd[4010]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:23.981621 systemd-logind[1655]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:25:23.982234 systemd[1]: sshd@5-172.31.20.188:22-147.75.109.163:40394.service: Deactivated successfully. Nov 1 00:25:23.983631 systemd[1]: session-6.scope: Deactivated successfully. 
Nov 1 00:25:23.985446 systemd-logind[1655]: Removed session 6. Nov 1 00:25:29.006105 systemd[1]: Started sshd@6-172.31.20.188:22-147.75.109.163:40404.service. Nov 1 00:25:29.175919 sshd[4024]: Accepted publickey for core from 147.75.109.163 port 40404 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:25:29.178696 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:25:29.187902 systemd[1]: Started session-7.scope. Nov 1 00:25:29.189190 systemd-logind[1655]: New session 7 of user core. Nov 1 00:25:29.439643 sshd[4024]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:29.444555 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:25:29.445749 systemd[1]: sshd@6-172.31.20.188:22-147.75.109.163:40404.service: Deactivated successfully. Nov 1 00:25:29.447688 systemd-logind[1655]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:25:29.450232 systemd-logind[1655]: Removed session 7. Nov 1 00:25:34.467175 systemd[1]: Started sshd@7-172.31.20.188:22-147.75.109.163:55936.service. Nov 1 00:25:34.634487 sshd[4037]: Accepted publickey for core from 147.75.109.163 port 55936 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:25:34.637170 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:25:34.646562 systemd-logind[1655]: New session 8 of user core. Nov 1 00:25:34.647567 systemd[1]: Started session-8.scope. Nov 1 00:25:34.893517 sshd[4037]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:34.899036 systemd-logind[1655]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:25:34.900429 systemd[1]: sshd@7-172.31.20.188:22-147.75.109.163:55936.service: Deactivated successfully. Nov 1 00:25:34.901784 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:25:34.904110 systemd-logind[1655]: Removed session 8. 
Nov 1 00:25:39.924051 systemd[1]: Started sshd@8-172.31.20.188:22-147.75.109.163:55940.service. Nov 1 00:25:40.097391 sshd[4056]: Accepted publickey for core from 147.75.109.163 port 55940 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:25:40.100380 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:25:40.108887 systemd-logind[1655]: New session 9 of user core. Nov 1 00:25:40.110947 systemd[1]: Started session-9.scope. Nov 1 00:25:40.376050 sshd[4056]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:40.382083 systemd[1]: sshd@8-172.31.20.188:22-147.75.109.163:55940.service: Deactivated successfully. Nov 1 00:25:40.383684 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:25:40.385582 systemd-logind[1655]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:25:40.387229 systemd-logind[1655]: Removed session 9. Nov 1 00:25:45.408945 systemd[1]: Started sshd@9-172.31.20.188:22-147.75.109.163:38130.service. Nov 1 00:25:45.587516 sshd[4072]: Accepted publickey for core from 147.75.109.163 port 38130 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:25:45.588673 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:25:45.597714 systemd[1]: Started session-10.scope. Nov 1 00:25:45.598527 systemd-logind[1655]: New session 10 of user core. Nov 1 00:25:45.858694 sshd[4072]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:45.867898 systemd[1]: sshd@9-172.31.20.188:22-147.75.109.163:38130.service: Deactivated successfully. Nov 1 00:25:45.869262 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:25:45.870174 systemd-logind[1655]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:25:45.872083 systemd-logind[1655]: Removed session 10. Nov 1 00:25:50.887352 systemd[1]: Started sshd@10-172.31.20.188:22-147.75.109.163:59464.service. 
Nov 1 00:25:51.059024 sshd[4085]: Accepted publickey for core from 147.75.109.163 port 59464 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:25:51.061718 sshd[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:25:51.071368 systemd[1]: Started session-11.scope. Nov 1 00:25:51.072177 systemd-logind[1655]: New session 11 of user core. Nov 1 00:25:51.324225 sshd[4085]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:51.330094 systemd-logind[1655]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:25:51.331384 systemd[1]: sshd@10-172.31.20.188:22-147.75.109.163:59464.service: Deactivated successfully. Nov 1 00:25:51.332764 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:25:51.334426 systemd-logind[1655]: Removed session 11. Nov 1 00:25:56.354556 systemd[1]: Started sshd@11-172.31.20.188:22-147.75.109.163:59472.service. Nov 1 00:25:56.522355 sshd[4097]: Accepted publickey for core from 147.75.109.163 port 59472 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:25:56.525187 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:25:56.534530 systemd[1]: Started session-12.scope. Nov 1 00:25:56.535520 systemd-logind[1655]: New session 12 of user core. Nov 1 00:25:56.786155 sshd[4097]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:56.791763 systemd[1]: sshd@11-172.31.20.188:22-147.75.109.163:59472.service: Deactivated successfully. Nov 1 00:25:56.793091 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:25:56.794232 systemd-logind[1655]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:25:56.796213 systemd-logind[1655]: Removed session 12. Nov 1 00:25:56.816188 systemd[1]: Started sshd@12-172.31.20.188:22-147.75.109.163:59478.service. 
Nov 1 00:25:56.989909 sshd[4110]: Accepted publickey for core from 147.75.109.163 port 59478 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:25:56.992517 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:25:57.001763 systemd[1]: Started session-13.scope. Nov 1 00:25:57.002821 systemd-logind[1655]: New session 13 of user core. Nov 1 00:25:57.351460 sshd[4110]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:57.357673 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:25:57.359373 systemd-logind[1655]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:25:57.359767 systemd[1]: sshd@12-172.31.20.188:22-147.75.109.163:59478.service: Deactivated successfully. Nov 1 00:25:57.364678 systemd-logind[1655]: Removed session 13. Nov 1 00:25:57.391635 systemd[1]: Started sshd@13-172.31.20.188:22-147.75.109.163:59490.service. Nov 1 00:25:57.563753 sshd[4119]: Accepted publickey for core from 147.75.109.163 port 59490 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:25:57.566683 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:25:57.575746 systemd[1]: Started session-14.scope. Nov 1 00:25:57.575984 systemd-logind[1655]: New session 14 of user core. Nov 1 00:25:57.854660 sshd[4119]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:57.860357 systemd[1]: sshd@13-172.31.20.188:22-147.75.109.163:59490.service: Deactivated successfully. Nov 1 00:25:57.861736 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:25:57.863077 systemd-logind[1655]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:25:57.866024 systemd-logind[1655]: Removed session 14. Nov 1 00:26:02.881620 systemd[1]: Started sshd@14-172.31.20.188:22-147.75.109.163:40456.service. 
Nov 1 00:26:03.051660 sshd[4132]: Accepted publickey for core from 147.75.109.163 port 40456 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk
Nov 1 00:26:03.054761 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:26:03.063154 systemd-logind[1655]: New session 15 of user core.
Nov 1 00:26:03.065813 systemd[1]: Started session-15.scope.
Nov 1 00:26:03.321213 sshd[4132]: pam_unix(sshd:session): session closed for user core
Nov 1 00:26:03.326722 systemd-logind[1655]: Session 15 logged out. Waiting for processes to exit.
Nov 1 00:26:03.327251 systemd[1]: sshd@14-172.31.20.188:22-147.75.109.163:40456.service: Deactivated successfully.
Nov 1 00:26:03.328678 systemd[1]: session-15.scope: Deactivated successfully.
Nov 1 00:26:03.330784 systemd-logind[1655]: Removed session 15.
Nov 1 00:26:08.351147 systemd[1]: Started sshd@15-172.31.20.188:22-147.75.109.163:40472.service.
Nov 1 00:26:08.522082 sshd[4144]: Accepted publickey for core from 147.75.109.163 port 40472 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk
Nov 1 00:26:08.525479 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:26:08.534583 systemd-logind[1655]: New session 16 of user core.
Nov 1 00:26:08.534725 systemd[1]: Started session-16.scope.
Nov 1 00:26:08.787008 sshd[4144]: pam_unix(sshd:session): session closed for user core
Nov 1 00:26:08.792617 systemd[1]: session-16.scope: Deactivated successfully.
Nov 1 00:26:08.795003 systemd-logind[1655]: Session 16 logged out. Waiting for processes to exit.
Nov 1 00:26:08.795603 systemd[1]: sshd@15-172.31.20.188:22-147.75.109.163:40472.service: Deactivated successfully.
Nov 1 00:26:08.798188 systemd-logind[1655]: Removed session 16.
Nov 1 00:26:13.816104 systemd[1]: Started sshd@16-172.31.20.188:22-147.75.109.163:51858.service.
Nov 1 00:26:13.985949 sshd[4157]: Accepted publickey for core from 147.75.109.163 port 51858 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk
Nov 1 00:26:13.988526 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:26:13.996386 systemd-logind[1655]: New session 17 of user core.
Nov 1 00:26:13.997720 systemd[1]: Started session-17.scope.
Nov 1 00:26:14.261279 sshd[4157]: pam_unix(sshd:session): session closed for user core
Nov 1 00:26:14.267023 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 00:26:14.268251 systemd[1]: sshd@16-172.31.20.188:22-147.75.109.163:51858.service: Deactivated successfully.
Nov 1 00:26:14.270051 systemd-logind[1655]: Session 17 logged out. Waiting for processes to exit.
Nov 1 00:26:14.272336 systemd-logind[1655]: Removed session 17.
Nov 1 00:26:19.290795 systemd[1]: Started sshd@17-172.31.20.188:22-147.75.109.163:51862.service.
Nov 1 00:26:19.461139 sshd[4171]: Accepted publickey for core from 147.75.109.163 port 51862 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk
Nov 1 00:26:19.469078 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:26:19.481358 systemd[1]: Started session-18.scope.
Nov 1 00:26:19.482357 systemd-logind[1655]: New session 18 of user core.
Nov 1 00:26:19.726727 sshd[4171]: pam_unix(sshd:session): session closed for user core
Nov 1 00:26:19.732474 systemd[1]: sshd@17-172.31.20.188:22-147.75.109.163:51862.service: Deactivated successfully.
Nov 1 00:26:19.734401 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 00:26:19.736545 systemd-logind[1655]: Session 18 logged out. Waiting for processes to exit.
Nov 1 00:26:19.739632 systemd-logind[1655]: Removed session 18.
Nov 1 00:26:19.757754 systemd[1]: Started sshd@18-172.31.20.188:22-147.75.109.163:51878.service.
Nov 1 00:26:19.928034 sshd[4183]: Accepted publickey for core from 147.75.109.163 port 51878 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk
Nov 1 00:26:19.930695 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:26:19.940711 systemd-logind[1655]: New session 19 of user core.
Nov 1 00:26:19.940950 systemd[1]: Started session-19.scope.
Nov 1 00:26:20.290475 sshd[4183]: pam_unix(sshd:session): session closed for user core
Nov 1 00:26:20.296700 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 00:26:20.298006 systemd[1]: sshd@18-172.31.20.188:22-147.75.109.163:51878.service: Deactivated successfully.
Nov 1 00:26:20.299751 systemd-logind[1655]: Session 19 logged out. Waiting for processes to exit.
Nov 1 00:26:20.303012 systemd-logind[1655]: Removed session 19.
Nov 1 00:26:20.319077 systemd[1]: Started sshd@19-172.31.20.188:22-147.75.109.163:50102.service.
Nov 1 00:26:20.487065 sshd[4193]: Accepted publickey for core from 147.75.109.163 port 50102 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk
Nov 1 00:26:20.490190 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:26:20.498117 systemd-logind[1655]: New session 20 of user core.
Nov 1 00:26:20.499158 systemd[1]: Started session-20.scope.
Nov 1 00:26:21.446282 sshd[4193]: pam_unix(sshd:session): session closed for user core
Nov 1 00:26:21.453314 systemd[1]: sshd@19-172.31.20.188:22-147.75.109.163:50102.service: Deactivated successfully.
Nov 1 00:26:21.454712 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 00:26:21.457071 systemd-logind[1655]: Session 20 logged out. Waiting for processes to exit.
Nov 1 00:26:21.459044 systemd-logind[1655]: Removed session 20.
Nov 1 00:26:21.482883 systemd[1]: Started sshd@20-172.31.20.188:22-147.75.109.163:50108.service.
Nov 1 00:26:21.652681 sshd[4210]: Accepted publickey for core from 147.75.109.163 port 50108 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk
Nov 1 00:26:21.655347 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:26:21.663934 systemd-logind[1655]: New session 21 of user core.
Nov 1 00:26:21.665274 systemd[1]: Started session-21.scope.
Nov 1 00:26:22.178972 sshd[4210]: pam_unix(sshd:session): session closed for user core
Nov 1 00:26:22.185117 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 00:26:22.186556 systemd-logind[1655]: Session 21 logged out. Waiting for processes to exit.
Nov 1 00:26:22.187052 systemd[1]: sshd@20-172.31.20.188:22-147.75.109.163:50108.service: Deactivated successfully.
Nov 1 00:26:22.190056 systemd-logind[1655]: Removed session 21.
Nov 1 00:26:22.207606 systemd[1]: Started sshd@21-172.31.20.188:22-147.75.109.163:50116.service.
Nov 1 00:26:22.375485 sshd[4220]: Accepted publickey for core from 147.75.109.163 port 50116 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk
Nov 1 00:26:22.377284 sshd[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:26:22.386856 systemd[1]: Started session-22.scope.
Nov 1 00:26:22.387684 systemd-logind[1655]: New session 22 of user core.
Nov 1 00:26:22.632728 sshd[4220]: pam_unix(sshd:session): session closed for user core
Nov 1 00:26:22.637658 systemd[1]: session-22.scope: Deactivated successfully.
Nov 1 00:26:22.638985 systemd[1]: sshd@21-172.31.20.188:22-147.75.109.163:50116.service: Deactivated successfully.
Nov 1 00:26:22.640739 systemd-logind[1655]: Session 22 logged out. Waiting for processes to exit.
Nov 1 00:26:22.643716 systemd-logind[1655]: Removed session 22.
Nov 1 00:26:27.659949 systemd[1]: Started sshd@22-172.31.20.188:22-147.75.109.163:50128.service.
Nov 1 00:26:27.829104 sshd[4232]: Accepted publickey for core from 147.75.109.163 port 50128 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk
Nov 1 00:26:27.832470 sshd[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:26:27.842773 systemd[1]: Started session-23.scope.
Nov 1 00:26:27.843731 systemd-logind[1655]: New session 23 of user core.
Nov 1 00:26:28.103445 sshd[4232]: pam_unix(sshd:session): session closed for user core
Nov 1 00:26:28.108891 systemd[1]: sshd@22-172.31.20.188:22-147.75.109.163:50128.service: Deactivated successfully.
Nov 1 00:26:28.110244 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 00:26:28.113138 systemd-logind[1655]: Session 23 logged out. Waiting for processes to exit.
Nov 1 00:26:28.115259 systemd-logind[1655]: Removed session 23.
Nov 1 00:26:33.135965 systemd[1]: Started sshd@23-172.31.20.188:22-147.75.109.163:38484.service.
Nov 1 00:26:33.306743 sshd[4246]: Accepted publickey for core from 147.75.109.163 port 38484 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk
Nov 1 00:26:33.309356 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:26:33.318950 systemd[1]: Started session-24.scope.
Nov 1 00:26:33.319759 systemd-logind[1655]: New session 24 of user core.
Nov 1 00:26:33.574654 sshd[4246]: pam_unix(sshd:session): session closed for user core
Nov 1 00:26:33.579108 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 00:26:33.580350 systemd[1]: sshd@23-172.31.20.188:22-147.75.109.163:38484.service: Deactivated successfully.
Nov 1 00:26:33.582108 systemd-logind[1655]: Session 24 logged out. Waiting for processes to exit.
Nov 1 00:26:33.584583 systemd-logind[1655]: Removed session 24.
Nov 1 00:26:36.568628 amazon-ssm-agent[1643]: 2025-11-01 00:26:36 INFO [HealthCheck] HealthCheck reporting agent health.
Nov 1 00:26:38.604759 systemd[1]: Started sshd@24-172.31.20.188:22-147.75.109.163:38494.service.
Nov 1 00:26:38.774255 sshd[4261]: Accepted publickey for core from 147.75.109.163 port 38494 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk
Nov 1 00:26:38.777775 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:26:38.785387 systemd-logind[1655]: New session 25 of user core.
Nov 1 00:26:38.786493 systemd[1]: Started session-25.scope.
Nov 1 00:26:39.038253 sshd[4261]: pam_unix(sshd:session): session closed for user core
Nov 1 00:26:39.043960 systemd-logind[1655]: Session 25 logged out. Waiting for processes to exit.
Nov 1 00:26:39.044455 systemd[1]: sshd@24-172.31.20.188:22-147.75.109.163:38494.service: Deactivated successfully.
Nov 1 00:26:39.045774 systemd[1]: session-25.scope: Deactivated successfully.
Nov 1 00:26:39.047537 systemd-logind[1655]: Removed session 25.
Nov 1 00:26:39.066985 systemd[1]: Started sshd@25-172.31.20.188:22-147.75.109.163:38504.service.
Nov 1 00:26:39.237410 sshd[4273]: Accepted publickey for core from 147.75.109.163 port 38504 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk
Nov 1 00:26:39.240517 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:26:39.249182 systemd[1]: Started session-26.scope.
Nov 1 00:26:39.250618 systemd-logind[1655]: New session 26 of user core.
Nov 1 00:26:42.193384 env[1666]: time="2025-11-01T00:26:42.192690637Z" level=info msg="StopContainer for \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\" with timeout 30 (s)"
Nov 1 00:26:42.195605 env[1666]: time="2025-11-01T00:26:42.194886256Z" level=info msg="Stop container \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\" with signal terminated"
Nov 1 00:26:42.212735 systemd[1]: run-containerd-runc-k8s.io-d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1-runc.OuFjcl.mount: Deactivated successfully.
Nov 1 00:26:42.228140 systemd[1]: cri-containerd-b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317.scope: Deactivated successfully.
Nov 1 00:26:42.280270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317-rootfs.mount: Deactivated successfully.
Nov 1 00:26:42.298548 env[1666]: time="2025-11-01T00:26:42.298485102Z" level=info msg="shim disconnected" id=b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317
Nov 1 00:26:42.299003 env[1666]: time="2025-11-01T00:26:42.298967287Z" level=warning msg="cleaning up after shim disconnected" id=b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317 namespace=k8s.io
Nov 1 00:26:42.299151 env[1666]: time="2025-11-01T00:26:42.299122963Z" level=info msg="cleaning up dead shim"
Nov 1 00:26:42.315607 env[1666]: time="2025-11-01T00:26:42.315525652Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:26:42.319972 env[1666]: time="2025-11-01T00:26:42.319916950Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:26:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4316 runtime=io.containerd.runc.v2\n"
Nov 1 00:26:42.324761 env[1666]: time="2025-11-01T00:26:42.324702460Z" level=info msg="StopContainer for \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\" returns successfully"
Nov 1 00:26:42.325892 env[1666]: time="2025-11-01T00:26:42.325842977Z" level=info msg="StopPodSandbox for \"7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8\""
Nov 1 00:26:42.326366 env[1666]: time="2025-11-01T00:26:42.326327082Z" level=info msg="Container to stop \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:26:42.332956 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8-shm.mount: Deactivated successfully.
Nov 1 00:26:42.339369 env[1666]: time="2025-11-01T00:26:42.339202859Z" level=info msg="StopContainer for \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\" with timeout 2 (s)"
Nov 1 00:26:42.340038 env[1666]: time="2025-11-01T00:26:42.339972648Z" level=info msg="Stop container \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\" with signal terminated"
Nov 1 00:26:42.352528 systemd[1]: cri-containerd-7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8.scope: Deactivated successfully.
Nov 1 00:26:42.367023 systemd-networkd[1386]: lxc_health: Link DOWN
Nov 1 00:26:42.367037 systemd-networkd[1386]: lxc_health: Lost carrier
Nov 1 00:26:42.393891 systemd[1]: cri-containerd-d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1.scope: Deactivated successfully.
Nov 1 00:26:42.394505 systemd[1]: cri-containerd-d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1.scope: Consumed 14.764s CPU time.
Nov 1 00:26:42.432711 env[1666]: time="2025-11-01T00:26:42.432646019Z" level=info msg="shim disconnected" id=7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8
Nov 1 00:26:42.433458 env[1666]: time="2025-11-01T00:26:42.433396836Z" level=warning msg="cleaning up after shim disconnected" id=7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8 namespace=k8s.io
Nov 1 00:26:42.433458 env[1666]: time="2025-11-01T00:26:42.433445604Z" level=info msg="cleaning up dead shim"
Nov 1 00:26:42.439629 env[1666]: time="2025-11-01T00:26:42.439566104Z" level=info msg="shim disconnected" id=d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1
Nov 1 00:26:42.439964 env[1666]: time="2025-11-01T00:26:42.439914153Z" level=warning msg="cleaning up after shim disconnected" id=d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1 namespace=k8s.io
Nov 1 00:26:42.440101 env[1666]: time="2025-11-01T00:26:42.440073849Z" level=info msg="cleaning up dead shim"
Nov 1 00:26:42.456464 env[1666]: time="2025-11-01T00:26:42.456271134Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:26:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4373 runtime=io.containerd.runc.v2\n"
Nov 1 00:26:42.459014 env[1666]: time="2025-11-01T00:26:42.458951205Z" level=info msg="TearDown network for sandbox \"7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8\" successfully"
Nov 1 00:26:42.459201 env[1666]: time="2025-11-01T00:26:42.459012585Z" level=info msg="StopPodSandbox for \"7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8\" returns successfully"
Nov 1 00:26:42.459920 env[1666]: time="2025-11-01T00:26:42.459708430Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:26:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4377 runtime=io.containerd.runc.v2\n"
Nov 1 00:26:42.476121 env[1666]: time="2025-11-01T00:26:42.475866535Z" level=info msg="StopContainer for \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\" returns successfully"
Nov 1 00:26:42.476874 env[1666]: time="2025-11-01T00:26:42.476822012Z" level=info msg="StopPodSandbox for \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\""
Nov 1 00:26:42.477245 env[1666]: time="2025-11-01T00:26:42.477191121Z" level=info msg="Container to stop \"da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:26:42.478252 env[1666]: time="2025-11-01T00:26:42.478198678Z" level=info msg="Container to stop \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:26:42.478824 env[1666]: time="2025-11-01T00:26:42.478777235Z" level=info msg="Container to stop \"d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:26:42.479248 env[1666]: time="2025-11-01T00:26:42.479181239Z" level=info msg="Container to stop \"d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:26:42.479248 env[1666]: time="2025-11-01T00:26:42.479233224Z" level=info msg="Container to stop \"c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:26:42.505007 systemd[1]: cri-containerd-c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5.scope: Deactivated successfully.
Nov 1 00:26:42.547250 env[1666]: time="2025-11-01T00:26:42.547180251Z" level=info msg="shim disconnected" id=c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5
Nov 1 00:26:42.547539 env[1666]: time="2025-11-01T00:26:42.547253175Z" level=warning msg="cleaning up after shim disconnected" id=c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5 namespace=k8s.io
Nov 1 00:26:42.547539 env[1666]: time="2025-11-01T00:26:42.547275675Z" level=info msg="cleaning up dead shim"
Nov 1 00:26:42.561587 env[1666]: time="2025-11-01T00:26:42.561521890Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:26:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4418 runtime=io.containerd.runc.v2\n"
Nov 1 00:26:42.562214 env[1666]: time="2025-11-01T00:26:42.562161587Z" level=info msg="TearDown network for sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" successfully"
Nov 1 00:26:42.562407 env[1666]: time="2025-11-01T00:26:42.562215959Z" level=info msg="StopPodSandbox for \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" returns successfully"
Nov 1 00:26:42.588342 kubelet[2651]: I1101 00:26:42.587590 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncczl\" (UniqueName: \"kubernetes.io/projected/93664fa6-31bd-4e04-9a4c-d8954a2e0702-kube-api-access-ncczl\") pod \"93664fa6-31bd-4e04-9a4c-d8954a2e0702\" (UID: \"93664fa6-31bd-4e04-9a4c-d8954a2e0702\") "
Nov 1 00:26:42.588342 kubelet[2651]: I1101 00:26:42.587678 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93664fa6-31bd-4e04-9a4c-d8954a2e0702-cilium-config-path\") pod \"93664fa6-31bd-4e04-9a4c-d8954a2e0702\" (UID: \"93664fa6-31bd-4e04-9a4c-d8954a2e0702\") "
Nov 1 00:26:42.594475 kubelet[2651]: I1101 00:26:42.593848 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93664fa6-31bd-4e04-9a4c-d8954a2e0702-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "93664fa6-31bd-4e04-9a4c-d8954a2e0702" (UID: "93664fa6-31bd-4e04-9a4c-d8954a2e0702"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:26:42.599610 kubelet[2651]: I1101 00:26:42.599551 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93664fa6-31bd-4e04-9a4c-d8954a2e0702-kube-api-access-ncczl" (OuterVolumeSpecName: "kube-api-access-ncczl") pod "93664fa6-31bd-4e04-9a4c-d8954a2e0702" (UID: "93664fa6-31bd-4e04-9a4c-d8954a2e0702"). InnerVolumeSpecName "kube-api-access-ncczl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:26:42.688371 kubelet[2651]: I1101 00:26:42.688280 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-cni-path\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.688557 kubelet[2651]: I1101 00:26:42.688390 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cedf0374-a112-47e2-b08a-c070801d1920-cilium-config-path\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.688557 kubelet[2651]: I1101 00:26:42.688507 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-lib-modules\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.688557 kubelet[2651]: I1101 00:26:42.688544 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-bpf-maps\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.688754 kubelet[2651]: I1101 00:26:42.688607 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-host-proc-sys-net\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.688754 kubelet[2651]: I1101 00:26:42.688648 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-cilium-cgroup\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.688754 kubelet[2651]: I1101 00:26:42.688695 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cedf0374-a112-47e2-b08a-c070801d1920-hubble-tls\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.688754 kubelet[2651]: I1101 00:26:42.688731 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-hostproc\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.688989 kubelet[2651]: I1101 00:26:42.688763 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-xtables-lock\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.688989 kubelet[2651]: I1101 00:26:42.688803 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cedf0374-a112-47e2-b08a-c070801d1920-clustermesh-secrets\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.688989 kubelet[2651]: I1101 00:26:42.688838 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-host-proc-sys-kernel\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.688989 kubelet[2651]: I1101 00:26:42.688876 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-etc-cni-netd\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.688989 kubelet[2651]: I1101 00:26:42.688913 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7czq\" (UniqueName: \"kubernetes.io/projected/cedf0374-a112-47e2-b08a-c070801d1920-kube-api-access-n7czq\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.688989 kubelet[2651]: I1101 00:26:42.688951 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-cilium-run\") pod \"cedf0374-a112-47e2-b08a-c070801d1920\" (UID: \"cedf0374-a112-47e2-b08a-c070801d1920\") "
Nov 1 00:26:42.689355 kubelet[2651]: I1101 00:26:42.689061 2651 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ncczl\" (UniqueName: \"kubernetes.io/projected/93664fa6-31bd-4e04-9a4c-d8954a2e0702-kube-api-access-ncczl\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.689355 kubelet[2651]: I1101 00:26:42.689091 2651 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93664fa6-31bd-4e04-9a4c-d8954a2e0702-cilium-config-path\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.689355 kubelet[2651]: I1101 00:26:42.689154 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:26:42.689355 kubelet[2651]: I1101 00:26:42.689214 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-cni-path" (OuterVolumeSpecName: "cni-path") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:26:42.690435 kubelet[2651]: I1101 00:26:42.690372 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-hostproc" (OuterVolumeSpecName: "hostproc") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:26:42.691680 kubelet[2651]: I1101 00:26:42.690610 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:26:42.691906 kubelet[2651]: I1101 00:26:42.690641 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:26:42.692045 kubelet[2651]: I1101 00:26:42.690685 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:26:42.692191 kubelet[2651]: I1101 00:26:42.690712 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:26:42.692488 kubelet[2651]: I1101 00:26:42.692414 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:26:42.692713 kubelet[2651]: I1101 00:26:42.692670 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:26:42.693374 kubelet[2651]: I1101 00:26:42.693334 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:26:42.706108 kubelet[2651]: I1101 00:26:42.698888 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cedf0374-a112-47e2-b08a-c070801d1920-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:26:42.709526 kubelet[2651]: I1101 00:26:42.706602 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cedf0374-a112-47e2-b08a-c070801d1920-kube-api-access-n7czq" (OuterVolumeSpecName: "kube-api-access-n7czq") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "kube-api-access-n7czq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:26:42.710760 kubelet[2651]: I1101 00:26:42.710712 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cedf0374-a112-47e2-b08a-c070801d1920-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 1 00:26:42.714623 kubelet[2651]: I1101 00:26:42.714565 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cedf0374-a112-47e2-b08a-c070801d1920-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cedf0374-a112-47e2-b08a-c070801d1920" (UID: "cedf0374-a112-47e2-b08a-c070801d1920"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:26:42.718225 kubelet[2651]: E1101 00:26:42.718169 2651 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 1 00:26:42.789498 kubelet[2651]: I1101 00:26:42.789453 2651 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-cilium-cgroup\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.789745 kubelet[2651]: I1101 00:26:42.789716 2651 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cedf0374-a112-47e2-b08a-c070801d1920-hubble-tls\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.789929 kubelet[2651]: I1101 00:26:42.789906 2651 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-hostproc\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.790083 kubelet[2651]: I1101 00:26:42.790060 2651 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-xtables-lock\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.790236 kubelet[2651]: I1101 00:26:42.790214 2651 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cedf0374-a112-47e2-b08a-c070801d1920-clustermesh-secrets\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.790422 kubelet[2651]: I1101 00:26:42.790400 2651 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-host-proc-sys-kernel\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.790646 kubelet[2651]: I1101 00:26:42.790625 2651 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-etc-cni-netd\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.790791 kubelet[2651]: I1101 00:26:42.790768 2651 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n7czq\" (UniqueName: \"kubernetes.io/projected/cedf0374-a112-47e2-b08a-c070801d1920-kube-api-access-n7czq\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.790927 kubelet[2651]: I1101 00:26:42.790904 2651 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-cilium-run\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.791061 kubelet[2651]: I1101 00:26:42.791040 2651 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-cni-path\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.791205 kubelet[2651]: I1101 00:26:42.791182 2651 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cedf0374-a112-47e2-b08a-c070801d1920-cilium-config-path\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.791365 kubelet[2651]: I1101 00:26:42.791344 2651 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-lib-modules\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.791515 kubelet[2651]: I1101 00:26:42.791494 2651 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-bpf-maps\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:42.791656 kubelet[2651]: I1101 00:26:42.791635 2651 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cedf0374-a112-47e2-b08a-c070801d1920-host-proc-sys-net\") on node \"ip-172-31-20-188\" DevicePath \"\""
Nov 1 00:26:43.038980 kubelet[2651]: I1101 00:26:43.037562 2651 scope.go:117] "RemoveContainer" containerID="b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317"
Nov 1 00:26:43.042171 env[1666]: time="2025-11-01T00:26:43.042070814Z" level=info msg="RemoveContainer for \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\""
Nov 1 00:26:43.053642 env[1666]: time="2025-11-01T00:26:43.053162488Z" level=info msg="RemoveContainer for \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\" returns successfully"
Nov 1 00:26:43.053631 systemd[1]: Removed slice kubepods-besteffort-pod93664fa6_31bd_4e04_9a4c_d8954a2e0702.slice.
Nov 1 00:26:43.056425 kubelet[2651]: I1101 00:26:43.056390 2651 scope.go:117] "RemoveContainer" containerID="b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317" Nov 1 00:26:43.057890 env[1666]: time="2025-11-01T00:26:43.057711106Z" level=error msg="ContainerStatus for \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\": not found" Nov 1 00:26:43.058386 kubelet[2651]: E1101 00:26:43.058340 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\": not found" containerID="b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317" Nov 1 00:26:43.058749 kubelet[2651]: I1101 00:26:43.058639 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317"} err="failed to get container status \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\": rpc error: code = NotFound desc = an error occurred when try to find container \"b15e502fbebe4252c1bbd190d14f1167c4ef6edc4d76ceff7252644ebc955317\": not found" Nov 1 00:26:43.059551 kubelet[2651]: I1101 00:26:43.059505 2651 scope.go:117] "RemoveContainer" containerID="d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1" Nov 1 00:26:43.071916 env[1666]: time="2025-11-01T00:26:43.071677000Z" level=info msg="RemoveContainer for \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\"" Nov 1 00:26:43.077918 systemd[1]: Removed slice kubepods-burstable-podcedf0374_a112_47e2_b08a_c070801d1920.slice. Nov 1 00:26:43.078132 systemd[1]: kubepods-burstable-podcedf0374_a112_47e2_b08a_c070801d1920.slice: Consumed 15.059s CPU time. 
Nov 1 00:26:43.081809 env[1666]: time="2025-11-01T00:26:43.081670312Z" level=info msg="RemoveContainer for \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\" returns successfully" Nov 1 00:26:43.086465 kubelet[2651]: I1101 00:26:43.086411 2651 scope.go:117] "RemoveContainer" containerID="c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e" Nov 1 00:26:43.098388 env[1666]: time="2025-11-01T00:26:43.097154316Z" level=info msg="RemoveContainer for \"c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e\"" Nov 1 00:26:43.107676 env[1666]: time="2025-11-01T00:26:43.107616278Z" level=info msg="RemoveContainer for \"c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e\" returns successfully" Nov 1 00:26:43.108745 kubelet[2651]: I1101 00:26:43.108504 2651 scope.go:117] "RemoveContainer" containerID="d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001" Nov 1 00:26:43.117933 env[1666]: time="2025-11-01T00:26:43.117840855Z" level=info msg="RemoveContainer for \"d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001\"" Nov 1 00:26:43.124744 env[1666]: time="2025-11-01T00:26:43.124687331Z" level=info msg="RemoveContainer for \"d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001\" returns successfully" Nov 1 00:26:43.125568 kubelet[2651]: I1101 00:26:43.125386 2651 scope.go:117] "RemoveContainer" containerID="da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85" Nov 1 00:26:43.128014 env[1666]: time="2025-11-01T00:26:43.127920652Z" level=info msg="RemoveContainer for \"da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85\"" Nov 1 00:26:43.134195 env[1666]: time="2025-11-01T00:26:43.134101499Z" level=info msg="RemoveContainer for \"da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85\" returns successfully" Nov 1 00:26:43.134601 kubelet[2651]: I1101 00:26:43.134529 2651 scope.go:117] "RemoveContainer" 
containerID="d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a" Nov 1 00:26:43.136570 env[1666]: time="2025-11-01T00:26:43.136519995Z" level=info msg="RemoveContainer for \"d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a\"" Nov 1 00:26:43.142610 env[1666]: time="2025-11-01T00:26:43.142554586Z" level=info msg="RemoveContainer for \"d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a\" returns successfully" Nov 1 00:26:43.143252 kubelet[2651]: I1101 00:26:43.143090 2651 scope.go:117] "RemoveContainer" containerID="d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1" Nov 1 00:26:43.143662 env[1666]: time="2025-11-01T00:26:43.143544312Z" level=error msg="ContainerStatus for \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\": not found" Nov 1 00:26:43.144050 kubelet[2651]: E1101 00:26:43.143975 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\": not found" containerID="d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1" Nov 1 00:26:43.144161 kubelet[2651]: I1101 00:26:43.144059 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1"} err="failed to get container status \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1\": not found" Nov 1 00:26:43.144161 kubelet[2651]: I1101 00:26:43.144118 2651 scope.go:117] "RemoveContainer" 
containerID="c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e" Nov 1 00:26:43.144749 env[1666]: time="2025-11-01T00:26:43.144660817Z" level=error msg="ContainerStatus for \"c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e\": not found" Nov 1 00:26:43.145325 kubelet[2651]: E1101 00:26:43.145089 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e\": not found" containerID="c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e" Nov 1 00:26:43.145325 kubelet[2651]: I1101 00:26:43.145135 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e"} err="failed to get container status \"c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5198b336c576ef86b76c74b6e87a002777234512b3439fbe7f2c12f6313aa9e\": not found" Nov 1 00:26:43.145325 kubelet[2651]: I1101 00:26:43.145176 2651 scope.go:117] "RemoveContainer" containerID="d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001" Nov 1 00:26:43.145661 env[1666]: time="2025-11-01T00:26:43.145550090Z" level=error msg="ContainerStatus for \"d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001\": not found" Nov 1 00:26:43.146059 kubelet[2651]: E1101 00:26:43.145987 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred 
when try to find container \"d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001\": not found" containerID="d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001" Nov 1 00:26:43.146168 kubelet[2651]: I1101 00:26:43.146070 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001"} err="failed to get container status \"d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001\": rpc error: code = NotFound desc = an error occurred when try to find container \"d476ad9dbac14ad65671391e3d955ab95213471d56bc96e41cada3b5d4aab001\": not found" Nov 1 00:26:43.146168 kubelet[2651]: I1101 00:26:43.146129 2651 scope.go:117] "RemoveContainer" containerID="da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85" Nov 1 00:26:43.146754 env[1666]: time="2025-11-01T00:26:43.146640183Z" level=error msg="ContainerStatus for \"da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85\": not found" Nov 1 00:26:43.147340 kubelet[2651]: E1101 00:26:43.147089 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85\": not found" containerID="da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85" Nov 1 00:26:43.147340 kubelet[2651]: I1101 00:26:43.147140 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85"} err="failed to get container status \"da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"da0accc29aa1772e6078f7734d5b42cad08bdcc7889fbabf10c5b72bd3649f85\": not found" Nov 1 00:26:43.147340 kubelet[2651]: I1101 00:26:43.147172 2651 scope.go:117] "RemoveContainer" containerID="d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a" Nov 1 00:26:43.147653 env[1666]: time="2025-11-01T00:26:43.147542417Z" level=error msg="ContainerStatus for \"d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a\": not found" Nov 1 00:26:43.148003 kubelet[2651]: E1101 00:26:43.147923 2651 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a\": not found" containerID="d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a" Nov 1 00:26:43.148098 kubelet[2651]: I1101 00:26:43.148000 2651 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a"} err="failed to get container status \"d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8467187225268e5d6d408da61e341ba04cbd2aab85a880a1f0e97a938b1fe2a\": not found" Nov 1 00:26:43.180330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8b842e77b46011993117f35c09b6a990ac5b6d5ee17007eca3352d8293977b1-rootfs.mount: Deactivated successfully. Nov 1 00:26:43.180508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8-rootfs.mount: Deactivated successfully. 
Nov 1 00:26:43.180639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5-rootfs.mount: Deactivated successfully. Nov 1 00:26:43.180771 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5-shm.mount: Deactivated successfully. Nov 1 00:26:43.180902 systemd[1]: var-lib-kubelet-pods-93664fa6\x2d31bd\x2d4e04\x2d9a4c\x2dd8954a2e0702-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dncczl.mount: Deactivated successfully. Nov 1 00:26:43.181034 systemd[1]: var-lib-kubelet-pods-cedf0374\x2da112\x2d47e2\x2db08a\x2dc070801d1920-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn7czq.mount: Deactivated successfully. Nov 1 00:26:43.181164 systemd[1]: var-lib-kubelet-pods-cedf0374\x2da112\x2d47e2\x2db08a\x2dc070801d1920-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:26:43.181307 systemd[1]: var-lib-kubelet-pods-cedf0374\x2da112\x2d47e2\x2db08a\x2dc070801d1920-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:26:43.569141 kubelet[2651]: I1101 00:26:43.569075 2651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93664fa6-31bd-4e04-9a4c-d8954a2e0702" path="/var/lib/kubelet/pods/93664fa6-31bd-4e04-9a4c-d8954a2e0702/volumes" Nov 1 00:26:43.571068 kubelet[2651]: I1101 00:26:43.570687 2651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cedf0374-a112-47e2-b08a-c070801d1920" path="/var/lib/kubelet/pods/cedf0374-a112-47e2-b08a-c070801d1920/volumes" Nov 1 00:26:44.097467 sshd[4273]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:44.103476 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:26:44.103769 systemd[1]: session-26.scope: Consumed 2.111s CPU time. 
Nov 1 00:26:44.104775 systemd[1]: sshd@25-172.31.20.188:22-147.75.109.163:38504.service: Deactivated successfully. Nov 1 00:26:44.106560 systemd-logind[1655]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:26:44.109211 systemd-logind[1655]: Removed session 26. Nov 1 00:26:44.125688 systemd[1]: Started sshd@26-172.31.20.188:22-147.75.109.163:38076.service. Nov 1 00:26:44.294431 sshd[4437]: Accepted publickey for core from 147.75.109.163 port 38076 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:26:44.297703 sshd[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:26:44.308735 systemd-logind[1655]: New session 27 of user core. Nov 1 00:26:44.309525 systemd[1]: Started session-27.scope. Nov 1 00:26:47.280745 sshd[4437]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:47.286462 systemd-logind[1655]: Session 27 logged out. Waiting for processes to exit. Nov 1 00:26:47.286893 systemd[1]: sshd@26-172.31.20.188:22-147.75.109.163:38076.service: Deactivated successfully. Nov 1 00:26:47.288203 systemd[1]: session-27.scope: Deactivated successfully. Nov 1 00:26:47.289821 systemd[1]: session-27.scope: Consumed 2.692s CPU time. Nov 1 00:26:47.291344 systemd-logind[1655]: Removed session 27. Nov 1 00:26:47.317556 systemd[1]: Started sshd@27-172.31.20.188:22-147.75.109.163:38082.service. Nov 1 00:26:47.343707 systemd[1]: Created slice kubepods-burstable-pod3073c049_3cbc_4566_8992_694bc68bd443.slice. 
Nov 1 00:26:47.426344 kubelet[2651]: I1101 00:26:47.426266 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-xtables-lock\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.427019 kubelet[2651]: I1101 00:26:47.426978 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3073c049-3cbc-4566-8992-694bc68bd443-clustermesh-secrets\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.427234 kubelet[2651]: I1101 00:26:47.427198 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t4ns\" (UniqueName: \"kubernetes.io/projected/3073c049-3cbc-4566-8992-694bc68bd443-kube-api-access-6t4ns\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.427447 kubelet[2651]: I1101 00:26:47.427417 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3073c049-3cbc-4566-8992-694bc68bd443-hubble-tls\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.427699 kubelet[2651]: I1101 00:26:47.427664 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-hostproc\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.427927 kubelet[2651]: I1101 00:26:47.427895 2651 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-etc-cni-netd\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.428100 kubelet[2651]: I1101 00:26:47.428071 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-host-proc-sys-kernel\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.428275 kubelet[2651]: I1101 00:26:47.428245 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-host-proc-sys-net\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.428469 kubelet[2651]: I1101 00:26:47.428442 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-bpf-maps\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.428647 kubelet[2651]: I1101 00:26:47.428620 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-cilium-cgroup\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.428798 kubelet[2651]: I1101 00:26:47.428772 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-cni-path\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.428957 kubelet[2651]: I1101 00:26:47.428931 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-lib-modules\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.429132 kubelet[2651]: I1101 00:26:47.429101 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3073c049-3cbc-4566-8992-694bc68bd443-cilium-config-path\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.429388 kubelet[2651]: I1101 00:26:47.429312 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3073c049-3cbc-4566-8992-694bc68bd443-cilium-ipsec-secrets\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.429609 kubelet[2651]: I1101 00:26:47.429580 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-cilium-run\") pod \"cilium-5zz6j\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " pod="kube-system/cilium-5zz6j" Nov 1 00:26:47.506583 sshd[4450]: Accepted publickey for core from 147.75.109.163 port 38082 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:26:47.509320 sshd[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:26:47.518402 systemd-logind[1655]: New session 28 of 
user core. Nov 1 00:26:47.519519 systemd[1]: Started session-28.scope. Nov 1 00:26:47.658119 env[1666]: time="2025-11-01T00:26:47.658051313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5zz6j,Uid:3073c049-3cbc-4566-8992-694bc68bd443,Namespace:kube-system,Attempt:0,}" Nov 1 00:26:47.692790 env[1666]: time="2025-11-01T00:26:47.692684507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:47.693119 env[1666]: time="2025-11-01T00:26:47.693061608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:47.693325 env[1666]: time="2025-11-01T00:26:47.693252456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:47.693793 env[1666]: time="2025-11-01T00:26:47.693733669Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4 pid=4472 runtime=io.containerd.runc.v2 Nov 1 00:26:47.720139 kubelet[2651]: E1101 00:26:47.720003 2651 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:26:47.730623 systemd[1]: Started cri-containerd-a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4.scope. 
Nov 1 00:26:47.822119 env[1666]: time="2025-11-01T00:26:47.822064082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5zz6j,Uid:3073c049-3cbc-4566-8992-694bc68bd443,Namespace:kube-system,Attempt:0,} returns sandbox id \"a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4\"" Nov 1 00:26:47.851532 env[1666]: time="2025-11-01T00:26:47.851455418Z" level=info msg="CreateContainer within sandbox \"a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:26:47.899776 env[1666]: time="2025-11-01T00:26:47.899674429Z" level=info msg="CreateContainer within sandbox \"a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a\"" Nov 1 00:26:47.900834 env[1666]: time="2025-11-01T00:26:47.900745178Z" level=info msg="StartContainer for \"c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a\"" Nov 1 00:26:47.910583 sshd[4450]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:47.917025 systemd[1]: sshd@27-172.31.20.188:22-147.75.109.163:38082.service: Deactivated successfully. Nov 1 00:26:47.918638 systemd[1]: session-28.scope: Deactivated successfully. Nov 1 00:26:47.920480 systemd-logind[1655]: Session 28 logged out. Waiting for processes to exit. Nov 1 00:26:47.923972 systemd-logind[1655]: Removed session 28. Nov 1 00:26:47.940930 systemd[1]: Started sshd@28-172.31.20.188:22-147.75.109.163:38090.service. Nov 1 00:26:47.987148 systemd[1]: Started cri-containerd-c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a.scope. Nov 1 00:26:48.028283 systemd[1]: cri-containerd-c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a.scope: Deactivated successfully. 
Nov 1 00:26:48.051030 env[1666]: time="2025-11-01T00:26:48.050954713Z" level=info msg="shim disconnected" id=c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a Nov 1 00:26:48.051325 env[1666]: time="2025-11-01T00:26:48.051034789Z" level=warning msg="cleaning up after shim disconnected" id=c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a namespace=k8s.io Nov 1 00:26:48.051325 env[1666]: time="2025-11-01T00:26:48.051059581Z" level=info msg="cleaning up dead shim" Nov 1 00:26:48.069559 env[1666]: time="2025-11-01T00:26:48.069469392Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:26:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4540 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T00:26:48Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Nov 1 00:26:48.070103 env[1666]: time="2025-11-01T00:26:48.069926364Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" Nov 1 00:26:48.070480 env[1666]: time="2025-11-01T00:26:48.070389193Z" level=error msg="Failed to pipe stdout of container \"c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a\"" error="reading from a closed fifo" Nov 1 00:26:48.070729 env[1666]: time="2025-11-01T00:26:48.070665661Z" level=error msg="Failed to pipe stderr of container \"c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a\"" error="reading from a closed fifo" Nov 1 00:26:48.075284 env[1666]: time="2025-11-01T00:26:48.074983086Z" level=error msg="StartContainer for \"c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Nov 1 00:26:48.076017 kubelet[2651]: E1101 00:26:48.075693 2651 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a" Nov 1 00:26:48.076017 kubelet[2651]: E1101 00:26:48.075940 2651 kuberuntime_manager.go:1358] "Unhandled Error" err=< Nov 1 00:26:48.076017 kubelet[2651]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Nov 1 00:26:48.076017 kubelet[2651]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Nov 1 00:26:48.076017 kubelet[2651]: rm /hostbin/cilium-mount Nov 1 00:26:48.078693 kubelet[2651]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6t4ns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-5zz6j_kube-system(3073c049-3cbc-4566-8992-694bc68bd443): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Nov 1 00:26:48.078693 kubelet[2651]: > logger="UnhandledError" Nov 1 00:26:48.078983 kubelet[2651]: E1101 00:26:48.078602 2651 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5zz6j" podUID="3073c049-3cbc-4566-8992-694bc68bd443" Nov 1 00:26:48.115466 sshd[4524]: Accepted publickey for core from 147.75.109.163 port 38090 ssh2: RSA SHA256:aAD9CLUYU0QQWdwX+YyKEh9CSGkGN9W8ZZnhMhFDgQk Nov 1 00:26:48.121043 sshd[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:26:48.133442 systemd-logind[1655]: New session 29 of user core. Nov 1 00:26:48.133575 systemd[1]: Started session-29.scope. Nov 1 00:26:49.085218 env[1666]: time="2025-11-01T00:26:49.084826897Z" level=info msg="StopPodSandbox for \"a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4\"" Nov 1 00:26:49.085218 env[1666]: time="2025-11-01T00:26:49.084922813Z" level=info msg="Container to stop \"c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:26:49.092877 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4-shm.mount: Deactivated successfully. Nov 1 00:26:49.105151 systemd[1]: cri-containerd-a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4.scope: Deactivated successfully. Nov 1 00:26:49.151543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4-rootfs.mount: Deactivated successfully. 
Nov 1 00:26:49.162083 env[1666]: time="2025-11-01T00:26:49.162021262Z" level=info msg="shim disconnected" id=a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4 Nov 1 00:26:49.162816 env[1666]: time="2025-11-01T00:26:49.162773831Z" level=warning msg="cleaning up after shim disconnected" id=a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4 namespace=k8s.io Nov 1 00:26:49.162973 env[1666]: time="2025-11-01T00:26:49.162943571Z" level=info msg="cleaning up dead shim" Nov 1 00:26:49.177402 env[1666]: time="2025-11-01T00:26:49.177344896Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:26:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4580 runtime=io.containerd.runc.v2\n" Nov 1 00:26:49.178140 env[1666]: time="2025-11-01T00:26:49.178093937Z" level=info msg="TearDown network for sandbox \"a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4\" successfully" Nov 1 00:26:49.178325 env[1666]: time="2025-11-01T00:26:49.178260689Z" level=info msg="StopPodSandbox for \"a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4\" returns successfully" Nov 1 00:26:49.346710 kubelet[2651]: I1101 00:26:49.346561 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-lib-modules\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.346710 kubelet[2651]: I1101 00:26:49.346651 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-xtables-lock\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.347427 kubelet[2651]: I1101 00:26:49.346757 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/3073c049-3cbc-4566-8992-694bc68bd443-hubble-tls\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.347427 kubelet[2651]: I1101 00:26:49.346823 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-host-proc-sys-kernel\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.347427 kubelet[2651]: I1101 00:26:49.346868 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t4ns\" (UniqueName: \"kubernetes.io/projected/3073c049-3cbc-4566-8992-694bc68bd443-kube-api-access-6t4ns\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.347427 kubelet[2651]: I1101 00:26:49.346928 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-cni-path\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.347427 kubelet[2651]: I1101 00:26:49.347010 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3073c049-3cbc-4566-8992-694bc68bd443-cilium-config-path\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.347427 kubelet[2651]: I1101 00:26:49.347065 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-hostproc\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.347893 kubelet[2651]: I1101 00:26:49.347134 2651 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-etc-cni-netd\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.347893 kubelet[2651]: I1101 00:26:49.347196 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-cilium-cgroup\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.347893 kubelet[2651]: I1101 00:26:49.347243 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3073c049-3cbc-4566-8992-694bc68bd443-clustermesh-secrets\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.347893 kubelet[2651]: I1101 00:26:49.347345 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-host-proc-sys-net\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.347893 kubelet[2651]: I1101 00:26:49.347390 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3073c049-3cbc-4566-8992-694bc68bd443-cilium-ipsec-secrets\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.347893 kubelet[2651]: I1101 00:26:49.347456 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-bpf-maps\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: 
\"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.348258 kubelet[2651]: I1101 00:26:49.347517 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-cilium-run\") pod \"3073c049-3cbc-4566-8992-694bc68bd443\" (UID: \"3073c049-3cbc-4566-8992-694bc68bd443\") " Nov 1 00:26:49.348258 kubelet[2651]: I1101 00:26:49.347650 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:26:49.348258 kubelet[2651]: I1101 00:26:49.347724 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:26:49.348258 kubelet[2651]: I1101 00:26:49.347806 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:26:49.350745 kubelet[2651]: I1101 00:26:49.350638 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:26:49.350942 kubelet[2651]: I1101 00:26:49.350786 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:26:49.351130 kubelet[2651]: I1101 00:26:49.351084 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:26:49.358768 kubelet[2651]: I1101 00:26:49.351160 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-cni-path" (OuterVolumeSpecName: "cni-path") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:26:49.371586 kubelet[2651]: I1101 00:26:49.355475 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:26:49.371586 kubelet[2651]: I1101 00:26:49.356481 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-hostproc" (OuterVolumeSpecName: "hostproc") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:26:49.371586 kubelet[2651]: I1101 00:26:49.358680 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3073c049-3cbc-4566-8992-694bc68bd443-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:26:49.371586 kubelet[2651]: I1101 00:26:49.363477 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:26:49.365996 systemd[1]: var-lib-kubelet-pods-3073c049\x2d3cbc\x2d4566\x2d8992\x2d694bc68bd443-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Nov 1 00:26:49.372444 kubelet[2651]: I1101 00:26:49.372388 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3073c049-3cbc-4566-8992-694bc68bd443-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:26:49.373281 systemd[1]: var-lib-kubelet-pods-3073c049\x2d3cbc\x2d4566\x2d8992\x2d694bc68bd443-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:26:49.376410 kubelet[2651]: I1101 00:26:49.376212 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3073c049-3cbc-4566-8992-694bc68bd443-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:26:49.377584 kubelet[2651]: I1101 00:26:49.377505 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3073c049-3cbc-4566-8992-694bc68bd443-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:26:49.377996 kubelet[2651]: I1101 00:26:49.377939 2651 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3073c049-3cbc-4566-8992-694bc68bd443-kube-api-access-6t4ns" (OuterVolumeSpecName: "kube-api-access-6t4ns") pod "3073c049-3cbc-4566-8992-694bc68bd443" (UID: "3073c049-3cbc-4566-8992-694bc68bd443"). InnerVolumeSpecName "kube-api-access-6t4ns". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:26:49.448268 kubelet[2651]: I1101 00:26:49.448197 2651 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-cilium-run\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.448268 kubelet[2651]: I1101 00:26:49.448259 2651 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-lib-modules\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.448529 kubelet[2651]: I1101 00:26:49.448283 2651 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-xtables-lock\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.448529 kubelet[2651]: I1101 00:26:49.448333 2651 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3073c049-3cbc-4566-8992-694bc68bd443-hubble-tls\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.448529 kubelet[2651]: I1101 00:26:49.448360 2651 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-host-proc-sys-kernel\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.448529 kubelet[2651]: I1101 00:26:49.448384 2651 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6t4ns\" (UniqueName: \"kubernetes.io/projected/3073c049-3cbc-4566-8992-694bc68bd443-kube-api-access-6t4ns\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.448529 kubelet[2651]: I1101 00:26:49.448407 2651 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-cni-path\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 
00:26:49.448529 kubelet[2651]: I1101 00:26:49.448433 2651 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3073c049-3cbc-4566-8992-694bc68bd443-cilium-config-path\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.448529 kubelet[2651]: I1101 00:26:49.448455 2651 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-hostproc\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.448529 kubelet[2651]: I1101 00:26:49.448477 2651 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-etc-cni-netd\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.449014 kubelet[2651]: I1101 00:26:49.448498 2651 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-cilium-cgroup\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.449014 kubelet[2651]: I1101 00:26:49.448525 2651 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3073c049-3cbc-4566-8992-694bc68bd443-clustermesh-secrets\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.449014 kubelet[2651]: I1101 00:26:49.448547 2651 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-host-proc-sys-net\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.449014 kubelet[2651]: I1101 00:26:49.448567 2651 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3073c049-3cbc-4566-8992-694bc68bd443-cilium-ipsec-secrets\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.449014 kubelet[2651]: I1101 
00:26:49.448593 2651 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3073c049-3cbc-4566-8992-694bc68bd443-bpf-maps\") on node \"ip-172-31-20-188\" DevicePath \"\"" Nov 1 00:26:49.558489 systemd[1]: var-lib-kubelet-pods-3073c049\x2d3cbc\x2d4566\x2d8992\x2d694bc68bd443-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Nov 1 00:26:49.558693 systemd[1]: var-lib-kubelet-pods-3073c049\x2d3cbc\x2d4566\x2d8992\x2d694bc68bd443-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6t4ns.mount: Deactivated successfully. Nov 1 00:26:49.578164 systemd[1]: Removed slice kubepods-burstable-pod3073c049_3cbc_4566_8992_694bc68bd443.slice. Nov 1 00:26:50.089194 kubelet[2651]: I1101 00:26:50.089158 2651 scope.go:117] "RemoveContainer" containerID="c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a" Nov 1 00:26:50.094590 env[1666]: time="2025-11-01T00:26:50.094515476Z" level=info msg="RemoveContainer for \"c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a\"" Nov 1 00:26:50.102847 env[1666]: time="2025-11-01T00:26:50.102732077Z" level=info msg="RemoveContainer for \"c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a\" returns successfully" Nov 1 00:26:50.194325 systemd[1]: Created slice kubepods-burstable-pod1eed1af9_ff7f_4363_b041_4e1bc40b4c46.slice. 
Nov 1 00:26:50.254543 kubelet[2651]: I1101 00:26:50.254498 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-cilium-run\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.254893 kubelet[2651]: I1101 00:26:50.254835 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcw8d\" (UniqueName: \"kubernetes.io/projected/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-kube-api-access-jcw8d\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.255128 kubelet[2651]: I1101 00:26:50.255073 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-cni-path\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.255346 kubelet[2651]: I1101 00:26:50.255283 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-xtables-lock\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.255580 kubelet[2651]: I1101 00:26:50.255533 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-clustermesh-secrets\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.255797 kubelet[2651]: I1101 00:26:50.255740 2651 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-bpf-maps\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.255990 kubelet[2651]: I1101 00:26:50.255952 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-lib-modules\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.256186 kubelet[2651]: I1101 00:26:50.256147 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-cilium-ipsec-secrets\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.256414 kubelet[2651]: I1101 00:26:50.256364 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-host-proc-sys-net\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.256657 kubelet[2651]: I1101 00:26:50.256609 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-host-proc-sys-kernel\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.256863 kubelet[2651]: I1101 00:26:50.256824 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-hubble-tls\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.257069 kubelet[2651]: I1101 00:26:50.257022 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-cilium-cgroup\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.257286 kubelet[2651]: I1101 00:26:50.257239 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-cilium-config-path\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.257510 kubelet[2651]: I1101 00:26:50.257470 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-hostproc\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.257744 kubelet[2651]: I1101 00:26:50.257683 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1eed1af9-ff7f-4363-b041-4e1bc40b4c46-etc-cni-netd\") pod \"cilium-wvv46\" (UID: \"1eed1af9-ff7f-4363-b041-4e1bc40b4c46\") " pod="kube-system/cilium-wvv46" Nov 1 00:26:50.471417 kubelet[2651]: I1101 00:26:50.468063 2651 setters.go:618] "Node became not ready" node="ip-172-31-20-188" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T00:26:50Z","lastTransitionTime":"2025-11-01T00:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 1 00:26:50.500115 env[1666]: time="2025-11-01T00:26:50.500050558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wvv46,Uid:1eed1af9-ff7f-4363-b041-4e1bc40b4c46,Namespace:kube-system,Attempt:0,}" Nov 1 00:26:50.528805 env[1666]: time="2025-11-01T00:26:50.528668960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:50.528977 env[1666]: time="2025-11-01T00:26:50.528844544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:50.529048 env[1666]: time="2025-11-01T00:26:50.528968924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:50.529582 env[1666]: time="2025-11-01T00:26:50.529460229Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a5deaf7e358066574229d61a3c0ab2339d083731ff08639f634b8c0adc96ad9 pid=4610 runtime=io.containerd.runc.v2 Nov 1 00:26:50.552751 systemd[1]: Started cri-containerd-6a5deaf7e358066574229d61a3c0ab2339d083731ff08639f634b8c0adc96ad9.scope. 
Nov 1 00:26:50.632534 env[1666]: time="2025-11-01T00:26:50.632476259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wvv46,Uid:1eed1af9-ff7f-4363-b041-4e1bc40b4c46,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a5deaf7e358066574229d61a3c0ab2339d083731ff08639f634b8c0adc96ad9\"" Nov 1 00:26:50.646342 env[1666]: time="2025-11-01T00:26:50.646251159Z" level=info msg="CreateContainer within sandbox \"6a5deaf7e358066574229d61a3c0ab2339d083731ff08639f634b8c0adc96ad9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:26:50.669152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2073097208.mount: Deactivated successfully. Nov 1 00:26:50.681421 env[1666]: time="2025-11-01T00:26:50.681335865Z" level=info msg="CreateContainer within sandbox \"6a5deaf7e358066574229d61a3c0ab2339d083731ff08639f634b8c0adc96ad9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8dce5383b673e4b70855cc35f1e332eb46ba88f8465fbfd7dbdc6a8b05fb1209\"" Nov 1 00:26:50.682424 env[1666]: time="2025-11-01T00:26:50.682218190Z" level=info msg="StartContainer for \"8dce5383b673e4b70855cc35f1e332eb46ba88f8465fbfd7dbdc6a8b05fb1209\"" Nov 1 00:26:50.724752 systemd[1]: Started cri-containerd-8dce5383b673e4b70855cc35f1e332eb46ba88f8465fbfd7dbdc6a8b05fb1209.scope. Nov 1 00:26:50.797117 env[1666]: time="2025-11-01T00:26:50.797039571Z" level=info msg="StartContainer for \"8dce5383b673e4b70855cc35f1e332eb46ba88f8465fbfd7dbdc6a8b05fb1209\" returns successfully" Nov 1 00:26:50.819472 systemd[1]: cri-containerd-8dce5383b673e4b70855cc35f1e332eb46ba88f8465fbfd7dbdc6a8b05fb1209.scope: Deactivated successfully. 
Nov 1 00:26:50.869643 env[1666]: time="2025-11-01T00:26:50.869567297Z" level=info msg="shim disconnected" id=8dce5383b673e4b70855cc35f1e332eb46ba88f8465fbfd7dbdc6a8b05fb1209 Nov 1 00:26:50.869935 env[1666]: time="2025-11-01T00:26:50.869643401Z" level=warning msg="cleaning up after shim disconnected" id=8dce5383b673e4b70855cc35f1e332eb46ba88f8465fbfd7dbdc6a8b05fb1209 namespace=k8s.io Nov 1 00:26:50.869935 env[1666]: time="2025-11-01T00:26:50.869666837Z" level=info msg="cleaning up dead shim" Nov 1 00:26:50.883703 env[1666]: time="2025-11-01T00:26:50.883638946Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:26:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4695 runtime=io.containerd.runc.v2\n" Nov 1 00:26:51.113756 env[1666]: time="2025-11-01T00:26:51.113676746Z" level=info msg="CreateContainer within sandbox \"6a5deaf7e358066574229d61a3c0ab2339d083731ff08639f634b8c0adc96ad9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:26:51.140212 env[1666]: time="2025-11-01T00:26:51.140115921Z" level=info msg="CreateContainer within sandbox \"6a5deaf7e358066574229d61a3c0ab2339d083731ff08639f634b8c0adc96ad9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0d59c8a939604e785026e1e88af3e5e1292869e72d03d721c3ed6f966de1a17f\"" Nov 1 00:26:51.142072 env[1666]: time="2025-11-01T00:26:51.142021139Z" level=info msg="StartContainer for \"0d59c8a939604e785026e1e88af3e5e1292869e72d03d721c3ed6f966de1a17f\"" Nov 1 00:26:51.161259 kubelet[2651]: W1101 00:26:51.161189 2651 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3073c049_3cbc_4566_8992_694bc68bd443.slice/cri-containerd-c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a.scope WatchSource:0}: container "c0d642390dd034a7e209c57eea964066c765ea9ecf02ed882ae60a80be120f4a" in namespace "k8s.io": not found Nov 1 00:26:51.186743 systemd[1]: 
Started cri-containerd-0d59c8a939604e785026e1e88af3e5e1292869e72d03d721c3ed6f966de1a17f.scope. Nov 1 00:26:51.275154 env[1666]: time="2025-11-01T00:26:51.275086836Z" level=info msg="StartContainer for \"0d59c8a939604e785026e1e88af3e5e1292869e72d03d721c3ed6f966de1a17f\" returns successfully" Nov 1 00:26:51.288596 systemd[1]: cri-containerd-0d59c8a939604e785026e1e88af3e5e1292869e72d03d721c3ed6f966de1a17f.scope: Deactivated successfully. Nov 1 00:26:51.331924 env[1666]: time="2025-11-01T00:26:51.331852219Z" level=info msg="shim disconnected" id=0d59c8a939604e785026e1e88af3e5e1292869e72d03d721c3ed6f966de1a17f Nov 1 00:26:51.331924 env[1666]: time="2025-11-01T00:26:51.331923355Z" level=warning msg="cleaning up after shim disconnected" id=0d59c8a939604e785026e1e88af3e5e1292869e72d03d721c3ed6f966de1a17f namespace=k8s.io Nov 1 00:26:51.332279 env[1666]: time="2025-11-01T00:26:51.331945867Z" level=info msg="cleaning up dead shim" Nov 1 00:26:51.347842 env[1666]: time="2025-11-01T00:26:51.347741281Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:26:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4756 runtime=io.containerd.runc.v2\n" Nov 1 00:26:51.558799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dce5383b673e4b70855cc35f1e332eb46ba88f8465fbfd7dbdc6a8b05fb1209-rootfs.mount: Deactivated successfully. Nov 1 00:26:51.564978 kubelet[2651]: I1101 00:26:51.564911 2651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3073c049-3cbc-4566-8992-694bc68bd443" path="/var/lib/kubelet/pods/3073c049-3cbc-4566-8992-694bc68bd443/volumes" Nov 1 00:26:52.112591 env[1666]: time="2025-11-01T00:26:52.112532180Z" level=info msg="CreateContainer within sandbox \"6a5deaf7e358066574229d61a3c0ab2339d083731ff08639f634b8c0adc96ad9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:26:52.144580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount575624609.mount: Deactivated successfully. 
Nov 1 00:26:52.164412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3723796941.mount: Deactivated successfully. Nov 1 00:26:52.169710 env[1666]: time="2025-11-01T00:26:52.169622139Z" level=info msg="CreateContainer within sandbox \"6a5deaf7e358066574229d61a3c0ab2339d083731ff08639f634b8c0adc96ad9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fbd0782e0e0f75d78f398728d0aa3982b77b24d18e96264e4247b7183f9000d9\"" Nov 1 00:26:52.171169 env[1666]: time="2025-11-01T00:26:52.171073073Z" level=info msg="StartContainer for \"fbd0782e0e0f75d78f398728d0aa3982b77b24d18e96264e4247b7183f9000d9\"" Nov 1 00:26:52.204115 systemd[1]: Started cri-containerd-fbd0782e0e0f75d78f398728d0aa3982b77b24d18e96264e4247b7183f9000d9.scope. Nov 1 00:26:52.287200 env[1666]: time="2025-11-01T00:26:52.287119616Z" level=info msg="StartContainer for \"fbd0782e0e0f75d78f398728d0aa3982b77b24d18e96264e4247b7183f9000d9\" returns successfully" Nov 1 00:26:52.292029 systemd[1]: cri-containerd-fbd0782e0e0f75d78f398728d0aa3982b77b24d18e96264e4247b7183f9000d9.scope: Deactivated successfully. 
Nov 1 00:26:52.348867 env[1666]: time="2025-11-01T00:26:52.348789128Z" level=info msg="shim disconnected" id=fbd0782e0e0f75d78f398728d0aa3982b77b24d18e96264e4247b7183f9000d9 Nov 1 00:26:52.349144 env[1666]: time="2025-11-01T00:26:52.348868628Z" level=warning msg="cleaning up after shim disconnected" id=fbd0782e0e0f75d78f398728d0aa3982b77b24d18e96264e4247b7183f9000d9 namespace=k8s.io Nov 1 00:26:52.349144 env[1666]: time="2025-11-01T00:26:52.348890612Z" level=info msg="cleaning up dead shim" Nov 1 00:26:52.380870 env[1666]: time="2025-11-01T00:26:52.379818992Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:26:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4816 runtime=io.containerd.runc.v2\n" Nov 1 00:26:52.720951 kubelet[2651]: E1101 00:26:52.720825 2651 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:26:53.118395 env[1666]: time="2025-11-01T00:26:53.118242408Z" level=info msg="CreateContainer within sandbox \"6a5deaf7e358066574229d61a3c0ab2339d083731ff08639f634b8c0adc96ad9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:26:53.158682 env[1666]: time="2025-11-01T00:26:53.158593679Z" level=info msg="CreateContainer within sandbox \"6a5deaf7e358066574229d61a3c0ab2339d083731ff08639f634b8c0adc96ad9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"24fb4f077081a089ddfadf2ac77da0efd0f4573039b5244000a2bb5aadbd969f\"" Nov 1 00:26:53.162809 env[1666]: time="2025-11-01T00:26:53.162757576Z" level=info msg="StartContainer for \"24fb4f077081a089ddfadf2ac77da0efd0f4573039b5244000a2bb5aadbd969f\"" Nov 1 00:26:53.167183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3247323581.mount: Deactivated successfully. 
Nov 1 00:26:53.206701 systemd[1]: Started cri-containerd-24fb4f077081a089ddfadf2ac77da0efd0f4573039b5244000a2bb5aadbd969f.scope. Nov 1 00:26:53.271698 systemd[1]: cri-containerd-24fb4f077081a089ddfadf2ac77da0efd0f4573039b5244000a2bb5aadbd969f.scope: Deactivated successfully. Nov 1 00:26:53.275756 env[1666]: time="2025-11-01T00:26:53.275276246Z" level=info msg="StartContainer for \"24fb4f077081a089ddfadf2ac77da0efd0f4573039b5244000a2bb5aadbd969f\" returns successfully" Nov 1 00:26:53.324313 env[1666]: time="2025-11-01T00:26:53.324231995Z" level=info msg="shim disconnected" id=24fb4f077081a089ddfadf2ac77da0efd0f4573039b5244000a2bb5aadbd969f Nov 1 00:26:53.324691 env[1666]: time="2025-11-01T00:26:53.324654767Z" level=warning msg="cleaning up after shim disconnected" id=24fb4f077081a089ddfadf2ac77da0efd0f4573039b5244000a2bb5aadbd969f namespace=k8s.io Nov 1 00:26:53.324815 env[1666]: time="2025-11-01T00:26:53.324787331Z" level=info msg="cleaning up dead shim" Nov 1 00:26:53.340871 env[1666]: time="2025-11-01T00:26:53.340812342Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:26:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4872 runtime=io.containerd.runc.v2\n" Nov 1 00:26:54.124255 env[1666]: time="2025-11-01T00:26:54.124172898Z" level=info msg="CreateContainer within sandbox \"6a5deaf7e358066574229d61a3c0ab2339d083731ff08639f634b8c0adc96ad9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:26:54.168440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116671948.mount: Deactivated successfully. Nov 1 00:26:54.183867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983749858.mount: Deactivated successfully. 
Nov 1 00:26:54.192057 env[1666]: time="2025-11-01T00:26:54.191981532Z" level=info msg="CreateContainer within sandbox \"6a5deaf7e358066574229d61a3c0ab2339d083731ff08639f634b8c0adc96ad9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d3beec253b930f3946077c9bf629c512b0cfe49de1024e96d07dce66c655e68f\"" Nov 1 00:26:54.193008 env[1666]: time="2025-11-01T00:26:54.192927901Z" level=info msg="StartContainer for \"d3beec253b930f3946077c9bf629c512b0cfe49de1024e96d07dce66c655e68f\"" Nov 1 00:26:54.230847 systemd[1]: Started cri-containerd-d3beec253b930f3946077c9bf629c512b0cfe49de1024e96d07dce66c655e68f.scope. Nov 1 00:26:54.315978 env[1666]: time="2025-11-01T00:26:54.315892966Z" level=info msg="StartContainer for \"d3beec253b930f3946077c9bf629c512b0cfe49de1024e96d07dce66c655e68f\" returns successfully" Nov 1 00:26:54.355407 kubelet[2651]: W1101 00:26:54.355269 2651 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1eed1af9_ff7f_4363_b041_4e1bc40b4c46.slice/cri-containerd-8dce5383b673e4b70855cc35f1e332eb46ba88f8465fbfd7dbdc6a8b05fb1209.scope WatchSource:0}: task 8dce5383b673e4b70855cc35f1e332eb46ba88f8465fbfd7dbdc6a8b05fb1209 not found Nov 1 00:26:55.301334 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Nov 1 00:26:57.483957 kubelet[2651]: W1101 00:26:57.483905 2651 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1eed1af9_ff7f_4363_b041_4e1bc40b4c46.slice/cri-containerd-0d59c8a939604e785026e1e88af3e5e1292869e72d03d721c3ed6f966de1a17f.scope WatchSource:0}: task 0d59c8a939604e785026e1e88af3e5e1292869e72d03d721c3ed6f966de1a17f not found Nov 1 00:26:58.999625 systemd[1]: run-containerd-runc-k8s.io-d3beec253b930f3946077c9bf629c512b0cfe49de1024e96d07dce66c655e68f-runc.AvjFJr.mount: Deactivated successfully. 
Nov 1 00:26:59.495411 (udev-worker)[5433]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:26:59.497585 (udev-worker)[5434]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:26:59.501627 systemd-networkd[1386]: lxc_health: Link UP Nov 1 00:26:59.527696 systemd-networkd[1386]: lxc_health: Gained carrier Nov 1 00:26:59.528365 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:27:00.537059 kubelet[2651]: I1101 00:27:00.536945 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wvv46" podStartSLOduration=10.536922489 podStartE2EDuration="10.536922489s" podCreationTimestamp="2025-11-01 00:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:26:55.155632231 +0000 UTC m=+137.955869355" watchObservedRunningTime="2025-11-01 00:27:00.536922489 +0000 UTC m=+143.337159601" Nov 1 00:27:00.598137 kubelet[2651]: W1101 00:27:00.598064 2651 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1eed1af9_ff7f_4363_b041_4e1bc40b4c46.slice/cri-containerd-fbd0782e0e0f75d78f398728d0aa3982b77b24d18e96264e4247b7183f9000d9.scope WatchSource:0}: task fbd0782e0e0f75d78f398728d0aa3982b77b24d18e96264e4247b7183f9000d9 not found Nov 1 00:27:00.602501 systemd-networkd[1386]: lxc_health: Gained IPv6LL Nov 1 00:27:01.356267 systemd[1]: run-containerd-runc-k8s.io-d3beec253b930f3946077c9bf629c512b0cfe49de1024e96d07dce66c655e68f-runc.4ouDUJ.mount: Deactivated successfully. Nov 1 00:27:03.657868 systemd[1]: run-containerd-runc-k8s.io-d3beec253b930f3946077c9bf629c512b0cfe49de1024e96d07dce66c655e68f-runc.7snLMj.mount: Deactivated successfully. 
Nov 1 00:27:03.711394 kubelet[2651]: W1101 00:27:03.711180 2651 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1eed1af9_ff7f_4363_b041_4e1bc40b4c46.slice/cri-containerd-24fb4f077081a089ddfadf2ac77da0efd0f4573039b5244000a2bb5aadbd969f.scope WatchSource:0}: task 24fb4f077081a089ddfadf2ac77da0efd0f4573039b5244000a2bb5aadbd969f not found Nov 1 00:27:05.956645 systemd[1]: run-containerd-runc-k8s.io-d3beec253b930f3946077c9bf629c512b0cfe49de1024e96d07dce66c655e68f-runc.mrx34L.mount: Deactivated successfully. Nov 1 00:27:06.104861 sshd[4524]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:06.111714 systemd[1]: sshd@28-172.31.20.188:22-147.75.109.163:38090.service: Deactivated successfully. Nov 1 00:27:06.113202 systemd[1]: session-29.scope: Deactivated successfully. Nov 1 00:27:06.115349 systemd-logind[1655]: Session 29 logged out. Waiting for processes to exit. Nov 1 00:27:06.117643 systemd-logind[1655]: Removed session 29. Nov 1 00:27:20.890987 systemd[1]: cri-containerd-15d01210b6fd6aa1f3cc1a9563fd0e5a53c84eb8effa08ca8282b21699ead1ff.scope: Deactivated successfully. Nov 1 00:27:20.891581 systemd[1]: cri-containerd-15d01210b6fd6aa1f3cc1a9563fd0e5a53c84eb8effa08ca8282b21699ead1ff.scope: Consumed 6.377s CPU time. Nov 1 00:27:20.931834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15d01210b6fd6aa1f3cc1a9563fd0e5a53c84eb8effa08ca8282b21699ead1ff-rootfs.mount: Deactivated successfully. 
Nov 1 00:27:20.956105 env[1666]: time="2025-11-01T00:27:20.956040785Z" level=info msg="shim disconnected" id=15d01210b6fd6aa1f3cc1a9563fd0e5a53c84eb8effa08ca8282b21699ead1ff Nov 1 00:27:20.957039 env[1666]: time="2025-11-01T00:27:20.956954406Z" level=warning msg="cleaning up after shim disconnected" id=15d01210b6fd6aa1f3cc1a9563fd0e5a53c84eb8effa08ca8282b21699ead1ff namespace=k8s.io Nov 1 00:27:20.957174 env[1666]: time="2025-11-01T00:27:20.957145422Z" level=info msg="cleaning up dead shim" Nov 1 00:27:20.971575 env[1666]: time="2025-11-01T00:27:20.971521112Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:27:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5557 runtime=io.containerd.runc.v2\n" Nov 1 00:27:21.046262 kubelet[2651]: E1101 00:27:21.046168 2651 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-188?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 1 00:27:21.204281 kubelet[2651]: I1101 00:27:21.203279 2651 scope.go:117] "RemoveContainer" containerID="15d01210b6fd6aa1f3cc1a9563fd0e5a53c84eb8effa08ca8282b21699ead1ff" Nov 1 00:27:21.208153 env[1666]: time="2025-11-01T00:27:21.208077173Z" level=info msg="CreateContainer within sandbox \"edcd7427acb9e276a9a724bfd642f8bcc0113c5df09cd60b89c8d3959262fcbd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 1 00:27:21.232770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3324619343.mount: Deactivated successfully. 
Nov 1 00:27:21.247858 env[1666]: time="2025-11-01T00:27:21.247765951Z" level=info msg="CreateContainer within sandbox \"edcd7427acb9e276a9a724bfd642f8bcc0113c5df09cd60b89c8d3959262fcbd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e5e6d64498cf93b8f3d04543cd11f3e107999f6baabe2afb6a3962d18f19c045\"" Nov 1 00:27:21.248807 env[1666]: time="2025-11-01T00:27:21.248758868Z" level=info msg="StartContainer for \"e5e6d64498cf93b8f3d04543cd11f3e107999f6baabe2afb6a3962d18f19c045\"" Nov 1 00:27:21.298799 systemd[1]: Started cri-containerd-e5e6d64498cf93b8f3d04543cd11f3e107999f6baabe2afb6a3962d18f19c045.scope. Nov 1 00:27:21.393451 env[1666]: time="2025-11-01T00:27:21.393366435Z" level=info msg="StartContainer for \"e5e6d64498cf93b8f3d04543cd11f3e107999f6baabe2afb6a3962d18f19c045\" returns successfully" Nov 1 00:27:26.016687 systemd[1]: cri-containerd-d34ccbb596a9938e803c96a906b904dff31f56e5d695714ac167b8c5224dc8b8.scope: Deactivated successfully. Nov 1 00:27:26.017385 systemd[1]: cri-containerd-d34ccbb596a9938e803c96a906b904dff31f56e5d695714ac167b8c5224dc8b8.scope: Consumed 5.001s CPU time. Nov 1 00:27:26.056686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d34ccbb596a9938e803c96a906b904dff31f56e5d695714ac167b8c5224dc8b8-rootfs.mount: Deactivated successfully. 
Nov 1 00:27:26.073949 env[1666]: time="2025-11-01T00:27:26.073870615Z" level=info msg="shim disconnected" id=d34ccbb596a9938e803c96a906b904dff31f56e5d695714ac167b8c5224dc8b8 Nov 1 00:27:26.074766 env[1666]: time="2025-11-01T00:27:26.073947307Z" level=warning msg="cleaning up after shim disconnected" id=d34ccbb596a9938e803c96a906b904dff31f56e5d695714ac167b8c5224dc8b8 namespace=k8s.io Nov 1 00:27:26.074766 env[1666]: time="2025-11-01T00:27:26.073970971Z" level=info msg="cleaning up dead shim" Nov 1 00:27:26.088329 env[1666]: time="2025-11-01T00:27:26.088223121Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:27:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5620 runtime=io.containerd.runc.v2\n" Nov 1 00:27:26.221839 kubelet[2651]: I1101 00:27:26.221502 2651 scope.go:117] "RemoveContainer" containerID="d34ccbb596a9938e803c96a906b904dff31f56e5d695714ac167b8c5224dc8b8" Nov 1 00:27:26.225373 env[1666]: time="2025-11-01T00:27:26.225262231Z" level=info msg="CreateContainer within sandbox \"ec97c64ef3f69e4bc17b0cf0883a27ac04058a10b3d4ca18a0faa6036c6b53b2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 1 00:27:26.253923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1145174454.mount: Deactivated successfully. Nov 1 00:27:26.266632 env[1666]: time="2025-11-01T00:27:26.266568490Z" level=info msg="CreateContainer within sandbox \"ec97c64ef3f69e4bc17b0cf0883a27ac04058a10b3d4ca18a0faa6036c6b53b2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"85a9f98641229a2d360660802924429682ae11507e7501b8eaed489b4df9eb3a\"" Nov 1 00:27:26.267608 env[1666]: time="2025-11-01T00:27:26.267560051Z" level=info msg="StartContainer for \"85a9f98641229a2d360660802924429682ae11507e7501b8eaed489b4df9eb3a\"" Nov 1 00:27:26.306131 systemd[1]: Started cri-containerd-85a9f98641229a2d360660802924429682ae11507e7501b8eaed489b4df9eb3a.scope. 
Nov 1 00:27:26.394999 env[1666]: time="2025-11-01T00:27:26.394856603Z" level=info msg="StartContainer for \"85a9f98641229a2d360660802924429682ae11507e7501b8eaed489b4df9eb3a\" returns successfully" Nov 1 00:27:31.047434 kubelet[2651]: E1101 00:27:31.047375 2651 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-188?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 1 00:27:37.507144 env[1666]: time="2025-11-01T00:27:37.507075788Z" level=info msg="StopPodSandbox for \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\"" Nov 1 00:27:37.507832 env[1666]: time="2025-11-01T00:27:37.507273212Z" level=info msg="TearDown network for sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" successfully" Nov 1 00:27:37.507832 env[1666]: time="2025-11-01T00:27:37.507367340Z" level=info msg="StopPodSandbox for \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" returns successfully" Nov 1 00:27:37.508391 env[1666]: time="2025-11-01T00:27:37.508344813Z" level=info msg="RemovePodSandbox for \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\"" Nov 1 00:27:37.508612 env[1666]: time="2025-11-01T00:27:37.508546414Z" level=info msg="Forcibly stopping sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\"" Nov 1 00:27:37.508831 env[1666]: time="2025-11-01T00:27:37.508795858Z" level=info msg="TearDown network for sandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" successfully" Nov 1 00:27:37.515511 env[1666]: time="2025-11-01T00:27:37.515450296Z" level=info msg="RemovePodSandbox \"c7a164801e048997af9f0cf9ee486af30ed4c3e1fe17e464501460c6b7b6a3d5\" returns successfully" Nov 1 00:27:37.516738 env[1666]: time="2025-11-01T00:27:37.516694061Z" level=info msg="StopPodSandbox for 
\"7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8\"" Nov 1 00:27:37.517036 env[1666]: time="2025-11-01T00:27:37.516972869Z" level=info msg="TearDown network for sandbox \"7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8\" successfully" Nov 1 00:27:37.517198 env[1666]: time="2025-11-01T00:27:37.517163609Z" level=info msg="StopPodSandbox for \"7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8\" returns successfully" Nov 1 00:27:37.517900 env[1666]: time="2025-11-01T00:27:37.517854102Z" level=info msg="RemovePodSandbox for \"7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8\"" Nov 1 00:27:37.518027 env[1666]: time="2025-11-01T00:27:37.517906866Z" level=info msg="Forcibly stopping sandbox \"7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8\"" Nov 1 00:27:37.518110 env[1666]: time="2025-11-01T00:27:37.518062302Z" level=info msg="TearDown network for sandbox \"7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8\" successfully" Nov 1 00:27:37.524683 env[1666]: time="2025-11-01T00:27:37.524549880Z" level=info msg="RemovePodSandbox \"7f82ad5cc1b3a25f1845c361c28ab2bd09581cfdf524f587d303837e758866b8\" returns successfully" Nov 1 00:27:37.525517 env[1666]: time="2025-11-01T00:27:37.525473689Z" level=info msg="StopPodSandbox for \"a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4\"" Nov 1 00:27:37.525840 env[1666]: time="2025-11-01T00:27:37.525777109Z" level=info msg="TearDown network for sandbox \"a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4\" successfully" Nov 1 00:27:37.525978 env[1666]: time="2025-11-01T00:27:37.525940465Z" level=info msg="StopPodSandbox for \"a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4\" returns successfully" Nov 1 00:27:37.526816 env[1666]: time="2025-11-01T00:27:37.526753550Z" level=info msg="RemovePodSandbox for \"a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4\"" Nov 1 00:27:37.526983 
env[1666]: time="2025-11-01T00:27:37.526817306Z" level=info msg="Forcibly stopping sandbox \"a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4\"" Nov 1 00:27:37.527053 env[1666]: time="2025-11-01T00:27:37.527006330Z" level=info msg="TearDown network for sandbox \"a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4\" successfully" Nov 1 00:27:37.533582 env[1666]: time="2025-11-01T00:27:37.533472020Z" level=info msg="RemovePodSandbox \"a30a15df3c8ee1b71c601da9e6aebcecff6b183c7e83bc39168ab0a45edd17b4\" returns successfully" Nov 1 00:27:41.049086 kubelet[2651]: E1101 00:27:41.048498 2651 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-188?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"