Jul 12 00:25:26.019054 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 12 00:25:26.019092 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Jul 11 23:15:18 -00 2025
Jul 12 00:25:26.019115 kernel: efi: EFI v2.70 by EDK II
Jul 12 00:25:26.019130 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x716fcf98
Jul 12 00:25:26.019144 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:25:26.019157 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 12 00:25:26.019172 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 12 00:25:26.019230 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 12 00:25:26.019248 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 12 00:25:26.019262 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 12 00:25:26.019281 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 12 00:25:26.019295 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 12 00:25:26.019309 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 12 00:25:26.019323 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 12 00:25:26.019339 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 12 00:25:26.019358 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 12 00:25:26.019373 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 12 00:25:26.019387 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 12 00:25:26.019401 kernel: printk: bootconsole [uart0] enabled
Jul 12 00:25:26.019415 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:25:26.019430 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 12 00:25:26.019445 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Jul 12 00:25:26.019460 kernel: Zone ranges:
Jul 12 00:25:26.019474 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 12 00:25:26.019488 kernel: DMA32 empty
Jul 12 00:25:26.019503 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 12 00:25:26.019521 kernel: Movable zone start for each node
Jul 12 00:25:26.019536 kernel: Early memory node ranges
Jul 12 00:25:26.019550 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 12 00:25:26.019565 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 12 00:25:26.019579 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 12 00:25:26.019593 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 12 00:25:26.019607 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 12 00:25:26.019621 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 12 00:25:26.019635 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 12 00:25:26.019649 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 12 00:25:26.019663 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 12 00:25:26.019678 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 12 00:25:26.019724 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:25:26.019744 kernel: psci: PSCIv1.0 detected in firmware.
Jul 12 00:25:26.019767 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:25:26.019782 kernel: psci: Trusted OS migration not required
Jul 12 00:25:26.019797 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:25:26.019816 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jul 12 00:25:26.019845 kernel: ACPI: SRAT not present
Jul 12 00:25:26.019864 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 12 00:25:26.019879 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 12 00:25:26.019894 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 12 00:25:26.019909 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:25:26.019924 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:25:26.019939 kernel: CPU features: detected: Spectre-v2
Jul 12 00:25:26.019954 kernel: CPU features: detected: Spectre-v3a
Jul 12 00:25:26.019969 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:25:26.019984 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:25:26.020004 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:25:26.020019 kernel: CPU features: detected: ARM erratum 1742098
Jul 12 00:25:26.020034 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 12 00:25:26.020049 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jul 12 00:25:26.020064 kernel: Policy zone: Normal
Jul 12 00:25:26.020081 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65
Jul 12 00:25:26.020097 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:25:26.020113 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:25:26.020128 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:25:26.020143 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:25:26.020162 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 12 00:25:26.020178 kernel: Memory: 3824460K/4030464K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 206004K reserved, 0K cma-reserved)
Jul 12 00:25:26.020220 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 12 00:25:26.020241 kernel: trace event string verifier disabled
Jul 12 00:25:26.020256 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:25:26.020272 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:25:26.020287 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 12 00:25:26.020303 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:25:26.020318 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:25:26.020333 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:25:26.020348 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 12 00:25:26.020362 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:25:26.020383 kernel: GICv3: 96 SPIs implemented
Jul 12 00:25:26.020411 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:25:26.020429 kernel: GICv3: Distributor has no Range Selector support
Jul 12 00:25:26.020444 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:25:26.020459 kernel: GICv3: 16 PPIs implemented
Jul 12 00:25:26.020474 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 12 00:25:26.020488 kernel: ACPI: SRAT not present
Jul 12 00:25:26.020503 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 12 00:25:26.020518 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:25:26.020534 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:25:26.020549 kernel: GICv3: using LPI property table @0x00000004000b0000
Jul 12 00:25:26.020569 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 12 00:25:26.020584 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Jul 12 00:25:26.020599 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 12 00:25:26.020615 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 12 00:25:26.020630 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 12 00:25:26.020646 kernel: Console: colour dummy device 80x25
Jul 12 00:25:26.020661 kernel: printk: console [tty1] enabled
Jul 12 00:25:26.020677 kernel: ACPI: Core revision 20210730
Jul 12 00:25:26.020693 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 12 00:25:26.020708 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:25:26.020728 kernel: LSM: Security Framework initializing
Jul 12 00:25:26.020743 kernel: SELinux: Initializing.
Jul 12 00:25:26.020759 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:25:26.020774 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:25:26.020790 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:25:26.020805 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 12 00:25:26.020820 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 12 00:25:26.020835 kernel: Remapping and enabling EFI services.
Jul 12 00:25:26.020851 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:25:26.020866 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:25:26.020886 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 12 00:25:26.020902 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Jul 12 00:25:26.020917 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 12 00:25:26.020932 kernel: smp: Brought up 1 node, 2 CPUs
Jul 12 00:25:26.020948 kernel: SMP: Total of 2 processors activated.
Jul 12 00:25:26.020963 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:25:26.020978 kernel: CPU features: detected: 32-bit EL1 Support
Jul 12 00:25:26.020994 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:25:26.021009 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:25:26.021028 kernel: alternatives: patching kernel code
Jul 12 00:25:26.021044 kernel: devtmpfs: initialized
Jul 12 00:25:26.021070 kernel: KASLR disabled due to lack of seed
Jul 12 00:25:26.021090 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:25:26.021106 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 12 00:25:26.021122 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:25:26.021138 kernel: SMBIOS 3.0.0 present.
Jul 12 00:25:26.021153 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 12 00:25:26.021169 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:25:26.021186 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:25:26.021272 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:25:26.021297 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:25:26.021313 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:25:26.021329 kernel: audit: type=2000 audit(0.296:1): state=initialized audit_enabled=0 res=1
Jul 12 00:25:26.021345 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:25:26.021362 kernel: cpuidle: using governor menu
Jul 12 00:25:26.021382 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:25:26.021399 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:25:26.021416 kernel: ACPI: bus type PCI registered
Jul 12 00:25:26.021432 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:25:26.021447 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:25:26.021463 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:25:26.021480 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:25:26.021496 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:25:26.021512 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:25:26.021533 kernel: cryptd: max_cpu_qlen set to 1000
Jul 12 00:25:26.021549 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:25:26.021582 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:25:26.021599 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:25:26.021616 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:25:26.021632 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 12 00:25:26.021648 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 12 00:25:26.021664 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 12 00:25:26.021681 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:25:26.021697 kernel: ACPI: Interpreter enabled
Jul 12 00:25:26.021718 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:25:26.021734 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:25:26.021750 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 12 00:25:26.022031 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:25:26.022274 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:25:26.022468 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:25:26.022661 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 12 00:25:26.027350 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 12 00:25:26.027396 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 12 00:25:26.027414 kernel: acpiphp: Slot [1] registered
Jul 12 00:25:26.027431 kernel: acpiphp: Slot [2] registered
Jul 12 00:25:26.027447 kernel: acpiphp: Slot [3] registered
Jul 12 00:25:26.027463 kernel: acpiphp: Slot [4] registered
Jul 12 00:25:26.027479 kernel: acpiphp: Slot [5] registered
Jul 12 00:25:26.027495 kernel: acpiphp: Slot [6] registered
Jul 12 00:25:26.027511 kernel: acpiphp: Slot [7] registered
Jul 12 00:25:26.027536 kernel: acpiphp: Slot [8] registered
Jul 12 00:25:26.027552 kernel: acpiphp: Slot [9] registered
Jul 12 00:25:26.027568 kernel: acpiphp: Slot [10] registered
Jul 12 00:25:26.027584 kernel: acpiphp: Slot [11] registered
Jul 12 00:25:26.027599 kernel: acpiphp: Slot [12] registered
Jul 12 00:25:26.027615 kernel: acpiphp: Slot [13] registered
Jul 12 00:25:26.027644 kernel: acpiphp: Slot [14] registered
Jul 12 00:25:26.027665 kernel: acpiphp: Slot [15] registered
Jul 12 00:25:26.027682 kernel: acpiphp: Slot [16] registered
Jul 12 00:25:26.027703 kernel: acpiphp: Slot [17] registered
Jul 12 00:25:26.027719 kernel: acpiphp: Slot [18] registered
Jul 12 00:25:26.027735 kernel: acpiphp: Slot [19] registered
Jul 12 00:25:26.027751 kernel: acpiphp: Slot [20] registered
Jul 12 00:25:26.027766 kernel: acpiphp: Slot [21] registered
Jul 12 00:25:26.027782 kernel: acpiphp: Slot [22] registered
Jul 12 00:25:26.027798 kernel: acpiphp: Slot [23] registered
Jul 12 00:25:26.027814 kernel: acpiphp: Slot [24] registered
Jul 12 00:25:26.027830 kernel: acpiphp: Slot [25] registered
Jul 12 00:25:26.027846 kernel: acpiphp: Slot [26] registered
Jul 12 00:25:26.027867 kernel: acpiphp: Slot [27] registered
Jul 12 00:25:26.027883 kernel: acpiphp: Slot [28] registered
Jul 12 00:25:26.027898 kernel: acpiphp: Slot [29] registered
Jul 12 00:25:26.027914 kernel: acpiphp: Slot [30] registered
Jul 12 00:25:26.027944 kernel: acpiphp: Slot [31] registered
Jul 12 00:25:26.027961 kernel: PCI host bridge to bus 0000:00
Jul 12 00:25:26.028211 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 12 00:25:26.028399 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:25:26.028579 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 12 00:25:26.028754 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 12 00:25:26.028991 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 12 00:25:26.034427 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 12 00:25:26.034693 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 12 00:25:26.034916 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 12 00:25:26.035120 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 12 00:25:26.035374 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 12 00:25:26.035591 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 12 00:25:26.035779 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 12 00:25:26.035973 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 12 00:25:26.036166 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 12 00:25:26.036385 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 12 00:25:26.036586 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 12 00:25:26.036782 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 12 00:25:26.036975 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 12 00:25:26.037167 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 12 00:25:26.037398 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 12 00:25:26.037588 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 12 00:25:26.037768 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:25:26.037965 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 12 00:25:26.037992 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:25:26.038010 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:25:26.038026 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:25:26.038043 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:25:26.038059 kernel: iommu: Default domain type: Translated
Jul 12 00:25:26.038076 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:25:26.038092 kernel: vgaarb: loaded
Jul 12 00:25:26.038109 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 12 00:25:26.038131 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 12 00:25:26.038148 kernel: PTP clock support registered
Jul 12 00:25:26.038164 kernel: Registered efivars operations
Jul 12 00:25:26.038180 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:25:26.038247 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:25:26.038266 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:25:26.038282 kernel: pnp: PnP ACPI init
Jul 12 00:25:26.038500 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 12 00:25:26.038530 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:25:26.038547 kernel: NET: Registered PF_INET protocol family
Jul 12 00:25:26.038564 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:25:26.038580 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:25:26.038597 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:25:26.038613 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:25:26.038630 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 12 00:25:26.038646 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:25:26.038663 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:25:26.038683 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:25:26.038699 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:25:26.038715 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:25:26.038731 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 12 00:25:26.038748 kernel: kvm [1]: HYP mode not available
Jul 12 00:25:26.038764 kernel: Initialise system trusted keyrings
Jul 12 00:25:26.038780 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:25:26.038796 kernel: Key type asymmetric registered
Jul 12 00:25:26.038812 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:25:26.038832 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 12 00:25:26.038848 kernel: io scheduler mq-deadline registered
Jul 12 00:25:26.038865 kernel: io scheduler kyber registered
Jul 12 00:25:26.038881 kernel: io scheduler bfq registered
Jul 12 00:25:26.039087 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 12 00:25:26.039113 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:25:26.039130 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:25:26.039146 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 12 00:25:26.039162 kernel: ACPI: button: Sleep Button [SLPB]
Jul 12 00:25:26.039227 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:25:26.039251 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 12 00:25:26.039450 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 12 00:25:26.039474 kernel: printk: console [ttyS0] disabled
Jul 12 00:25:26.039491 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 12 00:25:26.039508 kernel: printk: console [ttyS0] enabled
Jul 12 00:25:26.039524 kernel: printk: bootconsole [uart0] disabled
Jul 12 00:25:26.039540 kernel: thunder_xcv, ver 1.0
Jul 12 00:25:26.039556 kernel: thunder_bgx, ver 1.0
Jul 12 00:25:26.039578 kernel: nicpf, ver 1.0
Jul 12 00:25:26.039594 kernel: nicvf, ver 1.0
Jul 12 00:25:26.039797 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:25:26.039975 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:25:25 UTC (1752279925)
Jul 12 00:25:26.039998 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:25:26.040014 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:25:26.040030 kernel: Segment Routing with IPv6
Jul 12 00:25:26.040046 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:25:26.040067 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:25:26.040083 kernel: Key type dns_resolver registered
Jul 12 00:25:26.040098 kernel: registered taskstats version 1
Jul 12 00:25:26.040114 kernel: Loading compiled-in X.509 certificates
Jul 12 00:25:26.040130 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: de2ee1d04443f96c763927c453375bbe23b5752a'
Jul 12 00:25:26.040146 kernel: Key type .fscrypt registered
Jul 12 00:25:26.040162 kernel: Key type fscrypt-provisioning registered
Jul 12 00:25:26.040178 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:25:26.040386 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:25:26.040417 kernel: ima: No architecture policies found
Jul 12 00:25:26.040433 kernel: clk: Disabling unused clocks
Jul 12 00:25:26.040449 kernel: Freeing unused kernel memory: 36416K
Jul 12 00:25:26.040465 kernel: Run /init as init process
Jul 12 00:25:26.040481 kernel: with arguments:
Jul 12 00:25:26.040497 kernel: /init
Jul 12 00:25:26.040512 kernel: with environment:
Jul 12 00:25:26.040529 kernel: HOME=/
Jul 12 00:25:26.040545 kernel: TERM=linux
Jul 12 00:25:26.040565 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:25:26.040602 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 12 00:25:26.040624 systemd[1]: Detected virtualization amazon.
Jul 12 00:25:26.040642 systemd[1]: Detected architecture arm64.
Jul 12 00:25:26.040660 systemd[1]: Running in initrd.
Jul 12 00:25:26.040677 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:25:26.040695 systemd[1]: Hostname set to .
Jul 12 00:25:26.040718 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:25:26.040736 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:25:26.040794 systemd[1]: Started systemd-ask-password-console.path.
Jul 12 00:25:26.040812 systemd[1]: Reached target cryptsetup.target.
Jul 12 00:25:26.040829 systemd[1]: Reached target paths.target.
Jul 12 00:25:26.040846 systemd[1]: Reached target slices.target.
Jul 12 00:25:26.040864 systemd[1]: Reached target swap.target.
Jul 12 00:25:26.040881 systemd[1]: Reached target timers.target.
Jul 12 00:25:26.040904 systemd[1]: Listening on iscsid.socket.
Jul 12 00:25:26.040922 systemd[1]: Listening on iscsiuio.socket.
Jul 12 00:25:26.040939 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 12 00:25:26.040956 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 12 00:25:26.040974 systemd[1]: Listening on systemd-journald.socket.
Jul 12 00:25:26.040991 systemd[1]: Listening on systemd-networkd.socket.
Jul 12 00:25:26.041009 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 12 00:25:26.041026 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 12 00:25:26.041048 systemd[1]: Reached target sockets.target.
Jul 12 00:25:26.041065 systemd[1]: Starting kmod-static-nodes.service...
Jul 12 00:25:26.041083 systemd[1]: Finished network-cleanup.service.
Jul 12 00:25:26.041100 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:25:26.041117 systemd[1]: Starting systemd-journald.service...
Jul 12 00:25:26.041135 systemd[1]: Starting systemd-modules-load.service...
Jul 12 00:25:26.041152 systemd[1]: Starting systemd-resolved.service...
Jul 12 00:25:26.041170 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 12 00:25:26.041188 systemd[1]: Finished kmod-static-nodes.service.
Jul 12 00:25:26.041233 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:25:26.041252 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 12 00:25:26.041269 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 12 00:25:26.041286 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 12 00:25:26.041304 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 12 00:25:26.041327 systemd-journald[310]: Journal started
Jul 12 00:25:26.041421 systemd-journald[310]: Runtime Journal (/run/log/journal/ec2ea5b23982245e3aac5e8b2f1512c1) is 8.0M, max 75.4M, 67.4M free.
Jul 12 00:25:25.977028 systemd-modules-load[311]: Inserted module 'overlay'
Jul 12 00:25:26.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.065236 kernel: audit: type=1130 audit(1752279926.046:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.065305 systemd[1]: Started systemd-journald.service.
Jul 12 00:25:26.094227 kernel: audit: type=1130 audit(1752279926.071:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.094311 kernel: audit: type=1130 audit(1752279926.082:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.073959 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 12 00:25:26.085132 systemd[1]: Starting dracut-cmdline.service...
Jul 12 00:25:26.113031 systemd-resolved[312]: Positive Trust Anchors:
Jul 12 00:25:26.113422 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:25:26.127805 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:25:26.127852 dracut-cmdline[327]: dracut-dracut-053
Jul 12 00:25:26.113478 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 12 00:25:26.151781 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65
Jul 12 00:25:26.188014 kernel: Bridge firewalling registered
Jul 12 00:25:26.177107 systemd-modules-load[311]: Inserted module 'br_netfilter'
Jul 12 00:25:26.203764 kernel: SCSI subsystem initialized
Jul 12 00:25:26.225389 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:25:26.225476 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:25:26.227045 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 12 00:25:26.236902 systemd-modules-load[311]: Inserted module 'dm_multipath'
Jul 12 00:25:26.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.238294 systemd[1]: Finished systemd-modules-load.service.
Jul 12 00:25:26.257090 systemd[1]: Starting systemd-sysctl.service...
Jul 12 00:25:26.260899 kernel: audit: type=1130 audit(1752279926.241:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.284356 systemd[1]: Finished systemd-sysctl.service.
Jul 12 00:25:26.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.295223 kernel: audit: type=1130 audit(1752279926.286:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.341231 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:25:26.363233 kernel: iscsi: registered transport (tcp)
Jul 12 00:25:26.391348 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:25:26.391420 kernel: QLogic iSCSI HBA Driver
Jul 12 00:25:26.574235 kernel: random: crng init done
Jul 12 00:25:26.574465 systemd-resolved[312]: Defaulting to hostname 'linux'.
Jul 12 00:25:26.578010 systemd[1]: Started systemd-resolved.service.
Jul 12 00:25:26.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.588603 systemd[1]: Reached target nss-lookup.target.
Jul 12 00:25:26.592089 kernel: audit: type=1130 audit(1752279926.578:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.607616 systemd[1]: Finished dracut-cmdline.service.
Jul 12 00:25:26.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.623477 kernel: audit: type=1130 audit(1752279926.610:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:26.621514 systemd[1]: Starting dracut-pre-udev.service...
Jul 12 00:25:26.687256 kernel: raid6: neonx8 gen() 6379 MB/s
Jul 12 00:25:26.705229 kernel: raid6: neonx8 xor() 4739 MB/s
Jul 12 00:25:26.723228 kernel: raid6: neonx4 gen() 6506 MB/s
Jul 12 00:25:26.741229 kernel: raid6: neonx4 xor() 4932 MB/s
Jul 12 00:25:26.759227 kernel: raid6: neonx2 gen() 5745 MB/s
Jul 12 00:25:26.777231 kernel: raid6: neonx2 xor() 4521 MB/s
Jul 12 00:25:26.795228 kernel: raid6: neonx1 gen() 4461 MB/s
Jul 12 00:25:26.813230 kernel: raid6: neonx1 xor() 3669 MB/s
Jul 12 00:25:26.831228 kernel: raid6: int64x8 gen() 3426 MB/s
Jul 12 00:25:26.849229 kernel: raid6: int64x8 xor() 2081 MB/s
Jul 12 00:25:26.867228 kernel: raid6: int64x4 gen() 3802 MB/s
Jul 12 00:25:26.885227 kernel: raid6: int64x4 xor() 2184 MB/s
Jul 12 00:25:26.903228 kernel: raid6: int64x2 gen() 3591 MB/s
Jul 12 00:25:26.921228 kernel: raid6: int64x2 xor() 1936 MB/s
Jul 12 00:25:26.939229 kernel: raid6: int64x1 gen() 2746 MB/s
Jul 12 00:25:26.958778 kernel: raid6: int64x1 xor() 1444 MB/s
Jul 12 00:25:26.958810 kernel: raid6: using algorithm neonx4 gen() 6506 MB/s
Jul 12 00:25:26.958833 kernel: raid6: .... xor() 4932 MB/s, rmw enabled
Jul 12 00:25:26.960618 kernel: raid6: using neon recovery algorithm
Jul 12 00:25:26.979236 kernel: xor: measuring software checksum speed
Jul 12 00:25:26.981228 kernel: 8regs : 8526 MB/sec
Jul 12 00:25:26.983227 kernel: 32regs : 10120 MB/sec
Jul 12 00:25:26.986909 kernel: arm64_neon : 8628 MB/sec
Jul 12 00:25:26.986943 kernel: xor: using function: 32regs (10120 MB/sec)
Jul 12 00:25:27.084241 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 12 00:25:27.102070 systemd[1]: Finished dracut-pre-udev.service.
Jul 12 00:25:27.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:27.113224 kernel: audit: type=1130 audit(1752279927.102:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:27.112000 audit: BPF prog-id=7 op=LOAD
Jul 12 00:25:27.114233 systemd[1]: Starting systemd-udevd.service...
Jul 12 00:25:27.119572 kernel: audit: type=1334 audit(1752279927.112:10): prog-id=7 op=LOAD
Jul 12 00:25:27.112000 audit: BPF prog-id=8 op=LOAD
Jul 12 00:25:27.143946 systemd-udevd[509]: Using default interface naming scheme 'v252'.
Jul 12 00:25:27.154972 systemd[1]: Started systemd-udevd.service.
Jul 12 00:25:27.161035 systemd[1]: Starting dracut-pre-trigger.service...
Jul 12 00:25:27.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:27.193647 dracut-pre-trigger[513]: rd.md=0: removing MD RAID activation
Jul 12 00:25:27.255114 systemd[1]: Finished dracut-pre-trigger.service.
Jul 12 00:25:27.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:27.259884 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:25:27.359588 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:25:27.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:27.487678 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 12 00:25:27.487742 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 12 00:25:27.506841 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 12 00:25:27.507070 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 12 00:25:27.507349 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 12 00:25:27.507375 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 12 00:25:27.507623 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:cf:8e:a5:bc:63 Jul 12 00:25:27.510243 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 12 00:25:27.520680 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:25:27.520745 kernel: GPT:9289727 != 16777215 Jul 12 00:25:27.520769 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 00:25:27.522875 kernel: GPT:9289727 != 16777215 Jul 12 00:25:27.524206 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:25:27.526125 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:25:27.533957 (udev-worker)[565]: Network interface NamePolicy= disabled on kernel command line. 
Jul 12 00:25:27.604240 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (561) Jul 12 00:25:27.646917 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 12 00:25:27.695905 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 12 00:25:27.704044 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 12 00:25:27.712147 systemd[1]: Starting disk-uuid.service... Jul 12 00:25:27.727393 disk-uuid[666]: Primary Header is updated. Jul 12 00:25:27.727393 disk-uuid[666]: Secondary Entries is updated. Jul 12 00:25:27.727393 disk-uuid[666]: Secondary Header is updated. Jul 12 00:25:27.739102 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 12 00:25:27.753541 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 12 00:25:28.760089 disk-uuid[667]: The operation has completed successfully. Jul 12 00:25:28.762393 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:25:28.951452 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:25:28.952024 systemd[1]: Finished disk-uuid.service. Jul 12 00:25:28.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:28.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:28.968780 systemd[1]: Starting verity-setup.service... Jul 12 00:25:29.009545 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:25:29.108242 systemd[1]: Found device dev-mapper-usr.device. Jul 12 00:25:29.114182 systemd[1]: Mounting sysusr-usr.mount... Jul 12 00:25:29.121778 systemd[1]: Finished verity-setup.service. 
Jul 12 00:25:29.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:29.224241 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 12 00:25:29.225078 systemd[1]: Mounted sysusr-usr.mount. Jul 12 00:25:29.228305 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 12 00:25:29.232533 systemd[1]: Starting ignition-setup.service... Jul 12 00:25:29.239564 systemd[1]: Starting parse-ip-for-networkd.service... Jul 12 00:25:29.283575 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:25:29.283654 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:25:29.285832 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:25:29.319236 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:25:29.337329 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 12 00:25:29.368402 systemd[1]: Finished ignition-setup.service. Jul 12 00:25:29.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:29.370142 systemd[1]: Starting ignition-fetch-offline.service... Jul 12 00:25:29.391804 systemd[1]: Finished parse-ip-for-networkd.service. Jul 12 00:25:29.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:29.396000 audit: BPF prog-id=9 op=LOAD Jul 12 00:25:29.399435 systemd[1]: Starting systemd-networkd.service... 
Jul 12 00:25:29.449426 systemd-networkd[1012]: lo: Link UP Jul 12 00:25:29.449448 systemd-networkd[1012]: lo: Gained carrier Jul 12 00:25:29.450948 systemd-networkd[1012]: Enumeration completed Jul 12 00:25:29.451128 systemd[1]: Started systemd-networkd.service. Jul 12 00:25:29.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:29.453668 systemd-networkd[1012]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:25:29.457167 systemd[1]: Reached target network.target. Jul 12 00:25:29.463806 systemd-networkd[1012]: eth0: Link UP Jul 12 00:25:29.463816 systemd-networkd[1012]: eth0: Gained carrier Jul 12 00:25:29.463965 systemd[1]: Starting iscsiuio.service... Jul 12 00:25:29.481990 systemd-networkd[1012]: eth0: DHCPv4 address 172.31.19.35/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 12 00:25:29.489681 systemd[1]: Started iscsiuio.service. Jul 12 00:25:29.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:29.496082 systemd[1]: Starting iscsid.service... Jul 12 00:25:29.505310 iscsid[1018]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:25:29.505310 iscsid[1018]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 12 00:25:29.505310 iscsid[1018]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 12 00:25:29.505310 iscsid[1018]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 12 00:25:29.505310 iscsid[1018]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:25:29.527169 iscsid[1018]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 12 00:25:29.534847 systemd[1]: Started iscsid.service. Jul 12 00:25:29.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:29.539470 systemd[1]: Starting dracut-initqueue.service... Jul 12 00:25:29.564537 systemd[1]: Finished dracut-initqueue.service. Jul 12 00:25:29.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:29.572283 systemd[1]: Reached target remote-fs-pre.target. Jul 12 00:25:29.572467 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:25:29.573143 systemd[1]: Reached target remote-fs.target. Jul 12 00:25:29.575129 systemd[1]: Starting dracut-pre-mount.service... Jul 12 00:25:29.602454 systemd[1]: Finished dracut-pre-mount.service. Jul 12 00:25:29.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:29.835438 ignition[1008]: Ignition 2.14.0 Jul 12 00:25:29.835467 ignition[1008]: Stage: fetch-offline Jul 12 00:25:29.837230 ignition[1008]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:25:29.837329 ignition[1008]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:25:29.865728 ignition[1008]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:25:29.868443 ignition[1008]: Ignition finished successfully Jul 12 00:25:29.871595 systemd[1]: Finished ignition-fetch-offline.service. Jul 12 00:25:29.882247 kernel: kauditd_printk_skb: 15 callbacks suppressed Jul 12 00:25:29.882317 kernel: audit: type=1130 audit(1752279929.874:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:29.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:29.876770 systemd[1]: Starting ignition-fetch.service... 
Jul 12 00:25:29.893984 ignition[1037]: Ignition 2.14.0 Jul 12 00:25:29.894022 ignition[1037]: Stage: fetch Jul 12 00:25:29.894350 ignition[1037]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:25:29.894409 ignition[1037]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:25:29.910350 ignition[1037]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:25:29.912872 ignition[1037]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:25:29.924378 ignition[1037]: INFO : PUT result: OK Jul 12 00:25:29.927696 ignition[1037]: DEBUG : parsed url from cmdline: "" Jul 12 00:25:29.927696 ignition[1037]: INFO : no config URL provided Jul 12 00:25:29.927696 ignition[1037]: INFO : reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:25:29.934324 ignition[1037]: INFO : no config at "/usr/lib/ignition/user.ign" Jul 12 00:25:29.934324 ignition[1037]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:25:29.934324 ignition[1037]: INFO : PUT result: OK Jul 12 00:25:29.934324 ignition[1037]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 12 00:25:29.943625 ignition[1037]: INFO : GET result: OK Jul 12 00:25:29.943625 ignition[1037]: DEBUG : parsing config with SHA512: 263c2fac3b006798acb98ca012bedd4748140710ebccd10b5ba5af6d4e7a45dec9e743abdb9d404e27a394f483f9d7190c1335e99b8387310819938bcb02c755 Jul 12 00:25:29.955224 unknown[1037]: fetched base config from "system" Jul 12 00:25:29.957210 unknown[1037]: fetched base config from "system" Jul 12 00:25:29.957628 unknown[1037]: fetched user config from "aws" Jul 12 00:25:29.958864 ignition[1037]: fetch: fetch complete Jul 12 00:25:29.965414 systemd[1]: Finished ignition-fetch.service. 
Jul 12 00:25:29.958877 ignition[1037]: fetch: fetch passed Jul 12 00:25:29.958967 ignition[1037]: Ignition finished successfully Jul 12 00:25:29.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:29.980490 systemd[1]: Starting ignition-kargs.service... Jul 12 00:25:29.989765 kernel: audit: type=1130 audit(1752279929.972:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.004154 ignition[1043]: Ignition 2.14.0 Jul 12 00:25:30.004181 ignition[1043]: Stage: kargs Jul 12 00:25:30.004515 ignition[1043]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:25:30.004569 ignition[1043]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:25:30.019656 ignition[1043]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:25:30.022266 ignition[1043]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:25:30.025524 ignition[1043]: INFO : PUT result: OK Jul 12 00:25:30.030551 ignition[1043]: kargs: kargs passed Jul 12 00:25:30.030655 ignition[1043]: Ignition finished successfully Jul 12 00:25:30.033139 systemd[1]: Finished ignition-kargs.service. Jul 12 00:25:30.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.047786 kernel: audit: type=1130 audit(1752279930.036:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:30.047167 systemd[1]: Starting ignition-disks.service... Jul 12 00:25:30.062448 ignition[1050]: Ignition 2.14.0 Jul 12 00:25:30.062476 ignition[1050]: Stage: disks Jul 12 00:25:30.062775 ignition[1050]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:25:30.062834 ignition[1050]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:25:30.078246 ignition[1050]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:25:30.080673 ignition[1050]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:25:30.083806 ignition[1050]: INFO : PUT result: OK Jul 12 00:25:30.088960 ignition[1050]: disks: disks passed Jul 12 00:25:30.089278 ignition[1050]: Ignition finished successfully Jul 12 00:25:30.093655 systemd[1]: Finished ignition-disks.service. Jul 12 00:25:30.106319 kernel: audit: type=1130 audit(1752279930.092:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.094019 systemd[1]: Reached target initrd-root-device.target. Jul 12 00:25:30.108328 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:25:30.113262 systemd[1]: Reached target local-fs.target. Jul 12 00:25:30.116506 systemd[1]: Reached target sysinit.target. Jul 12 00:25:30.118271 systemd[1]: Reached target basic.target. Jul 12 00:25:30.122986 systemd[1]: Starting systemd-fsck-root.service... Jul 12 00:25:30.168130 systemd-fsck[1058]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 12 00:25:30.172922 systemd[1]: Finished systemd-fsck-root.service. 
Jul 12 00:25:30.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.187509 kernel: audit: type=1130 audit(1752279930.171:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.174431 systemd[1]: Mounting sysroot.mount... Jul 12 00:25:30.207250 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 12 00:25:30.209254 systemd[1]: Mounted sysroot.mount. Jul 12 00:25:30.209537 systemd[1]: Reached target initrd-root-fs.target. Jul 12 00:25:30.225496 systemd[1]: Mounting sysroot-usr.mount... Jul 12 00:25:30.227859 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 12 00:25:30.227938 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:25:30.227992 systemd[1]: Reached target ignition-diskful.target. Jul 12 00:25:30.250552 systemd[1]: Mounted sysroot-usr.mount. Jul 12 00:25:30.261663 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:25:30.266981 systemd[1]: Starting initrd-setup-root.service... 
Jul 12 00:25:30.285008 initrd-setup-root[1080]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:25:30.296230 initrd-setup-root[1088]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:25:30.303215 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1075) Jul 12 00:25:30.313363 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:25:30.313430 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:25:30.316339 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:25:30.317255 initrd-setup-root[1096]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:25:30.325667 initrd-setup-root[1120]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:25:30.342264 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:25:30.353637 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:25:30.430135 systemd[1]: Finished initrd-setup-root.service. Jul 12 00:25:30.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.435139 systemd[1]: Starting ignition-mount.service... Jul 12 00:25:30.454681 kernel: audit: type=1130 audit(1752279930.432:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.445762 systemd[1]: Starting sysroot-boot.service... Jul 12 00:25:30.458715 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 12 00:25:30.461155 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Jul 12 00:25:30.480164 ignition[1140]: INFO : Ignition 2.14.0 Jul 12 00:25:30.485308 ignition[1140]: INFO : Stage: mount Jul 12 00:25:30.485308 ignition[1140]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:25:30.485308 ignition[1140]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:25:30.505512 systemd[1]: Finished sysroot-boot.service. Jul 12 00:25:30.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.523019 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:25:30.523019 ignition[1140]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:25:30.528321 kernel: audit: type=1130 audit(1752279930.514:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.528359 ignition[1140]: INFO : PUT result: OK Jul 12 00:25:30.533785 ignition[1140]: INFO : mount: mount passed Jul 12 00:25:30.535751 ignition[1140]: INFO : Ignition finished successfully Jul 12 00:25:30.536140 systemd[1]: Finished ignition-mount.service. Jul 12 00:25:30.555342 kernel: audit: type=1130 audit(1752279930.539:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.542715 systemd[1]: Starting ignition-files.service... 
Jul 12 00:25:30.564144 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:25:30.588681 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1151) Jul 12 00:25:30.594268 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:25:30.594314 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:25:30.594339 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:25:30.610231 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:25:30.615762 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:25:30.634745 ignition[1170]: INFO : Ignition 2.14.0 Jul 12 00:25:30.634745 ignition[1170]: INFO : Stage: files Jul 12 00:25:30.638301 ignition[1170]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:25:30.638301 ignition[1170]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:25:30.653711 ignition[1170]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:25:30.656774 ignition[1170]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:25:30.660255 ignition[1170]: INFO : PUT result: OK Jul 12 00:25:30.665564 ignition[1170]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:25:30.671250 ignition[1170]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:25:30.671250 ignition[1170]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:25:30.695033 ignition[1170]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:25:30.698490 ignition[1170]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:25:30.702666 unknown[1170]: wrote ssh authorized keys file for user: core Jul 12 
00:25:30.705051 ignition[1170]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:25:30.708892 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 12 00:25:30.713031 ignition[1170]: INFO : GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 12 00:25:30.823070 ignition[1170]: INFO : GET result: OK Jul 12 00:25:30.893383 systemd-networkd[1012]: eth0: Gained IPv6LL Jul 12 00:25:31.038066 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 12 00:25:31.042114 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:25:31.042114 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:25:31.042114 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Jul 12 00:25:31.042114 ignition[1170]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:25:31.065401 ignition[1170]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3180071889" Jul 12 00:25:31.068613 ignition[1170]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3180071889": device or resource busy Jul 12 00:25:31.068613 ignition[1170]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3180071889", trying btrfs: device or resource busy Jul 12 00:25:31.068613 ignition[1170]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3180071889" Jul 12 00:25:31.078866 ignition[1170]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3180071889" Jul 12 00:25:31.084478 ignition[1170]: INFO : op(3): [started] 
unmounting "/mnt/oem3180071889"
Jul 12 00:25:31.087045 ignition[1170]: INFO : op(3): [finished] unmounting "/mnt/oem3180071889"
Jul 12 00:25:31.087045 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Jul 12 00:25:31.087045 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:25:31.087045 ignition[1170]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 12 00:25:31.304988 ignition[1170]: INFO : GET result: OK
Jul 12 00:25:31.477343 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:25:31.481279 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:25:31.481279 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:25:31.481279 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:25:31.481279 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:25:31.481279 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:25:31.481279 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:25:31.481279 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:25:31.481279 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:25:31.481279 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:25:31.481279 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:25:31.521877 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 12 00:25:31.521877 ignition[1170]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 12 00:25:31.537260 ignition[1170]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4009286138"
Jul 12 00:25:31.537260 ignition[1170]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4009286138": device or resource busy
Jul 12 00:25:31.537260 ignition[1170]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4009286138", trying btrfs: device or resource busy
Jul 12 00:25:31.537260 ignition[1170]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4009286138"
Jul 12 00:25:31.537260 ignition[1170]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4009286138"
Jul 12 00:25:31.537260 ignition[1170]: INFO : op(6): [started] unmounting "/mnt/oem4009286138"
Jul 12 00:25:31.537260 ignition[1170]: INFO : op(6): [finished] unmounting "/mnt/oem4009286138"
Jul 12 00:25:31.537260 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 12 00:25:31.537260 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Jul 12 00:25:31.537260 ignition[1170]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 12 00:25:31.575255 systemd[1]: mnt-oem4009286138.mount: Deactivated successfully.
Jul 12 00:25:31.592420 ignition[1170]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2309138237"
Jul 12 00:25:31.595729 ignition[1170]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2309138237": device or resource busy
Jul 12 00:25:31.595729 ignition[1170]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2309138237", trying btrfs: device or resource busy
Jul 12 00:25:31.595729 ignition[1170]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2309138237"
Jul 12 00:25:31.595729 ignition[1170]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2309138237"
Jul 12 00:25:31.608876 ignition[1170]: INFO : op(9): [started] unmounting "/mnt/oem2309138237"
Jul 12 00:25:31.611249 ignition[1170]: INFO : op(9): [finished] unmounting "/mnt/oem2309138237"
Jul 12 00:25:31.613645 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Jul 12 00:25:31.617574 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:25:31.617574 ignition[1170]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 12 00:25:32.169662 ignition[1170]: INFO : GET result: OK
Jul 12 00:25:32.748906 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:25:32.753470 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Jul 12 00:25:32.753470 ignition[1170]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 12 00:25:32.769830 ignition[1170]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2371790802"
Jul 12 00:25:32.772821 ignition[1170]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2371790802": device or resource busy
Jul 12 00:25:32.772821 ignition[1170]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2371790802", trying btrfs: device or resource busy
Jul 12 00:25:32.772821 ignition[1170]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2371790802"
Jul 12 00:25:32.787529 ignition[1170]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2371790802"
Jul 12 00:25:32.787529 ignition[1170]: INFO : op(c): [started] unmounting "/mnt/oem2371790802"
Jul 12 00:25:32.792937 ignition[1170]: INFO : op(c): [finished] unmounting "/mnt/oem2371790802"
Jul 12 00:25:32.795533 ignition[1170]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Jul 12 00:25:32.806928 systemd[1]: mnt-oem2371790802.mount: Deactivated successfully.
Jul 12 00:25:32.812270 ignition[1170]: INFO : files: op(10): [started] processing unit "nvidia.service"
Jul 12 00:25:32.814948 ignition[1170]: INFO : files: op(10): [finished] processing unit "nvidia.service"
Jul 12 00:25:32.817633 ignition[1170]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service"
Jul 12 00:25:32.823332 ignition[1170]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service"
Jul 12 00:25:32.826340 ignition[1170]: INFO : files: op(12): [started] processing unit "amazon-ssm-agent.service"
Jul 12 00:25:32.829182 ignition[1170]: INFO : files: op(12): op(13): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Jul 12 00:25:32.838013 ignition[1170]: INFO : files: op(12): op(13): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Jul 12 00:25:32.842258 ignition[1170]: INFO : files: op(12): [finished] processing unit "amazon-ssm-agent.service"
Jul 12 00:25:32.844981 ignition[1170]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Jul 12 00:25:32.847709 ignition[1170]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:25:32.851846 ignition[1170]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:25:32.851846 ignition[1170]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Jul 12 00:25:32.858660 ignition[1170]: INFO : files: op(16): [started] setting preset to enabled for "nvidia.service"
Jul 12 00:25:32.858660 ignition[1170]: INFO : files: op(16): [finished] setting preset to enabled for "nvidia.service"
Jul 12 00:25:32.858660 ignition[1170]: INFO : files: op(17): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 12 00:25:32.858660 ignition[1170]: INFO : files: op(17): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 12 00:25:32.858660 ignition[1170]: INFO : files: op(18): [started] setting preset to enabled for "amazon-ssm-agent.service"
Jul 12 00:25:32.879475 ignition[1170]: INFO : files: op(18): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Jul 12 00:25:32.879475 ignition[1170]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:25:32.879475 ignition[1170]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:25:32.879475 ignition[1170]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:25:32.879475 ignition[1170]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:25:32.879475 ignition[1170]: INFO : files: files passed
Jul 12 00:25:32.879475 ignition[1170]: INFO : Ignition finished successfully
Jul 12 00:25:32.900754 systemd[1]: Finished ignition-files.service.
Jul 12 00:25:32.922250 kernel: audit: type=1130 audit(1752279932.899:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:32.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:32.911394 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 12 00:25:32.922589 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 12 00:25:32.932988 initrd-setup-root-after-ignition[1194]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:25:32.924050 systemd[1]: Starting ignition-quench.service...
Jul 12 00:25:32.940459 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 12 00:25:32.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:32.945688 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:25:32.956122 kernel: audit: type=1130 audit(1752279932.943:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:32.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:32.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:32.945874 systemd[1]: Finished ignition-quench.service.
Jul 12 00:25:32.954655 systemd[1]: Reached target ignition-complete.target.
Jul 12 00:25:32.959789 systemd[1]: Starting initrd-parse-etc.service...
Jul 12 00:25:32.989898 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:25:32.992069 systemd[1]: Finished initrd-parse-etc.service.
Jul 12 00:25:32.998179 systemd[1]: Reached target initrd-fs.target.
Jul 12 00:25:32.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:32.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.001531 systemd[1]: Reached target initrd.target.
Jul 12 00:25:33.004903 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 12 00:25:33.009086 systemd[1]: Starting dracut-pre-pivot.service...
Jul 12 00:25:33.030486 systemd[1]: Finished dracut-pre-pivot.service.
Jul 12 00:25:33.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.035556 systemd[1]: Starting initrd-cleanup.service...
Jul 12 00:25:33.054354 systemd[1]: Stopped target nss-lookup.target.
Jul 12 00:25:33.058295 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 12 00:25:33.062225 systemd[1]: Stopped target timers.target.
Jul 12 00:25:33.065594 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:25:33.067866 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 12 00:25:33.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.071626 systemd[1]: Stopped target initrd.target.
Jul 12 00:25:33.074945 systemd[1]: Stopped target basic.target.
Jul 12 00:25:33.078116 systemd[1]: Stopped target ignition-complete.target.
Jul 12 00:25:33.085329 systemd[1]: Stopped target ignition-diskful.target.
Jul 12 00:25:33.089019 systemd[1]: Stopped target initrd-root-device.target.
Jul 12 00:25:33.092837 systemd[1]: Stopped target remote-fs.target.
Jul 12 00:25:33.096285 systemd[1]: Stopped target remote-fs-pre.target.
Jul 12 00:25:33.099752 systemd[1]: Stopped target sysinit.target.
Jul 12 00:25:33.103075 systemd[1]: Stopped target local-fs.target.
Jul 12 00:25:33.106395 systemd[1]: Stopped target local-fs-pre.target.
Jul 12 00:25:33.109934 systemd[1]: Stopped target swap.target.
Jul 12 00:25:33.112953 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:25:33.115276 systemd[1]: Stopped dracut-pre-mount.service.
Jul 12 00:25:33.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.118861 systemd[1]: Stopped target cryptsetup.target.
Jul 12 00:25:33.125045 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:25:33.127273 systemd[1]: Stopped dracut-initqueue.service.
Jul 12 00:25:33.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.131015 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:25:33.133548 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 12 00:25:33.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.137806 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:25:33.140018 systemd[1]: Stopped ignition-files.service.
Jul 12 00:25:33.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.144966 systemd[1]: Stopping ignition-mount.service...
Jul 12 00:25:33.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.147286 systemd[1]: Stopping iscsiuio.service...
Jul 12 00:25:33.148709 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:25:33.148938 systemd[1]: Stopped kmod-static-nodes.service.
Jul 12 00:25:33.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.162759 systemd[1]: Stopping sysroot-boot.service...
Jul 12 00:25:33.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.166217 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:25:33.166517 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 12 00:25:33.169125 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:25:33.170403 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 12 00:25:33.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.178486 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 12 00:25:33.179820 systemd[1]: Stopped iscsiuio.service.
Jul 12 00:25:33.194494 ignition[1208]: INFO : Ignition 2.14.0
Jul 12 00:25:33.196500 ignition[1208]: INFO : Stage: umount
Jul 12 00:25:33.197252 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:25:33.197457 systemd[1]: Finished initrd-cleanup.service.
Jul 12 00:25:33.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.209147 ignition[1208]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 12 00:25:33.209147 ignition[1208]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 12 00:25:33.228718 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:25:33.230905 ignition[1208]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:25:33.233883 ignition[1208]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:25:33.237361 ignition[1208]: INFO : PUT result: OK
Jul 12 00:25:33.242935 ignition[1208]: INFO : umount: umount passed
Jul 12 00:25:33.244961 ignition[1208]: INFO : Ignition finished successfully
Jul 12 00:25:33.249771 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:25:33.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.249958 systemd[1]: Stopped ignition-mount.service.
Jul 12 00:25:33.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.252073 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:25:33.252170 systemd[1]: Stopped ignition-disks.service.
Jul 12 00:25:33.254044 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:25:33.254133 systemd[1]: Stopped ignition-kargs.service.
Jul 12 00:25:33.256549 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 12 00:25:33.256632 systemd[1]: Stopped ignition-fetch.service.
Jul 12 00:25:33.258589 systemd[1]: Stopped target network.target.
Jul 12 00:25:33.262154 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:25:33.262273 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 12 00:25:33.264270 systemd[1]: Stopped target paths.target.
Jul 12 00:25:33.267431 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:25:33.271283 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 12 00:25:33.284324 systemd[1]: Stopped target slices.target.
Jul 12 00:25:33.285967 systemd[1]: Stopped target sockets.target.
Jul 12 00:25:33.289297 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:25:33.293078 systemd[1]: Closed iscsid.socket.
Jul 12 00:25:33.299673 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:25:33.299758 systemd[1]: Closed iscsiuio.socket.
Jul 12 00:25:33.302810 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:25:33.305986 systemd[1]: Stopped ignition-setup.service.
Jul 12 00:25:33.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.314140 systemd[1]: Stopping systemd-networkd.service...
Jul 12 00:25:33.317466 systemd[1]: Stopping systemd-resolved.service...
Jul 12 00:25:33.321174 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:25:33.321255 systemd-networkd[1012]: eth0: DHCPv6 lease lost
Jul 12 00:25:33.326153 systemd[1]: Stopped sysroot-boot.service.
Jul 12 00:25:33.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.329836 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:25:33.332180 systemd[1]: Stopped systemd-networkd.service.
Jul 12 00:25:33.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.346000 audit: BPF prog-id=9 op=UNLOAD
Jul 12 00:25:33.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.343844 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:25:33.371000 audit: BPF prog-id=6 op=UNLOAD
Jul 12 00:25:33.344037 systemd[1]: Stopped systemd-resolved.service.
Jul 12 00:25:33.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.347831 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:25:33.347899 systemd[1]: Closed systemd-networkd.socket.
Jul 12 00:25:33.348013 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:25:33.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.348091 systemd[1]: Stopped initrd-setup-root.service.
Jul 12 00:25:33.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.351699 systemd[1]: Stopping network-cleanup.service...
Jul 12 00:25:33.353277 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:25:33.353391 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 12 00:25:33.355317 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:25:33.355421 systemd[1]: Stopped systemd-sysctl.service.
Jul 12 00:25:33.357460 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:25:33.357551 systemd[1]: Stopped systemd-modules-load.service.
Jul 12 00:25:33.359190 systemd[1]: Stopping systemd-udevd.service...
Jul 12 00:25:33.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:33.361392 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 12 00:25:33.379784 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:25:33.380094 systemd[1]: Stopped systemd-udevd.service.
Jul 12 00:25:33.382668 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:25:33.382867 systemd[1]: Stopped network-cleanup.service.
Jul 12 00:25:33.384936 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:25:33.385020 systemd[1]: Closed systemd-udevd-control.socket.
Jul 12 00:25:33.393331 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:25:33.393485 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 12 00:25:33.396803 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:25:33.396898 systemd[1]: Stopped dracut-pre-udev.service.
Jul 12 00:25:33.398731 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:25:33.398814 systemd[1]: Stopped dracut-cmdline.service.
Jul 12 00:25:33.400651 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:25:33.400725 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 12 00:25:33.405876 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 12 00:25:33.411590 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:25:33.411716 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 12 00:25:33.433548 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:25:33.433764 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 12 00:25:33.440145 systemd[1]: Reached target initrd-switch-root.target.
Jul 12 00:25:33.457513 systemd[1]: Starting initrd-switch-root.service...
Jul 12 00:25:33.496488 systemd[1]: Switching root.
Jul 12 00:25:33.525263 iscsid[1018]: iscsid shutting down.
Jul 12 00:25:33.526877 systemd-journald[310]: Received SIGTERM from PID 1 (n/a).
Jul 12 00:25:33.526953 systemd-journald[310]: Journal stopped
Jul 12 00:25:38.142305 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 12 00:25:38.142418 kernel: SELinux: Class anon_inode not defined in policy.
Jul 12 00:25:38.142451 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 12 00:25:38.142481 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 00:25:38.142516 kernel: SELinux: policy capability open_perms=1
Jul 12 00:25:38.142550 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 00:25:38.142581 kernel: SELinux: policy capability always_check_network=0
Jul 12 00:25:38.142616 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 00:25:38.142646 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 00:25:38.142677 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 00:25:38.142706 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 00:25:38.142736 systemd[1]: Successfully loaded SELinux policy in 72.997ms.
Jul 12 00:25:38.142794 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.609ms.
Jul 12 00:25:38.142830 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 12 00:25:38.142863 systemd[1]: Detected virtualization amazon.
Jul 12 00:25:38.142894 systemd[1]: Detected architecture arm64.
Jul 12 00:25:38.142924 systemd[1]: Detected first boot.
Jul 12 00:25:38.142963 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:25:38.142994 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 12 00:25:38.143026 systemd[1]: Populated /etc with preset unit settings.
Jul 12 00:25:38.143069 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 12 00:25:38.143126 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 12 00:25:38.143163 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:25:38.143309 kernel: kauditd_printk_skb: 55 callbacks suppressed
Jul 12 00:25:38.143347 kernel: audit: type=1334 audit(1752279937.667:84): prog-id=12 op=LOAD
Jul 12 00:25:38.143399 kernel: audit: type=1334 audit(1752279937.667:85): prog-id=3 op=UNLOAD
Jul 12 00:25:38.143433 kernel: audit: type=1334 audit(1752279937.669:86): prog-id=13 op=LOAD
Jul 12 00:25:38.143470 kernel: audit: type=1334 audit(1752279937.672:87): prog-id=14 op=LOAD
Jul 12 00:25:38.143502 kernel: audit: type=1334 audit(1752279937.672:88): prog-id=4 op=UNLOAD
Jul 12 00:25:38.143954 kernel: audit: type=1334 audit(1752279937.672:89): prog-id=5 op=UNLOAD
Jul 12 00:25:38.144674 kernel: audit: type=1334 audit(1752279937.677:90): prog-id=15 op=LOAD
Jul 12 00:25:38.145117 systemd[1]: iscsid.service: Deactivated successfully.
Jul 12 00:25:38.145178 kernel: audit: type=1334 audit(1752279937.677:91): prog-id=12 op=UNLOAD
Jul 12 00:25:38.145229 kernel: audit: type=1334 audit(1752279937.679:92): prog-id=16 op=LOAD
Jul 12 00:25:38.145262 systemd[1]: Stopped iscsid.service.
Jul 12 00:25:38.145294 kernel: audit: type=1334 audit(1752279937.682:93): prog-id=17 op=LOAD
Jul 12 00:25:38.145329 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 12 00:25:38.145362 systemd[1]: Stopped initrd-switch-root.service.
Jul 12 00:25:38.145394 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:25:38.145427 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 12 00:25:38.145460 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 12 00:25:38.145492 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Jul 12 00:25:38.145521 systemd[1]: Created slice system-getty.slice.
Jul 12 00:25:38.145555 systemd[1]: Created slice system-modprobe.slice.
Jul 12 00:25:38.145587 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 12 00:25:38.145619 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 12 00:25:38.145651 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 12 00:25:38.145681 systemd[1]: Created slice user.slice.
Jul 12 00:25:38.145712 systemd[1]: Started systemd-ask-password-console.path.
Jul 12 00:25:38.145743 systemd[1]: Started systemd-ask-password-wall.path.
Jul 12 00:25:38.148046 systemd[1]: Set up automount boot.automount.
Jul 12 00:25:38.148178 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 12 00:25:38.149866 systemd[1]: Stopped target initrd-switch-root.target.
Jul 12 00:25:38.149903 systemd[1]: Stopped target initrd-fs.target.
Jul 12 00:25:38.149933 systemd[1]: Stopped target initrd-root-fs.target.
Jul 12 00:25:38.149965 systemd[1]: Reached target integritysetup.target.
Jul 12 00:25:38.149997 systemd[1]: Reached target remote-cryptsetup.target.
Jul 12 00:25:38.150028 systemd[1]: Reached target remote-fs.target.
Jul 12 00:25:38.150060 systemd[1]: Reached target slices.target.
Jul 12 00:25:38.150091 systemd[1]: Reached target swap.target.
Jul 12 00:25:38.150120 systemd[1]: Reached target torcx.target.
Jul 12 00:25:38.150154 systemd[1]: Reached target veritysetup.target.
Jul 12 00:25:38.150188 systemd[1]: Listening on systemd-coredump.socket.
Jul 12 00:25:38.150247 systemd[1]: Listening on systemd-initctl.socket.
Jul 12 00:25:38.150278 systemd[1]: Listening on systemd-networkd.socket.
Jul 12 00:25:38.150308 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 12 00:25:38.150338 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 12 00:25:38.150369 systemd[1]: Listening on systemd-userdbd.socket.
Jul 12 00:25:38.150399 systemd[1]: Mounting dev-hugepages.mount...
Jul 12 00:25:38.150428 systemd[1]: Mounting dev-mqueue.mount...
Jul 12 00:25:38.150457 systemd[1]: Mounting media.mount...
Jul 12 00:25:38.152891 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 12 00:25:38.154291 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 12 00:25:38.154333 systemd[1]: Mounting tmp.mount...
Jul 12 00:25:38.154365 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 12 00:25:38.154398 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:25:38.154428 systemd[1]: Starting kmod-static-nodes.service...
Jul 12 00:25:38.154457 systemd[1]: Starting modprobe@configfs.service...
Jul 12 00:25:38.154497 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:25:38.154529 systemd[1]: Starting modprobe@drm.service...
Jul 12 00:25:38.154567 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:25:38.154599 systemd[1]: Starting modprobe@fuse.service...
Jul 12 00:25:38.154628 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:25:38.154658 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 00:25:38.154687 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 12 00:25:38.154716 systemd[1]: Stopped systemd-fsck-root.service.
Jul 12 00:25:38.154747 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 12 00:25:38.154777 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 12 00:25:38.154807 systemd[1]: Stopped systemd-journald.service.
Jul 12 00:25:38.154842 systemd[1]: Starting systemd-journald.service...
Jul 12 00:25:38.154871 systemd[1]: Starting systemd-modules-load.service...
Jul 12 00:25:38.154900 systemd[1]: Starting systemd-network-generator.service...
Jul 12 00:25:38.154929 systemd[1]: Starting systemd-remount-fs.service...
Jul 12 00:25:38.154958 systemd[1]: Starting systemd-udev-trigger.service...
Jul 12 00:25:38.154989 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:25:38.155018 systemd[1]: Stopped verity-setup.service. Jul 12 00:25:38.155049 systemd[1]: Mounted dev-hugepages.mount. Jul 12 00:25:38.155096 systemd[1]: Mounted dev-mqueue.mount. Jul 12 00:25:38.155134 kernel: fuse: init (API version 7.34) Jul 12 00:25:38.155164 systemd[1]: Mounted media.mount. Jul 12 00:25:38.155209 systemd[1]: Mounted sys-kernel-debug.mount. Jul 12 00:25:38.155244 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 12 00:25:38.155273 systemd[1]: Mounted tmp.mount. Jul 12 00:25:38.155302 systemd[1]: Finished kmod-static-nodes.service. Jul 12 00:25:38.155332 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:25:38.155364 systemd[1]: Finished modprobe@configfs.service. Jul 12 00:25:38.155398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:25:38.155430 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:25:38.155460 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:25:38.155489 systemd[1]: Finished modprobe@drm.service. Jul 12 00:25:38.155523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:25:38.155554 kernel: loop: module loaded Jul 12 00:25:38.155586 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:25:38.155616 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:25:38.155649 systemd[1]: Finished modprobe@fuse.service. Jul 12 00:25:38.155678 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:25:38.155707 systemd[1]: Finished modprobe@loop.service. Jul 12 00:25:38.155738 systemd[1]: Finished systemd-modules-load.service. Jul 12 00:25:38.155770 systemd[1]: Finished systemd-network-generator.service. Jul 12 00:25:38.155805 systemd-journald[1328]: Journal started Jul 12 00:25:38.155913 systemd-journald[1328]: Runtime Journal (/run/log/journal/ec2ea5b23982245e3aac5e8b2f1512c1) is 8.0M, max 75.4M, 67.4M free. 
Jul 12 00:25:33.882000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:25:33.992000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:25:33.992000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:25:33.992000 audit: BPF prog-id=10 op=LOAD Jul 12 00:25:33.992000 audit: BPF prog-id=10 op=UNLOAD Jul 12 00:25:33.992000 audit: BPF prog-id=11 op=LOAD Jul 12 00:25:33.992000 audit: BPF prog-id=11 op=UNLOAD Jul 12 00:25:34.142000 audit[1241]: AVC avc: denied { associate } for pid=1241 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 12 00:25:34.142000 audit[1241]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1224 pid=1241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.142000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 12 00:25:34.146000 audit[1241]: AVC avc: denied { associate } for pid=1241 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 12 00:25:34.146000 audit[1241]: SYSCALL 
arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145975 a2=1ed a3=0 items=2 ppid=1224 pid=1241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.146000 audit: CWD cwd="/" Jul 12 00:25:34.146000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:25:34.146000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:25:34.146000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 12 00:25:37.667000 audit: BPF prog-id=12 op=LOAD Jul 12 00:25:37.667000 audit: BPF prog-id=3 op=UNLOAD Jul 12 00:25:37.669000 audit: BPF prog-id=13 op=LOAD Jul 12 00:25:37.672000 audit: BPF prog-id=14 op=LOAD Jul 12 00:25:37.672000 audit: BPF prog-id=4 op=UNLOAD Jul 12 00:25:37.672000 audit: BPF prog-id=5 op=UNLOAD Jul 12 00:25:37.677000 audit: BPF prog-id=15 op=LOAD Jul 12 00:25:37.677000 audit: BPF prog-id=12 op=UNLOAD Jul 12 00:25:37.679000 audit: BPF prog-id=16 op=LOAD Jul 12 00:25:37.682000 audit: BPF prog-id=17 op=LOAD Jul 12 00:25:37.682000 audit: BPF prog-id=13 op=UNLOAD Jul 12 00:25:37.682000 audit: BPF prog-id=14 op=UNLOAD Jul 12 00:25:37.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:37.695000 audit: BPF prog-id=15 op=UNLOAD Jul 12 00:25:37.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:37.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:37.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:37.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:37.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:37.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:37.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:37.997000 audit: BPF prog-id=18 op=LOAD Jul 12 00:25:37.997000 audit: BPF prog-id=19 op=LOAD Jul 12 00:25:37.997000 audit: BPF prog-id=20 op=LOAD Jul 12 00:25:37.997000 audit: BPF prog-id=16 op=UNLOAD Jul 12 00:25:37.997000 audit: BPF prog-id=17 op=UNLOAD Jul 12 00:25:38.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.164095 systemd[1]: Started systemd-journald.service. Jul 12 00:25:38.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 12 00:25:38.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:38.138000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 12 00:25:38.138000 audit[1328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffcc2b2fb0 a2=4000 a3=1 items=0 ppid=1 pid=1328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:38.138000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 12 00:25:38.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:34.138118 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:25:37.665700 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:25:38.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:34.139004 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 12 00:25:37.665722 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Jul 12 00:25:34.139052 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 12 00:25:37.685449 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:25:34.139134 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 12 00:25:38.165499 systemd[1]: Finished systemd-remount-fs.service. Jul 12 00:25:34.139161 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 12 00:25:38.168104 systemd[1]: Reached target network-pre.target. 
Jul 12 00:25:34.139251 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 12 00:25:34.139282 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 12 00:25:34.139694 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 12 00:25:34.139774 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 12 00:25:38.172420 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 12 00:25:34.139810 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 12 00:25:34.142291 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 12 00:25:34.142379 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 12 00:25:34.142429 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Jul 12 00:25:34.142470 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=info 
msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 12 00:25:34.142521 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Jul 12 00:25:34.142559 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 12 00:25:36.881609 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:36Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:25:36.882124 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:36Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:25:36.882393 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:36Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:25:36.882820 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:36Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:25:36.882924 
/usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:36Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 12 00:25:36.883060 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2025-07-12T00:25:36Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 12 00:25:38.180024 systemd[1]: Mounting sys-kernel-config.mount... Jul 12 00:25:38.181731 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:25:38.185482 systemd[1]: Starting systemd-hwdb-update.service... Jul 12 00:25:38.189574 systemd[1]: Starting systemd-journal-flush.service... Jul 12 00:25:38.191521 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:25:38.195497 systemd[1]: Starting systemd-random-seed.service... Jul 12 00:25:38.197390 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:25:38.199814 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:25:38.207768 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 12 00:25:38.210757 systemd[1]: Mounted sys-kernel-config.mount. Jul 12 00:25:38.231715 systemd-journald[1328]: Time spent on flushing to /var/log/journal/ec2ea5b23982245e3aac5e8b2f1512c1 is 42.051ms for 1132 entries. Jul 12 00:25:38.231715 systemd-journald[1328]: System Journal (/var/log/journal/ec2ea5b23982245e3aac5e8b2f1512c1) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:25:38.307332 systemd-journald[1328]: Received client request to flush runtime journal. 
Jul 12 00:25:38.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.237543 systemd[1]: Finished systemd-random-seed.service. Jul 12 00:25:38.239929 systemd[1]: Reached target first-boot-complete.target. Jul 12 00:25:38.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.254472 systemd[1]: Finished flatcar-tmpfiles.service. Jul 12 00:25:38.258931 systemd[1]: Starting systemd-sysusers.service... Jul 12 00:25:38.278025 systemd[1]: Finished systemd-sysctl.service. Jul 12 00:25:38.308862 systemd[1]: Finished systemd-journal-flush.service. Jul 12 00:25:38.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:38.338001 systemd[1]: Finished systemd-sysusers.service. Jul 12 00:25:38.369998 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:25:38.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 12 00:25:38.374275 systemd[1]: Starting systemd-udev-settle.service... Jul 12 00:25:38.391332 udevadm[1361]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 12 00:25:39.043815 systemd[1]: Finished systemd-hwdb-update.service. Jul 12 00:25:39.049622 systemd[1]: Starting systemd-udevd.service... Jul 12 00:25:39.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:39.047000 audit: BPF prog-id=21 op=LOAD Jul 12 00:25:39.047000 audit: BPF prog-id=22 op=LOAD Jul 12 00:25:39.047000 audit: BPF prog-id=7 op=UNLOAD Jul 12 00:25:39.047000 audit: BPF prog-id=8 op=UNLOAD Jul 12 00:25:39.087374 systemd-udevd[1362]: Using default interface naming scheme 'v252'. Jul 12 00:25:39.132442 systemd[1]: Started systemd-udevd.service. Jul 12 00:25:39.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:39.137000 audit: BPF prog-id=23 op=LOAD Jul 12 00:25:39.140501 systemd[1]: Starting systemd-networkd.service... Jul 12 00:25:39.153000 audit: BPF prog-id=24 op=LOAD Jul 12 00:25:39.153000 audit: BPF prog-id=25 op=LOAD Jul 12 00:25:39.153000 audit: BPF prog-id=26 op=LOAD Jul 12 00:25:39.156145 systemd[1]: Starting systemd-userdbd.service... Jul 12 00:25:39.231314 systemd[1]: Started systemd-userdbd.service. Jul 12 00:25:39.232568 (udev-worker)[1370]: Network interface NamePolicy= disabled on kernel command line. 
Jul 12 00:25:39.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:39.243860 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 12 00:25:39.396319 systemd-networkd[1365]: lo: Link UP Jul 12 00:25:39.396840 systemd-networkd[1365]: lo: Gained carrier Jul 12 00:25:39.398009 systemd-networkd[1365]: Enumeration completed Jul 12 00:25:39.398356 systemd[1]: Started systemd-networkd.service. Jul 12 00:25:39.398806 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:25:39.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:39.404643 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 12 00:25:39.411738 systemd-networkd[1365]: eth0: Link UP Jul 12 00:25:39.412246 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:25:39.412446 systemd-networkd[1365]: eth0: Gained carrier Jul 12 00:25:39.423477 systemd-networkd[1365]: eth0: DHCPv4 address 172.31.19.35/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 12 00:25:39.599622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 12 00:25:39.610859 systemd[1]: Finished systemd-udev-settle.service. Jul 12 00:25:39.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:39.615287 systemd[1]: Starting lvm2-activation-early.service... Jul 12 00:25:39.644619 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jul 12 00:25:39.682733 systemd[1]: Finished lvm2-activation-early.service. Jul 12 00:25:39.687427 systemd[1]: Reached target cryptsetup.target. Jul 12 00:25:39.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:39.691632 systemd[1]: Starting lvm2-activation.service... Jul 12 00:25:39.700250 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:25:39.739824 systemd[1]: Finished lvm2-activation.service. Jul 12 00:25:39.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:39.741906 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:25:39.743863 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:25:39.743921 systemd[1]: Reached target local-fs.target. Jul 12 00:25:39.745907 systemd[1]: Reached target machines.target. Jul 12 00:25:39.749978 systemd[1]: Starting ldconfig.service... Jul 12 00:25:39.753054 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:25:39.753157 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:25:39.755671 systemd[1]: Starting systemd-boot-update.service... Jul 12 00:25:39.761003 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 12 00:25:39.767865 systemd[1]: Starting systemd-machine-id-commit.service... Jul 12 00:25:39.777618 systemd[1]: Starting systemd-sysext.service... 
Jul 12 00:25:39.781073 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1484 (bootctl) Jul 12 00:25:39.784601 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 12 00:25:39.806948 systemd[1]: Unmounting usr-share-oem.mount... Jul 12 00:25:39.824816 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 12 00:25:39.825260 systemd[1]: Unmounted usr-share-oem.mount. Jul 12 00:25:39.850309 kernel: loop0: detected capacity change from 0 to 207008 Jul 12 00:25:39.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:39.859727 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 12 00:25:39.937364 systemd-fsck[1494]: fsck.fat 4.2 (2021-01-31) Jul 12 00:25:39.937364 systemd-fsck[1494]: /dev/nvme0n1p1: 236 files, 117310/258078 clusters Jul 12 00:25:39.942165 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 12 00:25:39.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:39.947258 systemd[1]: Mounting boot.mount... Jul 12 00:25:39.975606 systemd[1]: Mounted boot.mount. Jul 12 00:25:40.008861 systemd[1]: Finished systemd-boot-update.service. Jul 12 00:25:40.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:40.034986 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:25:40.066837 kernel: loop1: detected capacity change from 0 to 207008 Jul 12 00:25:40.086501 (sd-sysext)[1510]: Using extensions 'kubernetes'. Jul 12 00:25:40.087348 (sd-sysext)[1510]: Merged extensions into '/usr'. Jul 12 00:25:40.142554 systemd[1]: Mounting usr-share-oem.mount... Jul 12 00:25:40.144816 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:25:40.150498 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:25:40.156254 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:25:40.162854 systemd[1]: Starting modprobe@loop.service... Jul 12 00:25:40.166291 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:25:40.166769 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:25:40.177471 systemd[1]: Mounted usr-share-oem.mount. Jul 12 00:25:40.180702 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:25:40.181487 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:25:40.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:40.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:40.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:40.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:40.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:40.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:40.184463 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:25:40.184735 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:25:40.187884 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:25:40.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:40.188159 systemd[1]: Finished modprobe@loop.service. Jul 12 00:25:40.191258 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:25:40.191529 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:25:40.195563 systemd[1]: Finished systemd-sysext.service. Jul 12 00:25:40.202960 systemd[1]: Starting ensure-sysext.service... Jul 12 00:25:40.207462 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 12 00:25:40.227566 systemd[1]: Reloading. Jul 12 00:25:40.267221 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Jul 12 00:25:40.276405 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 00:25:40.291485 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 00:25:40.367130 /usr/lib/systemd/system-generators/torcx-generator[1537]: time="2025-07-12T00:25:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 12 00:25:40.369960 /usr/lib/systemd/system-generators/torcx-generator[1537]: time="2025-07-12T00:25:40Z" level=info msg="torcx already run"
Jul 12 00:25:40.471773 ldconfig[1483]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:25:40.588081 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 12 00:25:40.588358 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 12 00:25:40.630701 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:25:40.767849 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:25:40.773000 audit: BPF prog-id=27 op=LOAD
Jul 12 00:25:40.774000 audit: BPF prog-id=23 op=UNLOAD
Jul 12 00:25:40.776000 audit: BPF prog-id=28 op=LOAD
Jul 12 00:25:40.776000 audit: BPF prog-id=24 op=UNLOAD
Jul 12 00:25:40.776000 audit: BPF prog-id=29 op=LOAD
Jul 12 00:25:40.777000 audit: BPF prog-id=30 op=LOAD
Jul 12 00:25:40.777000 audit: BPF prog-id=25 op=UNLOAD
Jul 12 00:25:40.777000 audit: BPF prog-id=26 op=UNLOAD
Jul 12 00:25:40.778000 audit: BPF prog-id=31 op=LOAD
Jul 12 00:25:40.779000 audit: BPF prog-id=18 op=UNLOAD
Jul 12 00:25:40.779000 audit: BPF prog-id=32 op=LOAD
Jul 12 00:25:40.779000 audit: BPF prog-id=33 op=LOAD
Jul 12 00:25:40.779000 audit: BPF prog-id=19 op=UNLOAD
Jul 12 00:25:40.779000 audit: BPF prog-id=20 op=UNLOAD
Jul 12 00:25:40.780000 audit: BPF prog-id=34 op=LOAD
Jul 12 00:25:40.781000 audit: BPF prog-id=35 op=LOAD
Jul 12 00:25:40.781000 audit: BPF prog-id=21 op=UNLOAD
Jul 12 00:25:40.781000 audit: BPF prog-id=22 op=UNLOAD
Jul 12 00:25:40.793500 systemd[1]: Finished ldconfig.service.
Jul 12 00:25:40.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.796367 systemd[1]: Finished systemd-machine-id-commit.service.
Jul 12 00:25:40.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.804734 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 12 00:25:40.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.814098 systemd[1]: Starting audit-rules.service...
Jul 12 00:25:40.817967 systemd[1]: Starting clean-ca-certificates.service...
Jul 12 00:25:40.829000 audit: BPF prog-id=36 op=LOAD
Jul 12 00:25:40.835000 audit: BPF prog-id=37 op=LOAD
Jul 12 00:25:40.827506 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 12 00:25:40.832947 systemd[1]: Starting systemd-resolved.service...
Jul 12 00:25:40.840543 systemd[1]: Starting systemd-timesyncd.service...
Jul 12 00:25:40.844697 systemd[1]: Starting systemd-update-utmp.service...
Jul 12 00:25:40.862000 audit[1598]: SYSTEM_BOOT pid=1598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.868051 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:25:40.872891 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:25:40.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.877315 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:25:40.881909 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:25:40.886428 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:25:40.886855 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:25:40.894500 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:25:40.894842 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:25:40.895090 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:25:40.898394 systemd[1]: Finished systemd-update-utmp.service.
Jul 12 00:25:40.907603 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:25:40.911658 systemd[1]: Starting modprobe@drm.service...
Jul 12 00:25:40.913513 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:25:40.913826 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:25:40.916982 systemd[1]: Finished ensure-sysext.service.
Jul 12 00:25:40.927089 systemd[1]: Finished clean-ca-certificates.service.
Jul 12 00:25:40.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.929711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:25:40.929977 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:25:40.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.932212 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:25:40.932267 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:25:40.934110 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:25:40.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.934406 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:25:40.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.945803 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:25:40.946084 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:25:40.948583 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:25:40.953600 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:25:40.953891 systemd[1]: Finished modprobe@drm.service.
Jul 12 00:25:40.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.973588 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 12 00:25:40.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:40.980010 systemd[1]: Starting systemd-update-done.service...
Jul 12 00:25:41.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:25:41.003756 systemd[1]: Finished systemd-update-done.service.
Jul 12 00:25:41.032000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 12 00:25:41.032000 audit[1618]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff2fb0350 a2=420 a3=0 items=0 ppid=1593 pid=1618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:25:41.032000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 12 00:25:41.034333 augenrules[1618]: No rules
Jul 12 00:25:41.035858 systemd[1]: Finished audit-rules.service.
Jul 12 00:25:41.058977 systemd[1]: Started systemd-timesyncd.service.
Jul 12 00:25:41.061155 systemd[1]: Reached target time-set.target.
Jul 12 00:25:41.071272 systemd-resolved[1596]: Positive Trust Anchors:
Jul 12 00:25:41.071301 systemd-resolved[1596]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:25:41.071352 systemd-resolved[1596]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 12 00:25:41.103585 systemd-resolved[1596]: Defaulting to hostname 'linux'.
Jul 12 00:25:41.106637 systemd[1]: Started systemd-resolved.service.
Jul 12 00:25:41.108677 systemd[1]: Reached target network.target.
Jul 12 00:25:41.110450 systemd[1]: Reached target nss-lookup.target.
Jul 12 00:25:41.112302 systemd[1]: Reached target sysinit.target.
Jul 12 00:25:41.114241 systemd[1]: Started motdgen.path.
Jul 12 00:25:41.115858 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 12 00:25:41.118630 systemd[1]: Started logrotate.timer.
Jul 12 00:25:41.120420 systemd[1]: Started mdadm.timer.
Jul 12 00:25:41.121980 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 12 00:25:41.123866 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 12 00:25:41.123918 systemd[1]: Reached target paths.target.
Jul 12 00:25:41.125538 systemd[1]: Reached target timers.target.
Jul 12 00:25:41.127987 systemd[1]: Listening on dbus.socket.
Jul 12 00:25:41.131662 systemd[1]: Starting docker.socket...
Jul 12 00:25:41.133429 systemd-networkd[1365]: eth0: Gained IPv6LL
Jul 12 00:25:41.139440 systemd[1]: Listening on sshd.socket.
Jul 12 00:25:41.141363 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:25:41.142669 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 12 00:25:41.145299 systemd[1]: Listening on docker.socket.
Jul 12 00:25:41.147177 systemd[1]: Reached target network-online.target.
Jul 12 00:25:41.149067 systemd[1]: Reached target sockets.target.
Jul 12 00:25:41.150858 systemd[1]: Reached target basic.target.
Jul 12 00:25:41.152646 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 12 00:25:41.152714 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 12 00:25:41.154935 systemd[1]: Started amazon-ssm-agent.service.
Jul 12 00:25:41.161323 systemd[1]: Starting containerd.service...
Jul 12 00:25:41.163472 systemd-timesyncd[1597]: Contacted time server 155.248.196.28:123 (0.flatcar.pool.ntp.org).
Jul 12 00:25:41.163586 systemd-timesyncd[1597]: Initial clock synchronization to Sat 2025-07-12 00:25:40.838972 UTC.
Jul 12 00:25:41.164969 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Jul 12 00:25:41.169865 systemd[1]: Starting dbus.service...
Jul 12 00:25:41.176157 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 12 00:25:41.183271 systemd[1]: Starting extend-filesystems.service...
Jul 12 00:25:41.185120 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 12 00:25:41.188107 systemd[1]: Starting kubelet.service...
Jul 12 00:25:41.206223 systemd[1]: Starting motdgen.service...
Jul 12 00:25:41.217096 systemd[1]: Started nvidia.service.
Jul 12 00:25:41.221331 systemd[1]: Starting prepare-helm.service...
Jul 12 00:25:41.235218 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 12 00:25:41.239367 systemd[1]: Starting sshd-keygen.service...
Jul 12 00:25:41.246987 systemd[1]: Starting systemd-logind.service...
Jul 12 00:25:41.248745 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:25:41.248894 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 12 00:25:41.255164 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 12 00:25:41.260288 systemd[1]: Starting update-engine.service...
Jul 12 00:25:41.283383 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 12 00:25:41.291478 jq[1630]: false
Jul 12 00:25:41.304620 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 12 00:25:41.313869 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 12 00:25:41.343643 jq[1641]: true
Jul 12 00:25:41.352788 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 12 00:25:41.353161 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 12 00:25:41.389006 jq[1654]: true
Jul 12 00:25:41.405098 tar[1650]: linux-arm64/LICENSE
Jul 12 00:25:41.423458 tar[1650]: linux-arm64/helm
Jul 12 00:25:41.463558 systemd[1]: motdgen.service: Deactivated successfully.
Jul 12 00:25:41.463929 systemd[1]: Finished motdgen.service.
Jul 12 00:25:41.488915 dbus-daemon[1629]: [system] SELinux support is enabled
Jul 12 00:25:41.496627 systemd[1]: Started dbus.service.
Jul 12 00:25:41.501768 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 12 00:25:41.501826 systemd[1]: Reached target system-config.target.
Jul 12 00:25:41.504537 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 12 00:25:41.504656 systemd[1]: Reached target user-config.target.
Jul 12 00:25:41.521794 dbus-daemon[1629]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1365 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 12 00:25:41.536347 extend-filesystems[1631]: Found loop1
Jul 12 00:25:41.538632 extend-filesystems[1631]: Found nvme0n1
Jul 12 00:25:41.540536 extend-filesystems[1631]: Found nvme0n1p1
Jul 12 00:25:41.542384 extend-filesystems[1631]: Found nvme0n1p2
Jul 12 00:25:41.545076 dbus-daemon[1629]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 12 00:25:41.553739 extend-filesystems[1631]: Found nvme0n1p3
Jul 12 00:25:41.556226 extend-filesystems[1631]: Found usr
Jul 12 00:25:41.556226 extend-filesystems[1631]: Found nvme0n1p4
Jul 12 00:25:41.556226 extend-filesystems[1631]: Found nvme0n1p6
Jul 12 00:25:41.556226 extend-filesystems[1631]: Found nvme0n1p7
Jul 12 00:25:41.556226 extend-filesystems[1631]: Found nvme0n1p9
Jul 12 00:25:41.556226 extend-filesystems[1631]: Checking size of /dev/nvme0n1p9
Jul 12 00:25:41.574483 systemd[1]: Starting systemd-hostnamed.service...
Jul 12 00:25:41.637117 extend-filesystems[1631]: Resized partition /dev/nvme0n1p9
Jul 12 00:25:41.654363 extend-filesystems[1691]: resize2fs 1.46.5 (30-Dec-2021)
Jul 12 00:25:41.703118 env[1652]: time="2025-07-12T00:25:41.703006608Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 12 00:25:41.707670 bash[1686]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:25:41.709518 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 12 00:25:41.723225 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 12 00:25:41.802982 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 12 00:25:41.820927 update_engine[1639]: I0712 00:25:41.796350 1639 main.cc:92] Flatcar Update Engine starting
Jul 12 00:25:41.824306 extend-filesystems[1691]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 12 00:25:41.824306 extend-filesystems[1691]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 12 00:25:41.824306 extend-filesystems[1691]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 12 00:25:41.848933 extend-filesystems[1631]: Resized filesystem in /dev/nvme0n1p9
Jul 12 00:25:41.830823 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 12 00:25:41.851129 update_engine[1639]: I0712 00:25:41.826364 1639 update_check_scheduler.cc:74] Next update check in 5m12s
Jul 12 00:25:41.831174 systemd[1]: Finished extend-filesystems.service.
Jul 12 00:25:41.851473 systemd[1]: Started update-engine.service.
Jul 12 00:25:41.852225 amazon-ssm-agent[1626]: 2025/07/12 00:25:41 Failed to load instance info from vault. RegistrationKey does not exist.
Jul 12 00:25:41.860339 systemd[1]: Started locksmithd.service.
Jul 12 00:25:41.893791 env[1652]: time="2025-07-12T00:25:41.893717053Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 12 00:25:41.894109 env[1652]: time="2025-07-12T00:25:41.893987749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:25:41.905861 env[1652]: time="2025-07-12T00:25:41.905788645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:25:41.906046 env[1652]: time="2025-07-12T00:25:41.906008809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:25:41.906638 env[1652]: time="2025-07-12T00:25:41.906577897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:25:41.907003 env[1652]: time="2025-07-12T00:25:41.906968257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 12 00:25:41.907161 env[1652]: time="2025-07-12T00:25:41.907128169Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 12 00:25:41.907330 env[1652]: time="2025-07-12T00:25:41.907298053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 12 00:25:41.907630 env[1652]: time="2025-07-12T00:25:41.907597477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:25:41.908050 systemd-logind[1638]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 12 00:25:41.908105 systemd-logind[1638]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jul 12 00:25:41.908562 systemd-logind[1638]: New seat seat0.
Jul 12 00:25:41.912013 systemd[1]: Started systemd-logind.service.
Jul 12 00:25:41.912538 env[1652]: time="2025-07-12T00:25:41.912492745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:25:41.915422 amazon-ssm-agent[1626]: Initializing new seelog logger
Jul 12 00:25:41.917399 env[1652]: time="2025-07-12T00:25:41.917316469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:25:41.924349 env[1652]: time="2025-07-12T00:25:41.924260917Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 12 00:25:41.925342 env[1652]: time="2025-07-12T00:25:41.925289041Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 12 00:25:41.929573 amazon-ssm-agent[1626]: New Seelog Logger Creation Complete
Jul 12 00:25:41.929857 env[1652]: time="2025-07-12T00:25:41.929794633Z" level=info msg="metadata content store policy set" policy=shared
Jul 12 00:25:41.935318 amazon-ssm-agent[1626]: 2025/07/12 00:25:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 12 00:25:41.935517 amazon-ssm-agent[1626]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 12 00:25:41.937778 amazon-ssm-agent[1626]: 2025/07/12 00:25:41 processing appconfig overrides
Jul 12 00:25:41.956735 env[1652]: time="2025-07-12T00:25:41.956656177Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 12 00:25:41.957027 env[1652]: time="2025-07-12T00:25:41.956992657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 12 00:25:41.957178 env[1652]: time="2025-07-12T00:25:41.957144001Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 12 00:25:41.957411 env[1652]: time="2025-07-12T00:25:41.957360697Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 12 00:25:41.957632 env[1652]: time="2025-07-12T00:25:41.957599137Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 12 00:25:41.957798 env[1652]: time="2025-07-12T00:25:41.957765241Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 12 00:25:41.957939 env[1652]: time="2025-07-12T00:25:41.957907525Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 12 00:25:41.958621 env[1652]: time="2025-07-12T00:25:41.958563073Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 12 00:25:41.958857 env[1652]: time="2025-07-12T00:25:41.958824949Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 12 00:25:41.959060 env[1652]: time="2025-07-12T00:25:41.959001865Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 12 00:25:41.959219 env[1652]: time="2025-07-12T00:25:41.959171101Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 12 00:25:41.959353 env[1652]: time="2025-07-12T00:25:41.959322073Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 12 00:25:41.959738 env[1652]: time="2025-07-12T00:25:41.959703193Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 12 00:25:41.960115 env[1652]: time="2025-07-12T00:25:41.960074221Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 12 00:25:41.960978 env[1652]: time="2025-07-12T00:25:41.960931717Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 12 00:25:41.962677 env[1652]: time="2025-07-12T00:25:41.962624965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 12 00:25:41.962999 env[1652]: time="2025-07-12T00:25:41.962950729Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 12 00:25:41.964398 env[1652]: time="2025-07-12T00:25:41.964244641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 12 00:25:41.971326 env[1652]: time="2025-07-12T00:25:41.971265901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 12 00:25:41.974872 env[1652]: time="2025-07-12T00:25:41.974741545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 12 00:25:41.975162 env[1652]: time="2025-07-12T00:25:41.975090709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 12 00:25:41.975624 env[1652]: time="2025-07-12T00:25:41.975574141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 12 00:25:41.975777 env[1652]: time="2025-07-12T00:25:41.975745093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 12 00:25:41.975927 env[1652]: time="2025-07-12T00:25:41.975897049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 12 00:25:41.976083 env[1652]: time="2025-07-12T00:25:41.976050073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 12 00:25:41.976290 env[1652]: time="2025-07-12T00:25:41.976258141Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 12 00:25:41.976864 env[1652]: time="2025-07-12T00:25:41.976802209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 12 00:25:41.977459 env[1652]: time="2025-07-12T00:25:41.977394577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 12 00:25:41.977672 env[1652]: time="2025-07-12T00:25:41.977640901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 12 00:25:41.977822 env[1652]: time="2025-07-12T00:25:41.977791741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 12 00:25:41.980341 env[1652]: time="2025-07-12T00:25:41.980253913Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 12 00:25:41.981298 env[1652]: time="2025-07-12T00:25:41.981259057Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 12 00:25:41.984341 env[1652]: time="2025-07-12T00:25:41.984266797Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 12 00:25:41.985425 env[1652]: time="2025-07-12T00:25:41.985367833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 12 00:25:41.986934 env[1652]: time="2025-07-12T00:25:41.986766373Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 12 00:25:41.994794 env[1652]: time="2025-07-12T00:25:41.994740793Z" level=info msg="Connect containerd service"
Jul 12 00:25:41.995133 env[1652]: time="2025-07-12T00:25:41.995089141Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 12 00:25:41.997119 env[1652]: time="2025-07-12T00:25:41.997040113Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:25:42.012139 env[1652]: time="2025-07-12T00:25:42.012065951Z" level=info msg="Start subscribing containerd event"
Jul 12 00:25:42.012300 env[1652]: time="2025-07-12T00:25:42.012151151Z" level=info msg="Start recovering state"
Jul 12 00:25:42.012300 env[1652]: time="2025-07-12T00:25:42.012291605Z" level=info msg="Start event monitor"
Jul 12 00:25:42.012427 env[1652]: time="2025-07-12T00:25:42.012327516Z" level=info msg="Start snapshots syncer"
Jul 12 00:25:42.012427 env[1652]: time="2025-07-12T00:25:42.012351429Z" level=info msg="Start cni network conf syncer for default"
Jul 12 00:25:42.012427 env[1652]: time="2025-07-12T00:25:42.012370611Z" level=info msg="Start streaming server"
Jul 12 00:25:42.013170 env[1652]: time="2025-07-12T00:25:42.013107212Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 12 00:25:42.014214 env[1652]: time="2025-07-12T00:25:42.014161827Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 12 00:25:42.026773 systemd[1]: Started containerd.service.
Jul 12 00:25:42.039392 systemd[1]: nvidia.service: Deactivated successfully.
Jul 12 00:25:42.060240 env[1652]: time="2025-07-12T00:25:42.058000127Z" level=info msg="containerd successfully booted in 0.357445s" Jul 12 00:25:42.234802 dbus-daemon[1629]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 12 00:25:42.235044 systemd[1]: Started systemd-hostnamed.service. Jul 12 00:25:42.236922 dbus-daemon[1629]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1685 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 12 00:25:42.243178 systemd[1]: Starting polkit.service... Jul 12 00:25:42.288899 polkitd[1781]: Started polkitd version 121 Jul 12 00:25:42.331866 polkitd[1781]: Loading rules from directory /etc/polkit-1/rules.d Jul 12 00:25:42.336873 polkitd[1781]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 12 00:25:42.343351 polkitd[1781]: Finished loading, compiling and executing 2 rules Jul 12 00:25:42.344296 dbus-daemon[1629]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 12 00:25:42.344553 systemd[1]: Started polkit.service. Jul 12 00:25:42.349297 polkitd[1781]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 12 00:25:42.396759 systemd-resolved[1596]: System hostname changed to 'ip-172-31-19-35'. 
Jul 12 00:25:42.396765 systemd-hostnamed[1685]: Hostname set to (transient) Jul 12 00:25:42.564560 coreos-metadata[1628]: Jul 12 00:25:42.561 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 12 00:25:42.566827 coreos-metadata[1628]: Jul 12 00:25:42.566 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Jul 12 00:25:42.569153 coreos-metadata[1628]: Jul 12 00:25:42.568 INFO Fetch successful Jul 12 00:25:42.569483 coreos-metadata[1628]: Jul 12 00:25:42.569 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 12 00:25:42.572581 coreos-metadata[1628]: Jul 12 00:25:42.572 INFO Fetch successful Jul 12 00:25:42.579795 unknown[1628]: wrote ssh authorized keys file for user: core Jul 12 00:25:42.608650 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO Create new startup processor Jul 12 00:25:42.614807 update-ssh-keys[1803]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:25:42.615287 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [LongRunningPluginsManager] registered plugins: {} Jul 12 00:25:42.615287 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO Initializing bookkeeping folders Jul 12 00:25:42.615396 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO removing the completed state files Jul 12 00:25:42.615396 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO Initializing bookkeeping folders for long running plugins Jul 12 00:25:42.615396 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Jul 12 00:25:42.615396 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO Initializing healthcheck folders for long running plugins Jul 12 00:25:42.615598 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO Initializing locations for inventory plugin Jul 12 00:25:42.615598 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO Initializing default location for custom inventory Jul 12 00:25:42.615598 
amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO Initializing default location for file inventory Jul 12 00:25:42.615598 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO Initializing default location for role inventory Jul 12 00:25:42.615598 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO Init the cloudwatchlogs publisher Jul 12 00:25:42.615598 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [instanceID=i-05f7c3f271a417794] Successfully loaded platform independent plugin aws:softwareInventory Jul 12 00:25:42.615598 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [instanceID=i-05f7c3f271a417794] Successfully loaded platform independent plugin aws:runPowerShellScript Jul 12 00:25:42.615598 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [instanceID=i-05f7c3f271a417794] Successfully loaded platform independent plugin aws:configureDocker Jul 12 00:25:42.616029 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [instanceID=i-05f7c3f271a417794] Successfully loaded platform independent plugin aws:runDockerAction Jul 12 00:25:42.616029 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [instanceID=i-05f7c3f271a417794] Successfully loaded platform independent plugin aws:downloadContent Jul 12 00:25:42.616029 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [instanceID=i-05f7c3f271a417794] Successfully loaded platform independent plugin aws:runDocument Jul 12 00:25:42.616029 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [instanceID=i-05f7c3f271a417794] Successfully loaded platform independent plugin aws:updateSsmAgent Jul 12 00:25:42.616029 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [instanceID=i-05f7c3f271a417794] Successfully loaded platform independent plugin aws:refreshAssociation Jul 12 00:25:42.616029 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [instanceID=i-05f7c3f271a417794] Successfully loaded platform independent plugin aws:configurePackage Jul 12 00:25:42.616029 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [instanceID=i-05f7c3f271a417794] 
Successfully loaded platform dependent plugin aws:runShellScript Jul 12 00:25:42.616029 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Jul 12 00:25:42.616029 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO OS: linux, Arch: arm64 Jul 12 00:25:42.617099 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 12 00:25:42.635346 amazon-ssm-agent[1626]: datastore file /var/lib/amazon/ssm/i-05f7c3f271a417794/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Jul 12 00:25:42.715710 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessagingDeliveryService] Starting document processing engine... Jul 12 00:25:42.812108 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessagingDeliveryService] [EngineProcessor] Starting Jul 12 00:25:42.907042 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Jul 12 00:25:43.001533 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessagingDeliveryService] Starting message polling Jul 12 00:25:43.096424 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessagingDeliveryService] Starting send replies to MDS Jul 12 00:25:43.191420 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [instanceID=i-05f7c3f271a417794] Starting association polling Jul 12 00:25:43.230873 tar[1650]: linux-arm64/README.md Jul 12 00:25:43.252186 systemd[1]: Finished prepare-helm.service. 
Jul 12 00:25:43.286523 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Jul 12 00:25:43.382369 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessagingDeliveryService] [Association] Launching response handler Jul 12 00:25:43.433275 locksmithd[1710]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:25:43.477985 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Jul 12 00:25:43.573609 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Jul 12 00:25:43.669521 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Jul 12 00:25:43.765696 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [HealthCheck] HealthCheck reporting agent health. Jul 12 00:25:43.861918 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [OfflineService] Starting document processing engine... Jul 12 00:25:43.953039 systemd[1]: Started kubelet.service. Jul 12 00:25:43.958349 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [OfflineService] [EngineProcessor] Starting Jul 12 00:25:44.055168 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [OfflineService] [EngineProcessor] Initial processing Jul 12 00:25:44.151983 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [OfflineService] Starting message polling Jul 12 00:25:44.249074 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [OfflineService] Starting send replies to MDS Jul 12 00:25:44.346428 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessageGatewayService] Starting session document processing engine... 
Jul 12 00:25:44.443865 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessageGatewayService] [EngineProcessor] Starting Jul 12 00:25:44.541632 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Jul 12 00:25:44.639582 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-05f7c3f271a417794, requestId: 76fee92c-dbd1-4396-9fbf-97358a2ba7b9 Jul 12 00:25:44.737589 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [LongRunningPluginsManager] starting long running plugin manager Jul 12 00:25:44.835862 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Jul 12 00:25:44.923444 sshd_keygen[1666]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:25:44.934343 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessageGatewayService] listening reply. Jul 12 00:25:44.965096 systemd[1]: Finished sshd-keygen.service. Jul 12 00:25:44.970264 systemd[1]: Starting issuegen.service... Jul 12 00:25:44.981375 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:25:44.981712 systemd[1]: Finished issuegen.service. Jul 12 00:25:44.987460 systemd[1]: Starting systemd-user-sessions.service... Jul 12 00:25:45.002671 systemd[1]: Finished systemd-user-sessions.service. Jul 12 00:25:45.007791 systemd[1]: Started getty@tty1.service. Jul 12 00:25:45.012356 systemd[1]: Started serial-getty@ttyS0.service. Jul 12 00:25:45.014781 systemd[1]: Reached target getty.target. Jul 12 00:25:45.016934 systemd[1]: Reached target multi-user.target. Jul 12 00:25:45.021751 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Jul 12 00:25:45.032994 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Jul 12 00:25:45.038681 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 12 00:25:45.039072 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 12 00:25:45.041492 systemd[1]: Startup finished in 1.203s (kernel) + 8.192s (initrd) + 11.246s (userspace) = 20.643s. Jul 12 00:25:45.131938 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [StartupProcessor] Executing startup processor tasks Jul 12 00:25:45.191076 kubelet[1826]: E0712 00:25:45.190916 1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:25:45.194600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:25:45.194911 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:25:45.195425 systemd[1]: kubelet.service: Consumed 1.563s CPU time. 
Jul 12 00:25:45.230940 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Jul 12 00:25:45.330090 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Jul 12 00:25:45.429508 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7 Jul 12 00:25:45.529140 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-05f7c3f271a417794?role=subscribe&stream=input Jul 12 00:25:45.628858 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-05f7c3f271a417794?role=subscribe&stream=input Jul 12 00:25:45.728818 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessageGatewayService] Starting receiving message from control channel Jul 12 00:25:45.829069 amazon-ssm-agent[1626]: 2025-07-12 00:25:42 INFO [MessageGatewayService] [EngineProcessor] Initial processing Jul 12 00:25:50.348487 systemd[1]: Created slice system-sshd.slice. Jul 12 00:25:50.351022 systemd[1]: Started sshd@0-172.31.19.35:22-147.75.109.163:47144.service. Jul 12 00:25:50.540494 sshd[1847]: Accepted publickey for core from 147.75.109.163 port 47144 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:25:50.546529 sshd[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:50.566792 systemd[1]: Created slice user-500.slice. Jul 12 00:25:50.569282 systemd[1]: Starting user-runtime-dir@500.service... Jul 12 00:25:50.579922 systemd-logind[1638]: New session 1 of user core. Jul 12 00:25:50.588467 systemd[1]: Finished user-runtime-dir@500.service. Jul 12 00:25:50.591457 systemd[1]: Starting user@500.service... 
Jul 12 00:25:50.599529 (systemd)[1850]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:50.779520 systemd[1850]: Queued start job for default target default.target. Jul 12 00:25:50.781049 systemd[1850]: Reached target paths.target. Jul 12 00:25:50.781333 systemd[1850]: Reached target sockets.target. Jul 12 00:25:50.781508 systemd[1850]: Reached target timers.target. Jul 12 00:25:50.781676 systemd[1850]: Reached target basic.target. Jul 12 00:25:50.781921 systemd[1850]: Reached target default.target. Jul 12 00:25:50.782003 systemd[1]: Started user@500.service. Jul 12 00:25:50.782301 systemd[1850]: Startup finished in 170ms. Jul 12 00:25:50.785309 systemd[1]: Started session-1.scope. Jul 12 00:25:50.928736 systemd[1]: Started sshd@1-172.31.19.35:22-147.75.109.163:47160.service. Jul 12 00:25:51.101857 sshd[1859]: Accepted publickey for core from 147.75.109.163 port 47160 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:25:51.104339 sshd[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:51.111547 systemd-logind[1638]: New session 2 of user core. Jul 12 00:25:51.113470 systemd[1]: Started session-2.scope. Jul 12 00:25:51.242723 sshd[1859]: pam_unix(sshd:session): session closed for user core Jul 12 00:25:51.247422 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:25:51.248582 systemd-logind[1638]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:25:51.248948 systemd[1]: sshd@1-172.31.19.35:22-147.75.109.163:47160.service: Deactivated successfully. Jul 12 00:25:51.250961 systemd-logind[1638]: Removed session 2. Jul 12 00:25:51.273812 systemd[1]: Started sshd@2-172.31.19.35:22-147.75.109.163:47162.service. 
Jul 12 00:25:51.444400 sshd[1865]: Accepted publickey for core from 147.75.109.163 port 47162 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:25:51.447328 sshd[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:51.455385 systemd-logind[1638]: New session 3 of user core. Jul 12 00:25:51.455741 systemd[1]: Started session-3.scope. Jul 12 00:25:51.577754 sshd[1865]: pam_unix(sshd:session): session closed for user core Jul 12 00:25:51.582208 systemd[1]: sshd@2-172.31.19.35:22-147.75.109.163:47162.service: Deactivated successfully. Jul 12 00:25:51.583470 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:25:51.584847 systemd-logind[1638]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:25:51.586746 systemd-logind[1638]: Removed session 3. Jul 12 00:25:51.603468 systemd[1]: Started sshd@3-172.31.19.35:22-147.75.109.163:47164.service. Jul 12 00:25:51.770156 sshd[1871]: Accepted publickey for core from 147.75.109.163 port 47164 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:25:51.773117 sshd[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:51.780659 systemd-logind[1638]: New session 4 of user core. Jul 12 00:25:51.781663 systemd[1]: Started session-4.scope. Jul 12 00:25:51.909093 sshd[1871]: pam_unix(sshd:session): session closed for user core Jul 12 00:25:51.914250 systemd-logind[1638]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:25:51.914801 systemd[1]: sshd@3-172.31.19.35:22-147.75.109.163:47164.service: Deactivated successfully. Jul 12 00:25:51.916000 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:25:51.917504 systemd-logind[1638]: Removed session 4. Jul 12 00:25:51.938606 systemd[1]: Started sshd@4-172.31.19.35:22-147.75.109.163:47176.service. 
Jul 12 00:25:52.108656 sshd[1877]: Accepted publickey for core from 147.75.109.163 port 47176 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:25:52.111075 sshd[1877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:52.119171 systemd-logind[1638]: New session 5 of user core. Jul 12 00:25:52.120100 systemd[1]: Started session-5.scope. Jul 12 00:25:52.242490 sudo[1880]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:25:52.243040 sudo[1880]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 12 00:25:52.322548 systemd[1]: Starting docker.service... Jul 12 00:25:52.438317 env[1890]: time="2025-07-12T00:25:52.438227757Z" level=info msg="Starting up" Jul 12 00:25:52.441118 env[1890]: time="2025-07-12T00:25:52.441057097Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 12 00:25:52.441118 env[1890]: time="2025-07-12T00:25:52.441102424Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 12 00:25:52.441347 env[1890]: time="2025-07-12T00:25:52.441152940Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 12 00:25:52.441347 env[1890]: time="2025-07-12T00:25:52.441177562Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 12 00:25:52.444733 env[1890]: time="2025-07-12T00:25:52.444686730Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 12 00:25:52.444908 env[1890]: time="2025-07-12T00:25:52.444879674Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 12 00:25:52.445030 env[1890]: time="2025-07-12T00:25:52.445000317Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 12 00:25:52.445153 env[1890]: time="2025-07-12T00:25:52.445126540Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 12 00:25:52.463759 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1829471750-merged.mount: Deactivated successfully. Jul 12 00:25:52.519251 env[1890]: time="2025-07-12T00:25:52.517983100Z" level=info msg="Loading containers: start." Jul 12 00:25:52.720243 kernel: Initializing XFRM netlink socket Jul 12 00:25:52.764668 env[1890]: time="2025-07-12T00:25:52.764607370Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 12 00:25:52.768467 (udev-worker)[1900]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:25:52.870013 systemd-networkd[1365]: docker0: Link UP Jul 12 00:25:52.898533 env[1890]: time="2025-07-12T00:25:52.898487728Z" level=info msg="Loading containers: done." Jul 12 00:25:52.939429 env[1890]: time="2025-07-12T00:25:52.939318218Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:25:52.939855 env[1890]: time="2025-07-12T00:25:52.939782497Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 12 00:25:52.940108 env[1890]: time="2025-07-12T00:25:52.940043516Z" level=info msg="Daemon has completed initialization" Jul 12 00:25:52.973980 systemd[1]: Started docker.service. Jul 12 00:25:52.990627 env[1890]: time="2025-07-12T00:25:52.990309508Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:25:54.330568 env[1652]: time="2025-07-12T00:25:54.330512751Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 12 00:25:55.022826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2252652141.mount: Deactivated successfully. 
Jul 12 00:25:55.139702 amazon-ssm-agent[1626]: 2025-07-12 00:25:55 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Jul 12 00:25:55.209888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:25:55.210287 systemd[1]: Stopped kubelet.service. Jul 12 00:25:55.210372 systemd[1]: kubelet.service: Consumed 1.563s CPU time. Jul 12 00:25:55.212900 systemd[1]: Starting kubelet.service... Jul 12 00:25:55.635397 systemd[1]: Started kubelet.service. Jul 12 00:25:55.717157 kubelet[2016]: E0712 00:25:55.717060 2016 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:25:55.726925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:25:55.727270 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 12 00:25:57.050224 env[1652]: time="2025-07-12T00:25:57.050137127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:57.056015 env[1652]: time="2025-07-12T00:25:57.055963922Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:57.059792 env[1652]: time="2025-07-12T00:25:57.059744261Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:57.063554 env[1652]: time="2025-07-12T00:25:57.063487103Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:57.065246 env[1652]: time="2025-07-12T00:25:57.065181424Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 12 00:25:57.066511 env[1652]: time="2025-07-12T00:25:57.066465322Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 12 00:25:59.008822 env[1652]: time="2025-07-12T00:25:59.008746274Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:59.012694 env[1652]: time="2025-07-12T00:25:59.012631995Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 12 00:25:59.016525 env[1652]: time="2025-07-12T00:25:59.016478056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:59.020181 env[1652]: time="2025-07-12T00:25:59.020115247Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:59.021940 env[1652]: time="2025-07-12T00:25:59.021861451Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 12 00:25:59.022812 env[1652]: time="2025-07-12T00:25:59.022762498Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 12 00:26:00.674053 env[1652]: time="2025-07-12T00:26:00.673971829Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:00.678808 env[1652]: time="2025-07-12T00:26:00.678743120Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:00.682850 env[1652]: time="2025-07-12T00:26:00.682786205Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:00.686779 env[1652]: time="2025-07-12T00:26:00.686730163Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:00.688293 env[1652]: time="2025-07-12T00:26:00.688200288Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 12 00:26:00.689175 env[1652]: time="2025-07-12T00:26:00.689128996Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 12 00:26:02.042118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426291750.mount: Deactivated successfully. Jul 12 00:26:02.934707 env[1652]: time="2025-07-12T00:26:02.934639730Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:02.937224 env[1652]: time="2025-07-12T00:26:02.937114224Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:02.939630 env[1652]: time="2025-07-12T00:26:02.939577721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:02.942336 env[1652]: time="2025-07-12T00:26:02.942288837Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:02.943746 env[1652]: time="2025-07-12T00:26:02.943681144Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference 
\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 12 00:26:02.944527 env[1652]: time="2025-07-12T00:26:02.944477862Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:26:03.507000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4104073966.mount: Deactivated successfully. Jul 12 00:26:04.999386 env[1652]: time="2025-07-12T00:26:04.999303403Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:05.005256 env[1652]: time="2025-07-12T00:26:05.004807688Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:05.009342 env[1652]: time="2025-07-12T00:26:05.009291516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:05.013110 env[1652]: time="2025-07-12T00:26:05.013046745Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:05.014965 env[1652]: time="2025-07-12T00:26:05.014900649Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:26:05.016491 env[1652]: time="2025-07-12T00:26:05.016443931Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:26:05.561007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052002220.mount: Deactivated successfully. 
Jul 12 00:26:05.576361 env[1652]: time="2025-07-12T00:26:05.576302141Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:05.580779 env[1652]: time="2025-07-12T00:26:05.580732011Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:05.585014 env[1652]: time="2025-07-12T00:26:05.584958588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:05.587773 env[1652]: time="2025-07-12T00:26:05.587702848Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:05.588970 env[1652]: time="2025-07-12T00:26:05.588903288Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:26:05.589952 env[1652]: time="2025-07-12T00:26:05.589906878Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 12 00:26:05.959861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:26:05.960229 systemd[1]: Stopped kubelet.service. Jul 12 00:26:05.962770 systemd[1]: Starting kubelet.service... Jul 12 00:26:06.196686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount295730992.mount: Deactivated successfully. Jul 12 00:26:06.362051 systemd[1]: Started kubelet.service. 
Jul 12 00:26:06.484789 kubelet[2026]: E0712 00:26:06.484696 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:26:06.494169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:26:06.494521 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:26:09.224368 env[1652]: time="2025-07-12T00:26:09.224283952Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:09.228998 env[1652]: time="2025-07-12T00:26:09.228933959Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:09.233240 env[1652]: time="2025-07-12T00:26:09.233159035Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:09.237516 env[1652]: time="2025-07-12T00:26:09.237461725Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:09.239186 env[1652]: time="2025-07-12T00:26:09.239120947Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 12 00:26:12.431496 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jul 12 00:26:16.044502 systemd[1]: Stopped kubelet.service. Jul 12 00:26:16.050638 systemd[1]: Starting kubelet.service... Jul 12 00:26:16.109444 systemd[1]: Reloading. Jul 12 00:26:16.253863 /usr/lib/systemd/system-generators/torcx-generator[2080]: time="2025-07-12T00:26:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:26:16.288385 /usr/lib/systemd/system-generators/torcx-generator[2080]: time="2025-07-12T00:26:16Z" level=info msg="torcx already run" Jul 12 00:26:16.479136 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:26:16.479176 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:26:16.518142 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:26:16.748419 systemd[1]: Started kubelet.service. Jul 12 00:26:16.762096 systemd[1]: Stopping kubelet.service... Jul 12 00:26:16.764470 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:26:16.764871 systemd[1]: Stopped kubelet.service. Jul 12 00:26:16.769232 systemd[1]: Starting kubelet.service... Jul 12 00:26:17.078103 systemd[1]: Started kubelet.service. Jul 12 00:26:17.157372 kubelet[2146]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:26:17.157972 kubelet[2146]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:26:17.158072 kubelet[2146]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:26:17.158379 kubelet[2146]: I0712 00:26:17.158323 2146 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:26:18.541797 kubelet[2146]: I0712 00:26:18.541721 2146 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:26:18.541797 kubelet[2146]: I0712 00:26:18.541777 2146 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:26:18.542498 kubelet[2146]: I0712 00:26:18.542298 2146 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:26:18.638835 kubelet[2146]: E0712 00:26:18.638758 2146 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.35:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:18.647818 kubelet[2146]: I0712 00:26:18.647765 2146 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:26:18.666282 kubelet[2146]: E0712 00:26:18.666090 2146 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:26:18.666282 kubelet[2146]: I0712 00:26:18.666269 2146 server.go:1421] 
"CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:26:18.675819 kubelet[2146]: I0712 00:26:18.675780 2146 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 00:26:18.676667 kubelet[2146]: I0712 00:26:18.676618 2146 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:26:18.677090 kubelet[2146]: I0712 00:26:18.676815 2146 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-35","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experi
mentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:26:18.677486 kubelet[2146]: I0712 00:26:18.677461 2146 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:26:18.678120 kubelet[2146]: I0712 00:26:18.678096 2146 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 00:26:18.678614 kubelet[2146]: I0712 00:26:18.678589 2146 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:26:18.685589 kubelet[2146]: I0712 00:26:18.685546 2146 kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:26:18.685822 kubelet[2146]: I0712 00:26:18.685798 2146 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:26:18.685953 kubelet[2146]: I0712 00:26:18.685933 2146 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:26:18.686083 kubelet[2146]: I0712 00:26:18.686062 2146 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:26:18.688676 kubelet[2146]: W0712 00:26:18.688595 2146 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-35&limit=500&resourceVersion=0": dial tcp 172.31.19.35:6443: connect: connection refused Jul 12 00:26:18.688977 kubelet[2146]: E0712 00:26:18.688937 2146 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-35&limit=500&resourceVersion=0\": dial tcp 172.31.19.35:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:18.693423 kubelet[2146]: W0712 00:26:18.693356 2146 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://172.31.19.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.35:6443: connect: connection refused Jul 12 00:26:18.693686 kubelet[2146]: E0712 00:26:18.693649 2146 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.35:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:18.694123 kubelet[2146]: I0712 00:26:18.694092 2146 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:26:18.695375 kubelet[2146]: I0712 00:26:18.695344 2146 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:26:18.695755 kubelet[2146]: W0712 00:26:18.695733 2146 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:26:18.698270 kubelet[2146]: I0712 00:26:18.698232 2146 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:26:18.698481 kubelet[2146]: I0712 00:26:18.698460 2146 server.go:1287] "Started kubelet" Jul 12 00:26:18.705388 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 12 00:26:18.705636 kubelet[2146]: I0712 00:26:18.705582 2146 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:26:18.715881 kubelet[2146]: I0712 00:26:18.715800 2146 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:26:18.717381 kubelet[2146]: I0712 00:26:18.717344 2146 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:26:18.729230 kubelet[2146]: I0712 00:26:18.729075 2146 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:26:18.733839 kubelet[2146]: I0712 00:26:18.733358 2146 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:26:18.733839 kubelet[2146]: I0712 00:26:18.729484 2146 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:26:18.733839 kubelet[2146]: E0712 00:26:18.730169 2146 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-35\" not found" Jul 12 00:26:18.733839 kubelet[2146]: I0712 00:26:18.731054 2146 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:26:18.734213 kubelet[2146]: I0712 00:26:18.733859 2146 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:26:18.734776 kubelet[2146]: I0712 00:26:18.729452 2146 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:26:18.736468 kubelet[2146]: W0712 00:26:18.735024 2146 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.35:6443: connect: connection refused Jul 12 00:26:18.736648 kubelet[2146]: E0712 00:26:18.736513 2146 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed 
to list *v1.CSIDriver: Get \"https://172.31.19.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.35:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:18.736648 kubelet[2146]: E0712 00:26:18.735629 2146 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:26:18.736648 kubelet[2146]: E0712 00:26:18.735227 2146 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.35:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-35.1851595b4d594b4a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-35,UID:ip-172-31-19-35,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-35,},FirstTimestamp:2025-07-12 00:26:18.69842721 +0000 UTC m=+1.609532248,LastTimestamp:2025-07-12 00:26:18.69842721 +0000 UTC m=+1.609532248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-35,}" Jul 12 00:26:18.737087 kubelet[2146]: E0712 00:26:18.735246 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-35?timeout=10s\": dial tcp 172.31.19.35:6443: connect: connection refused" interval="200ms" Jul 12 00:26:18.739977 kubelet[2146]: I0712 00:26:18.739942 2146 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:26:18.740188 kubelet[2146]: I0712 00:26:18.740165 2146 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:26:18.740498 kubelet[2146]: I0712 
00:26:18.740454 2146 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:26:18.770015 kubelet[2146]: I0712 00:26:18.769979 2146 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:26:18.770243 kubelet[2146]: I0712 00:26:18.770219 2146 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:26:18.770363 kubelet[2146]: I0712 00:26:18.770343 2146 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:26:18.781612 kubelet[2146]: I0712 00:26:18.781554 2146 policy_none.go:49] "None policy: Start" Jul 12 00:26:18.781612 kubelet[2146]: I0712 00:26:18.781609 2146 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:26:18.781817 kubelet[2146]: I0712 00:26:18.781636 2146 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:26:18.784338 kubelet[2146]: I0712 00:26:18.784172 2146 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:26:18.790678 kubelet[2146]: I0712 00:26:18.790585 2146 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:26:18.790871 kubelet[2146]: I0712 00:26:18.790848 2146 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:26:18.791001 kubelet[2146]: I0712 00:26:18.790978 2146 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 12 00:26:18.791125 kubelet[2146]: I0712 00:26:18.791105 2146 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:26:18.791372 kubelet[2146]: E0712 00:26:18.791328 2146 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:26:18.800119 systemd[1]: Created slice kubepods.slice. 
Jul 12 00:26:18.806352 kubelet[2146]: W0712 00:26:18.806310 2146 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.35:6443: connect: connection refused Jul 12 00:26:18.806752 kubelet[2146]: E0712 00:26:18.806713 2146 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.35:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:18.813525 systemd[1]: Created slice kubepods-burstable.slice. Jul 12 00:26:18.821673 systemd[1]: Created slice kubepods-besteffort.slice. Jul 12 00:26:18.830297 kubelet[2146]: I0712 00:26:18.830253 2146 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:26:18.830550 kubelet[2146]: I0712 00:26:18.830515 2146 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:26:18.830645 kubelet[2146]: I0712 00:26:18.830550 2146 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:26:18.842293 kubelet[2146]: E0712 00:26:18.842152 2146 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 00:26:18.842452 kubelet[2146]: E0712 00:26:18.842356 2146 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-35\" not found" Jul 12 00:26:18.843811 kubelet[2146]: I0712 00:26:18.843760 2146 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:26:18.905947 systemd[1]: Created slice kubepods-burstable-podc1dc35bc07a5b06dc1e77ba0c77a070e.slice. Jul 12 00:26:18.915027 kubelet[2146]: E0712 00:26:18.914758 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-35\" not found" node="ip-172-31-19-35" Jul 12 00:26:18.921137 systemd[1]: Created slice kubepods-burstable-poda4c32a8a3432dd925ba71cff3b8f4092.slice. Jul 12 00:26:18.925939 kubelet[2146]: E0712 00:26:18.925902 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-35\" not found" node="ip-172-31-19-35" Jul 12 00:26:18.928600 systemd[1]: Created slice kubepods-burstable-pode62dd414dc9e0874fba5e3428a4a17ed.slice. 
Jul 12 00:26:18.933519 kubelet[2146]: I0712 00:26:18.933483 2146 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-35" Jul 12 00:26:18.933939 kubelet[2146]: E0712 00:26:18.933826 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-35\" not found" node="ip-172-31-19-35" Jul 12 00:26:18.934688 kubelet[2146]: E0712 00:26:18.934631 2146 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.35:6443/api/v1/nodes\": dial tcp 172.31.19.35:6443: connect: connection refused" node="ip-172-31-19-35" Jul 12 00:26:18.938359 kubelet[2146]: E0712 00:26:18.938316 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-35?timeout=10s\": dial tcp 172.31.19.35:6443: connect: connection refused" interval="400ms" Jul 12 00:26:18.941525 kubelet[2146]: I0712 00:26:18.941482 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4c32a8a3432dd925ba71cff3b8f4092-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-35\" (UID: \"a4c32a8a3432dd925ba71cff3b8f4092\") " pod="kube-system/kube-apiserver-ip-172-31-19-35" Jul 12 00:26:18.941648 kubelet[2146]: I0712 00:26:18.941611 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1dc35bc07a5b06dc1e77ba0c77a070e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-35\" (UID: \"c1dc35bc07a5b06dc1e77ba0c77a070e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:18.941752 kubelet[2146]: I0712 00:26:18.941719 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/c1dc35bc07a5b06dc1e77ba0c77a070e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-35\" (UID: \"c1dc35bc07a5b06dc1e77ba0c77a070e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:18.941821 kubelet[2146]: I0712 00:26:18.941799 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1dc35bc07a5b06dc1e77ba0c77a070e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-35\" (UID: \"c1dc35bc07a5b06dc1e77ba0c77a070e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:18.941913 kubelet[2146]: I0712 00:26:18.941879 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1dc35bc07a5b06dc1e77ba0c77a070e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-35\" (UID: \"c1dc35bc07a5b06dc1e77ba0c77a070e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:18.942010 kubelet[2146]: I0712 00:26:18.941970 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4c32a8a3432dd925ba71cff3b8f4092-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-35\" (UID: \"a4c32a8a3432dd925ba71cff3b8f4092\") " pod="kube-system/kube-apiserver-ip-172-31-19-35" Jul 12 00:26:18.942090 kubelet[2146]: I0712 00:26:18.942048 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1dc35bc07a5b06dc1e77ba0c77a070e-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-35\" (UID: \"c1dc35bc07a5b06dc1e77ba0c77a070e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:18.942151 kubelet[2146]: I0712 00:26:18.942089 2146 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e62dd414dc9e0874fba5e3428a4a17ed-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-35\" (UID: \"e62dd414dc9e0874fba5e3428a4a17ed\") " pod="kube-system/kube-scheduler-ip-172-31-19-35" Jul 12 00:26:18.942271 kubelet[2146]: I0712 00:26:18.942169 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a4c32a8a3432dd925ba71cff3b8f4092-ca-certs\") pod \"kube-apiserver-ip-172-31-19-35\" (UID: \"a4c32a8a3432dd925ba71cff3b8f4092\") " pod="kube-system/kube-apiserver-ip-172-31-19-35" Jul 12 00:26:19.137395 kubelet[2146]: I0712 00:26:19.137267 2146 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-35" Jul 12 00:26:19.138807 kubelet[2146]: E0712 00:26:19.138730 2146 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.35:6443/api/v1/nodes\": dial tcp 172.31.19.35:6443: connect: connection refused" node="ip-172-31-19-35" Jul 12 00:26:19.217430 env[1652]: time="2025-07-12T00:26:19.217341237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-35,Uid:c1dc35bc07a5b06dc1e77ba0c77a070e,Namespace:kube-system,Attempt:0,}" Jul 12 00:26:19.229101 env[1652]: time="2025-07-12T00:26:19.228689643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-35,Uid:a4c32a8a3432dd925ba71cff3b8f4092,Namespace:kube-system,Attempt:0,}" Jul 12 00:26:19.236051 env[1652]: time="2025-07-12T00:26:19.235561574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-35,Uid:e62dd414dc9e0874fba5e3428a4a17ed,Namespace:kube-system,Attempt:0,}" Jul 12 00:26:19.339931 kubelet[2146]: E0712 00:26:19.339860 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.19.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-35?timeout=10s\": dial tcp 172.31.19.35:6443: connect: connection refused" interval="800ms" Jul 12 00:26:19.541166 kubelet[2146]: I0712 00:26:19.541124 2146 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-35" Jul 12 00:26:19.541691 kubelet[2146]: E0712 00:26:19.541645 2146 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.35:6443/api/v1/nodes\": dial tcp 172.31.19.35:6443: connect: connection refused" node="ip-172-31-19-35" Jul 12 00:26:19.734410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469513565.mount: Deactivated successfully. Jul 12 00:26:19.750690 env[1652]: time="2025-07-12T00:26:19.750618760Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:19.753082 env[1652]: time="2025-07-12T00:26:19.753020494Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:19.759451 env[1652]: time="2025-07-12T00:26:19.759400806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:19.762459 env[1652]: time="2025-07-12T00:26:19.762376472Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:19.764356 env[1652]: time="2025-07-12T00:26:19.764293237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 
00:26:19.765505 kubelet[2146]: W0712 00:26:19.765351 2146 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-35&limit=500&resourceVersion=0": dial tcp 172.31.19.35:6443: connect: connection refused Jul 12 00:26:19.765505 kubelet[2146]: E0712 00:26:19.765445 2146 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-35&limit=500&resourceVersion=0\": dial tcp 172.31.19.35:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:19.769400 env[1652]: time="2025-07-12T00:26:19.769335085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:19.772104 env[1652]: time="2025-07-12T00:26:19.772052292Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:19.774416 env[1652]: time="2025-07-12T00:26:19.774361667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:19.777619 env[1652]: time="2025-07-12T00:26:19.777559499Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:19.782798 env[1652]: time="2025-07-12T00:26:19.782747801Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:19.814469 env[1652]: time="2025-07-12T00:26:19.814332196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:19.832458 kubelet[2146]: W0712 00:26:19.832326 2146 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.35:6443: connect: connection refused Jul 12 00:26:19.832458 kubelet[2146]: E0712 00:26:19.832416 2146 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.35:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:19.835623 env[1652]: time="2025-07-12T00:26:19.835563498Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:19.855464 env[1652]: time="2025-07-12T00:26:19.855339398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:19.855668 env[1652]: time="2025-07-12T00:26:19.855423932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:19.855668 env[1652]: time="2025-07-12T00:26:19.855451789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:19.856440 env[1652]: time="2025-07-12T00:26:19.856320501Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/85fec43cb3eaf0b74a1e992792197056fd9d10d5c0dc7b7dbbc796b2b522f2ab pid=2190 runtime=io.containerd.runc.v2 Jul 12 00:26:19.857495 env[1652]: time="2025-07-12T00:26:19.857395271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:19.857758 env[1652]: time="2025-07-12T00:26:19.857689774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:19.862660 env[1652]: time="2025-07-12T00:26:19.862596884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:19.863927 env[1652]: time="2025-07-12T00:26:19.863752646Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/318fe06d96ccda4c6b5a84fcf2ef2a2c9bb998e6a6105ecccb12d87bed04bf67 pid=2196 runtime=io.containerd.runc.v2 Jul 12 00:26:19.884644 env[1652]: time="2025-07-12T00:26:19.884506120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:19.884800 env[1652]: time="2025-07-12T00:26:19.884669641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:19.884800 env[1652]: time="2025-07-12T00:26:19.884758279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:19.885464 env[1652]: time="2025-07-12T00:26:19.885338584Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f15314abf57145d0a8403000d06a0f972afe204efeaba7088f197e16b1b6786 pid=2232 runtime=io.containerd.runc.v2 Jul 12 00:26:19.899293 systemd[1]: Started cri-containerd-85fec43cb3eaf0b74a1e992792197056fd9d10d5c0dc7b7dbbc796b2b522f2ab.scope. Jul 12 00:26:19.911587 systemd[1]: Started cri-containerd-318fe06d96ccda4c6b5a84fcf2ef2a2c9bb998e6a6105ecccb12d87bed04bf67.scope. Jul 12 00:26:19.969068 systemd[1]: Started cri-containerd-4f15314abf57145d0a8403000d06a0f972afe204efeaba7088f197e16b1b6786.scope. Jul 12 00:26:19.982457 kubelet[2146]: W0712 00:26:19.982235 2146 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.35:6443: connect: connection refused Jul 12 00:26:19.982457 kubelet[2146]: E0712 00:26:19.982394 2146 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.35:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:20.049401 env[1652]: time="2025-07-12T00:26:20.047170791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-35,Uid:a4c32a8a3432dd925ba71cff3b8f4092,Namespace:kube-system,Attempt:0,} returns sandbox id \"85fec43cb3eaf0b74a1e992792197056fd9d10d5c0dc7b7dbbc796b2b522f2ab\"" Jul 12 00:26:20.065950 kubelet[2146]: W0712 00:26:20.065614 2146 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://172.31.19.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.35:6443: connect: connection refused Jul 12 00:26:20.065950 kubelet[2146]: E0712 00:26:20.065761 2146 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.35:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:26:20.068438 env[1652]: time="2025-07-12T00:26:20.068374944Z" level=info msg="CreateContainer within sandbox \"85fec43cb3eaf0b74a1e992792197056fd9d10d5c0dc7b7dbbc796b2b522f2ab\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:26:20.090985 env[1652]: time="2025-07-12T00:26:20.088954688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-35,Uid:c1dc35bc07a5b06dc1e77ba0c77a070e,Namespace:kube-system,Attempt:0,} returns sandbox id \"318fe06d96ccda4c6b5a84fcf2ef2a2c9bb998e6a6105ecccb12d87bed04bf67\"" Jul 12 00:26:20.101460 env[1652]: time="2025-07-12T00:26:20.101404375Z" level=info msg="CreateContainer within sandbox \"318fe06d96ccda4c6b5a84fcf2ef2a2c9bb998e6a6105ecccb12d87bed04bf67\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:26:20.118560 env[1652]: time="2025-07-12T00:26:20.118466137Z" level=info msg="CreateContainer within sandbox \"85fec43cb3eaf0b74a1e992792197056fd9d10d5c0dc7b7dbbc796b2b522f2ab\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"13e6aa0c6942dff7a4600c6fe490ca3b671f10b0b3f9b69fde00ee7f6879d3fb\"" Jul 12 00:26:20.119759 env[1652]: time="2025-07-12T00:26:20.119689747Z" level=info msg="StartContainer for \"13e6aa0c6942dff7a4600c6fe490ca3b671f10b0b3f9b69fde00ee7f6879d3fb\"" Jul 12 00:26:20.135326 env[1652]: time="2025-07-12T00:26:20.135251426Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-35,Uid:e62dd414dc9e0874fba5e3428a4a17ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f15314abf57145d0a8403000d06a0f972afe204efeaba7088f197e16b1b6786\"" Jul 12 00:26:20.142587 kubelet[2146]: E0712 00:26:20.141751 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-35?timeout=10s\": dial tcp 172.31.19.35:6443: connect: connection refused" interval="1.6s" Jul 12 00:26:20.143506 env[1652]: time="2025-07-12T00:26:20.143436587Z" level=info msg="CreateContainer within sandbox \"4f15314abf57145d0a8403000d06a0f972afe204efeaba7088f197e16b1b6786\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:26:20.156426 env[1652]: time="2025-07-12T00:26:20.156341305Z" level=info msg="CreateContainer within sandbox \"318fe06d96ccda4c6b5a84fcf2ef2a2c9bb998e6a6105ecccb12d87bed04bf67\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"92a34b41a9dad0d1861e4fb9d5aabbfd5eee7ff0a779c90f8e7cf895e8dd930e\"" Jul 12 00:26:20.157329 env[1652]: time="2025-07-12T00:26:20.157281913Z" level=info msg="StartContainer for \"92a34b41a9dad0d1861e4fb9d5aabbfd5eee7ff0a779c90f8e7cf895e8dd930e\"" Jul 12 00:26:20.174798 systemd[1]: Started cri-containerd-13e6aa0c6942dff7a4600c6fe490ca3b671f10b0b3f9b69fde00ee7f6879d3fb.scope. 
Jul 12 00:26:20.188214 env[1652]: time="2025-07-12T00:26:20.188121128Z" level=info msg="CreateContainer within sandbox \"4f15314abf57145d0a8403000d06a0f972afe204efeaba7088f197e16b1b6786\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1c12c931e17ee5083604afa940ab93c3d39bcdf64acfb254751602eba1d6a66d\"" Jul 12 00:26:20.190378 env[1652]: time="2025-07-12T00:26:20.190290556Z" level=info msg="StartContainer for \"1c12c931e17ee5083604afa940ab93c3d39bcdf64acfb254751602eba1d6a66d\"" Jul 12 00:26:20.239277 systemd[1]: Started cri-containerd-1c12c931e17ee5083604afa940ab93c3d39bcdf64acfb254751602eba1d6a66d.scope. Jul 12 00:26:20.268430 systemd[1]: Started cri-containerd-92a34b41a9dad0d1861e4fb9d5aabbfd5eee7ff0a779c90f8e7cf895e8dd930e.scope. Jul 12 00:26:20.307369 env[1652]: time="2025-07-12T00:26:20.307304633Z" level=info msg="StartContainer for \"13e6aa0c6942dff7a4600c6fe490ca3b671f10b0b3f9b69fde00ee7f6879d3fb\" returns successfully" Jul 12 00:26:20.344984 kubelet[2146]: I0712 00:26:20.344312 2146 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-35" Jul 12 00:26:20.345892 kubelet[2146]: E0712 00:26:20.345828 2146 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.35:6443/api/v1/nodes\": dial tcp 172.31.19.35:6443: connect: connection refused" node="ip-172-31-19-35" Jul 12 00:26:20.388478 env[1652]: time="2025-07-12T00:26:20.388389559Z" level=info msg="StartContainer for \"92a34b41a9dad0d1861e4fb9d5aabbfd5eee7ff0a779c90f8e7cf895e8dd930e\" returns successfully" Jul 12 00:26:20.443446 env[1652]: time="2025-07-12T00:26:20.443364909Z" level=info msg="StartContainer for \"1c12c931e17ee5083604afa940ab93c3d39bcdf64acfb254751602eba1d6a66d\" returns successfully" Jul 12 00:26:20.811444 kubelet[2146]: E0712 00:26:20.811406 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-35\" not found" 
node="ip-172-31-19-35" Jul 12 00:26:20.818119 kubelet[2146]: E0712 00:26:20.817704 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-35\" not found" node="ip-172-31-19-35" Jul 12 00:26:20.819592 kubelet[2146]: E0712 00:26:20.819558 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-35\" not found" node="ip-172-31-19-35" Jul 12 00:26:21.822961 kubelet[2146]: E0712 00:26:21.822924 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-35\" not found" node="ip-172-31-19-35" Jul 12 00:26:21.824115 kubelet[2146]: E0712 00:26:21.823884 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-35\" not found" node="ip-172-31-19-35" Jul 12 00:26:21.948439 kubelet[2146]: I0712 00:26:21.948374 2146 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-35" Jul 12 00:26:24.042764 kubelet[2146]: E0712 00:26:24.042728 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-35\" not found" node="ip-172-31-19-35" Jul 12 00:26:24.662106 kubelet[2146]: E0712 00:26:24.662062 2146 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-35\" not found" node="ip-172-31-19-35" Jul 12 00:26:24.685896 kubelet[2146]: E0712 00:26:24.685688 2146 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-19-35.1851595b4d594b4a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-35,UID:ip-172-31-19-35,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-35,},FirstTimestamp:2025-07-12 00:26:18.69842721 +0000 UTC m=+1.609532248,LastTimestamp:2025-07-12 00:26:18.69842721 +0000 UTC m=+1.609532248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-35,}" Jul 12 00:26:24.695317 kubelet[2146]: I0712 00:26:24.695272 2146 apiserver.go:52] "Watching apiserver" Jul 12 00:26:24.734313 kubelet[2146]: I0712 00:26:24.734262 2146 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:26:24.797536 kubelet[2146]: I0712 00:26:24.797483 2146 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-35" Jul 12 00:26:24.831172 kubelet[2146]: I0712 00:26:24.831096 2146 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:24.903355 kubelet[2146]: E0712 00:26:24.903308 2146 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-35\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:24.903578 kubelet[2146]: I0712 00:26:24.903553 2146 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-35" Jul 12 00:26:24.912760 kubelet[2146]: E0712 00:26:24.912605 2146 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-35\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-19-35" Jul 12 00:26:24.912760 kubelet[2146]: I0712 00:26:24.912664 2146 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-35" Jul 12 00:26:24.923664 kubelet[2146]: E0712 00:26:24.923594 2146 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ip-172-31-19-35\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-35" Jul 12 00:26:25.177372 amazon-ssm-agent[1626]: 2025-07-12 00:26:25 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Jul 12 00:26:27.087375 systemd[1]: Reloading. Jul 12 00:26:27.192191 kubelet[2146]: I0712 00:26:27.192151 2146 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:27.260354 /usr/lib/systemd/system-generators/torcx-generator[2441]: time="2025-07-12T00:26:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:26:27.264381 /usr/lib/systemd/system-generators/torcx-generator[2441]: time="2025-07-12T00:26:27Z" level=info msg="torcx already run" Jul 12 00:26:27.406317 update_engine[1639]: I0712 00:26:27.405280 1639 update_attempter.cc:509] Updating boot flags... Jul 12 00:26:27.500933 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:26:27.501185 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:26:27.560374 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:26:27.978548 systemd[1]: Stopping kubelet.service... Jul 12 00:26:28.015187 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 12 00:26:28.015622 systemd[1]: Stopped kubelet.service. Jul 12 00:26:28.015711 systemd[1]: kubelet.service: Consumed 2.354s CPU time. Jul 12 00:26:28.019047 systemd[1]: Starting kubelet.service... Jul 12 00:26:28.364546 systemd[1]: Started kubelet.service. Jul 12 00:26:28.502854 kubelet[2595]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:26:28.502854 kubelet[2595]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:26:28.502854 kubelet[2595]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:26:28.503498 kubelet[2595]: I0712 00:26:28.503070 2595 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:26:28.520052 sudo[2605]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 12 00:26:28.520999 sudo[2605]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 12 00:26:28.528617 kubelet[2595]: I0712 00:26:28.528568 2595 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:26:28.528885 kubelet[2595]: I0712 00:26:28.528862 2595 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:26:28.536234 kubelet[2595]: I0712 00:26:28.529472 2595 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:26:28.539097 kubelet[2595]: I0712 00:26:28.539047 2595 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:26:28.547165 kubelet[2595]: I0712 00:26:28.547119 2595 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:26:28.563662 kubelet[2595]: E0712 00:26:28.563607 2595 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:26:28.563883 kubelet[2595]: I0712 00:26:28.563858 2595 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:26:28.568372 kubelet[2595]: I0712 00:26:28.568333 2595 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:26:28.573452 kubelet[2595]: I0712 00:26:28.573393 2595 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:26:28.573926 kubelet[2595]: I0712 00:26:28.573642 2595 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-35","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:26:28.574168 kubelet[2595]: I0712 00:26:28.574142 2595 topology_manager.go:138] "Creating topology manager with none 
policy" Jul 12 00:26:28.574334 kubelet[2595]: I0712 00:26:28.574313 2595 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 00:26:28.574517 kubelet[2595]: I0712 00:26:28.574496 2595 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:26:28.574889 kubelet[2595]: I0712 00:26:28.574854 2595 kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:26:28.576507 kubelet[2595]: I0712 00:26:28.576477 2595 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:26:28.576783 kubelet[2595]: I0712 00:26:28.576762 2595 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:26:28.582998 kubelet[2595]: I0712 00:26:28.582959 2595 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:26:28.603985 kubelet[2595]: I0712 00:26:28.603943 2595 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:26:28.607086 kubelet[2595]: I0712 00:26:28.607048 2595 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:26:28.626882 kubelet[2595]: I0712 00:26:28.626752 2595 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:26:28.631679 kubelet[2595]: I0712 00:26:28.631639 2595 server.go:1287] "Started kubelet" Jul 12 00:26:28.634596 kubelet[2595]: I0712 00:26:28.634527 2595 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:26:28.640470 kubelet[2595]: I0712 00:26:28.640433 2595 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:26:28.642631 kubelet[2595]: I0712 00:26:28.642558 2595 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:26:28.644926 kubelet[2595]: I0712 00:26:28.644800 2595 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:26:28.652609 kubelet[2595]: I0712 00:26:28.652562 
2595 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:26:28.680858 kubelet[2595]: E0712 00:26:28.680789 2595 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:26:28.684842 kubelet[2595]: I0712 00:26:28.656469 2595 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:26:28.685232 kubelet[2595]: I0712 00:26:28.685175 2595 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:26:28.688652 kubelet[2595]: I0712 00:26:28.685381 2595 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:26:28.689056 kubelet[2595]: I0712 00:26:28.689034 2595 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:26:28.695075 kubelet[2595]: I0712 00:26:28.695031 2595 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:26:28.695456 kubelet[2595]: I0712 00:26:28.695422 2595 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:26:28.705270 kubelet[2595]: I0712 00:26:28.705230 2595 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:26:28.761455 kubelet[2595]: I0712 00:26:28.761402 2595 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:26:28.767843 kubelet[2595]: I0712 00:26:28.767804 2595 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:26:28.768035 kubelet[2595]: I0712 00:26:28.768014 2595 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:26:28.768159 kubelet[2595]: I0712 00:26:28.768138 2595 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 12 00:26:28.768345 kubelet[2595]: I0712 00:26:28.768326 2595 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:26:28.768528 kubelet[2595]: E0712 00:26:28.768498 2595 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:26:28.871234 kubelet[2595]: E0712 00:26:28.871153 2595 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:26:28.873016 kubelet[2595]: I0712 00:26:28.872952 2595 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:26:28.873016 kubelet[2595]: I0712 00:26:28.872985 2595 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:26:28.873016 kubelet[2595]: I0712 00:26:28.873022 2595 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:26:28.873311 kubelet[2595]: I0712 00:26:28.873295 2595 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:26:28.873374 kubelet[2595]: I0712 00:26:28.873318 2595 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:26:28.873374 kubelet[2595]: I0712 00:26:28.873353 2595 policy_none.go:49] "None policy: Start" Jul 12 00:26:28.873374 kubelet[2595]: I0712 00:26:28.873371 2595 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:26:28.873550 kubelet[2595]: I0712 00:26:28.873391 2595 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:26:28.873616 kubelet[2595]: I0712 00:26:28.873573 2595 state_mem.go:75] "Updated machine memory state" Jul 12 00:26:28.893105 kubelet[2595]: I0712 00:26:28.892959 
2595 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:26:28.893739 kubelet[2595]: I0712 00:26:28.893350 2595 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:26:28.893739 kubelet[2595]: I0712 00:26:28.893387 2595 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:26:28.893942 kubelet[2595]: I0712 00:26:28.893780 2595 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:26:28.901114 kubelet[2595]: E0712 00:26:28.900882 2595 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 00:26:29.010801 kubelet[2595]: I0712 00:26:29.010750 2595 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-35" Jul 12 00:26:29.026936 kubelet[2595]: I0712 00:26:29.026880 2595 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-19-35" Jul 12 00:26:29.027140 kubelet[2595]: I0712 00:26:29.027010 2595 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-35" Jul 12 00:26:29.072581 kubelet[2595]: I0712 00:26:29.072529 2595 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:29.073229 kubelet[2595]: I0712 00:26:29.073166 2595 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-35" Jul 12 00:26:29.073700 kubelet[2595]: I0712 00:26:29.073657 2595 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-35" Jul 12 00:26:29.093042 kubelet[2595]: E0712 00:26:29.092980 2595 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-35\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:29.105481 
kubelet[2595]: I0712 00:26:29.105426 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a4c32a8a3432dd925ba71cff3b8f4092-ca-certs\") pod \"kube-apiserver-ip-172-31-19-35\" (UID: \"a4c32a8a3432dd925ba71cff3b8f4092\") " pod="kube-system/kube-apiserver-ip-172-31-19-35" Jul 12 00:26:29.105670 kubelet[2595]: I0712 00:26:29.105497 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1dc35bc07a5b06dc1e77ba0c77a070e-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-35\" (UID: \"c1dc35bc07a5b06dc1e77ba0c77a070e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:29.105670 kubelet[2595]: I0712 00:26:29.105544 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1dc35bc07a5b06dc1e77ba0c77a070e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-35\" (UID: \"c1dc35bc07a5b06dc1e77ba0c77a070e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:29.105670 kubelet[2595]: I0712 00:26:29.105592 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1dc35bc07a5b06dc1e77ba0c77a070e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-35\" (UID: \"c1dc35bc07a5b06dc1e77ba0c77a070e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:29.105670 kubelet[2595]: I0712 00:26:29.105633 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1dc35bc07a5b06dc1e77ba0c77a070e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-35\" (UID: \"c1dc35bc07a5b06dc1e77ba0c77a070e\") " 
pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:29.105922 kubelet[2595]: I0712 00:26:29.105674 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e62dd414dc9e0874fba5e3428a4a17ed-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-35\" (UID: \"e62dd414dc9e0874fba5e3428a4a17ed\") " pod="kube-system/kube-scheduler-ip-172-31-19-35" Jul 12 00:26:29.105922 kubelet[2595]: I0712 00:26:29.105710 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4c32a8a3432dd925ba71cff3b8f4092-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-35\" (UID: \"a4c32a8a3432dd925ba71cff3b8f4092\") " pod="kube-system/kube-apiserver-ip-172-31-19-35" Jul 12 00:26:29.105922 kubelet[2595]: I0712 00:26:29.105748 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1dc35bc07a5b06dc1e77ba0c77a070e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-35\" (UID: \"c1dc35bc07a5b06dc1e77ba0c77a070e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-35" Jul 12 00:26:29.105922 kubelet[2595]: I0712 00:26:29.105818 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4c32a8a3432dd925ba71cff3b8f4092-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-35\" (UID: \"a4c32a8a3432dd925ba71cff3b8f4092\") " pod="kube-system/kube-apiserver-ip-172-31-19-35" Jul 12 00:26:29.584604 kubelet[2595]: I0712 00:26:29.584562 2595 apiserver.go:52] "Watching apiserver" Jul 12 00:26:29.589798 kubelet[2595]: I0712 00:26:29.589754 2595 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:26:29.590543 
sudo[2605]: pam_unix(sudo:session): session closed for user root Jul 12 00:26:29.877478 kubelet[2595]: I0712 00:26:29.877269 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-35" podStartSLOduration=2.877227377 podStartE2EDuration="2.877227377s" podCreationTimestamp="2025-07-12 00:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:26:29.863307693 +0000 UTC m=+1.484448677" watchObservedRunningTime="2025-07-12 00:26:29.877227377 +0000 UTC m=+1.498368397" Jul 12 00:26:29.877951 kubelet[2595]: I0712 00:26:29.877871 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-35" podStartSLOduration=0.877829633 podStartE2EDuration="877.829633ms" podCreationTimestamp="2025-07-12 00:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:26:29.876763305 +0000 UTC m=+1.497904301" watchObservedRunningTime="2025-07-12 00:26:29.877829633 +0000 UTC m=+1.498970605" Jul 12 00:26:29.897866 kubelet[2595]: I0712 00:26:29.897772 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-35" podStartSLOduration=0.897749847 podStartE2EDuration="897.749847ms" podCreationTimestamp="2025-07-12 00:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:26:29.89617791 +0000 UTC m=+1.517318918" watchObservedRunningTime="2025-07-12 00:26:29.897749847 +0000 UTC m=+1.518890819" Jul 12 00:26:32.159512 kubelet[2595]: I0712 00:26:32.159319 2595 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:26:32.160753 env[1652]: 
time="2025-07-12T00:26:32.160693239Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 00:26:32.162104 kubelet[2595]: I0712 00:26:32.161822 2595 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:26:32.188692 systemd[1]: Created slice kubepods-besteffort-pod6abfa829_869f_4510_9bd2_cc3d8317b921.slice. Jul 12 00:26:32.213764 kubelet[2595]: I0712 00:26:32.213703 2595 status_manager.go:890] "Failed to get status for pod" podUID="6abfa829-869f-4510-9bd2-cc3d8317b921" pod="kube-system/kube-proxy-ffh9d" err="pods \"kube-proxy-ffh9d\" is forbidden: User \"system:node:ip-172-31-19-35\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-35' and this object" Jul 12 00:26:32.229022 kubelet[2595]: I0712 00:26:32.228981 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-752c4\" (UniqueName: \"kubernetes.io/projected/6abfa829-869f-4510-9bd2-cc3d8317b921-kube-api-access-752c4\") pod \"kube-proxy-ffh9d\" (UID: \"6abfa829-869f-4510-9bd2-cc3d8317b921\") " pod="kube-system/kube-proxy-ffh9d" Jul 12 00:26:32.229370 kubelet[2595]: I0712 00:26:32.229339 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6abfa829-869f-4510-9bd2-cc3d8317b921-kube-proxy\") pod \"kube-proxy-ffh9d\" (UID: \"6abfa829-869f-4510-9bd2-cc3d8317b921\") " pod="kube-system/kube-proxy-ffh9d" Jul 12 00:26:32.229559 kubelet[2595]: I0712 00:26:32.229531 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6abfa829-869f-4510-9bd2-cc3d8317b921-xtables-lock\") pod \"kube-proxy-ffh9d\" (UID: \"6abfa829-869f-4510-9bd2-cc3d8317b921\") " pod="kube-system/kube-proxy-ffh9d" 
Jul 12 00:26:32.229747 kubelet[2595]: I0712 00:26:32.229718 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6abfa829-869f-4510-9bd2-cc3d8317b921-lib-modules\") pod \"kube-proxy-ffh9d\" (UID: \"6abfa829-869f-4510-9bd2-cc3d8317b921\") " pod="kube-system/kube-proxy-ffh9d"
Jul 12 00:26:32.235289 systemd[1]: Created slice kubepods-burstable-podd7bb525f_fa5b_430f_981a_53b0f6311998.slice.
Jul 12 00:26:32.331060 kubelet[2595]: I0712 00:26:32.331012 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-xtables-lock\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.331361 kubelet[2595]: I0712 00:26:32.331330 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-cilium-cgroup\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.331532 kubelet[2595]: I0712 00:26:32.331507 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-etc-cni-netd\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.331672 kubelet[2595]: I0712 00:26:32.331647 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-lib-modules\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.331848 kubelet[2595]: I0712 00:26:32.331821 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7bb525f-fa5b-430f-981a-53b0f6311998-clustermesh-secrets\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.332053 kubelet[2595]: I0712 00:26:32.332023 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7bb525f-fa5b-430f-981a-53b0f6311998-cilium-config-path\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.332272 kubelet[2595]: I0712 00:26:32.332238 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-cilium-run\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.332436 kubelet[2595]: I0712 00:26:32.332411 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-bpf-maps\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.332575 kubelet[2595]: I0712 00:26:32.332550 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-hostproc\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.332717 kubelet[2595]: I0712 00:26:32.332689 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhqnr\" (UniqueName: \"kubernetes.io/projected/d7bb525f-fa5b-430f-981a-53b0f6311998-kube-api-access-nhqnr\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.332861 kubelet[2595]: I0712 00:26:32.332835 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-cni-path\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.333010 kubelet[2595]: I0712 00:26:32.332973 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7bb525f-fa5b-430f-981a-53b0f6311998-hubble-tls\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.333400 kubelet[2595]: I0712 00:26:32.333331 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-host-proc-sys-net\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.333500 kubelet[2595]: I0712 00:26:32.333417 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-host-proc-sys-kernel\") pod \"cilium-4hdbk\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") " pod="kube-system/cilium-4hdbk"
Jul 12 00:26:32.349616 kubelet[2595]: E0712 00:26:32.349540 2595 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 12 00:26:32.349831 kubelet[2595]: E0712 00:26:32.349804 2595 projected.go:194] Error preparing data for projected volume kube-api-access-752c4 for pod kube-system/kube-proxy-ffh9d: configmap "kube-root-ca.crt" not found
Jul 12 00:26:32.350481 kubelet[2595]: E0712 00:26:32.350449 2595 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6abfa829-869f-4510-9bd2-cc3d8317b921-kube-api-access-752c4 podName:6abfa829-869f-4510-9bd2-cc3d8317b921 nodeName:}" failed. No retries permitted until 2025-07-12 00:26:32.850072512 +0000 UTC m=+4.471213472 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-752c4" (UniqueName: "kubernetes.io/projected/6abfa829-869f-4510-9bd2-cc3d8317b921-kube-api-access-752c4") pod "kube-proxy-ffh9d" (UID: "6abfa829-869f-4510-9bd2-cc3d8317b921") : configmap "kube-root-ca.crt" not found
Jul 12 00:26:32.405264 kubelet[2595]: E0712 00:26:32.405175 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-nhqnr lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-4hdbk" podUID="d7bb525f-fa5b-430f-981a-53b0f6311998"
Jul 12 00:26:32.435268 kubelet[2595]: I0712 00:26:32.435113 2595 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jul 12 00:26:32.489276 kubelet[2595]: E0712 00:26:32.489235 2595 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 12 00:26:32.489526 kubelet[2595]: E0712 00:26:32.489490 2595 projected.go:194] Error preparing data for projected volume kube-api-access-nhqnr for pod kube-system/cilium-4hdbk: configmap "kube-root-ca.crt" not found
Jul 12 00:26:32.489714 kubelet[2595]: E0712 00:26:32.489690 2595 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d7bb525f-fa5b-430f-981a-53b0f6311998-kube-api-access-nhqnr podName:d7bb525f-fa5b-430f-981a-53b0f6311998 nodeName:}" failed. No retries permitted until 2025-07-12 00:26:32.989661682 +0000 UTC m=+4.610802654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nhqnr" (UniqueName: "kubernetes.io/projected/d7bb525f-fa5b-430f-981a-53b0f6311998-kube-api-access-nhqnr") pod "cilium-4hdbk" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998") : configmap "kube-root-ca.crt" not found
Jul 12 00:26:32.940067 kubelet[2595]: I0712 00:26:32.940022 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7bb525f-fa5b-430f-981a-53b0f6311998-clustermesh-secrets\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:32.940363 kubelet[2595]: I0712 00:26:32.940336 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-cilium-run\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:32.940531 kubelet[2595]: I0712 00:26:32.940504 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-host-proc-sys-net\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:32.940687 kubelet[2595]: I0712 00:26:32.940658 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7bb525f-fa5b-430f-981a-53b0f6311998-hubble-tls\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:32.940843 kubelet[2595]: I0712 00:26:32.940816 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-lib-modules\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:32.940992 kubelet[2595]: I0712 00:26:32.940968 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-hostproc\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:32.941131 kubelet[2595]: I0712 00:26:32.941108 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-cni-path\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:32.941350 kubelet[2595]: I0712 00:26:32.941323 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-xtables-lock\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:32.941489 kubelet[2595]: I0712 00:26:32.941465 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-etc-cni-netd\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:32.941651 kubelet[2595]: I0712 00:26:32.941625 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-host-proc-sys-kernel\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:32.941804 kubelet[2595]: I0712 00:26:32.941780 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-cilium-cgroup\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:32.941943 kubelet[2595]: I0712 00:26:32.941918 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7bb525f-fa5b-430f-981a-53b0f6311998-cilium-config-path\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:32.942088 kubelet[2595]: I0712 00:26:32.942063 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-bpf-maps\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:32.948966 systemd[1]: var-lib-kubelet-pods-d7bb525f\x2dfa5b\x2d430f\x2d981a\x2d53b0f6311998-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 12 00:26:32.957529 kubelet[2595]: I0712 00:26:32.957464 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7bb525f-fa5b-430f-981a-53b0f6311998-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 12 00:26:32.957788 kubelet[2595]: I0712 00:26:32.957574 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-cni-path" (OuterVolumeSpecName: "cni-path") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:26:32.957788 kubelet[2595]: I0712 00:26:32.957625 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:26:32.957788 kubelet[2595]: I0712 00:26:32.957668 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:26:32.958831 kubelet[2595]: I0712 00:26:32.958794 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:26:32.959073 kubelet[2595]: I0712 00:26:32.959047 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:26:32.959430 kubelet[2595]: I0712 00:26:32.959397 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:26:32.959646 kubelet[2595]: I0712 00:26:32.959616 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:26:32.970974 systemd[1]: var-lib-kubelet-pods-d7bb525f\x2dfa5b\x2d430f\x2d981a\x2d53b0f6311998-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 12 00:26:32.972732 kubelet[2595]: I0712 00:26:32.972690 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:26:32.972939 kubelet[2595]: I0712 00:26:32.972899 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-hostproc" (OuterVolumeSpecName: "hostproc") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:26:32.973139 kubelet[2595]: I0712 00:26:32.973101 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:26:32.974901 kubelet[2595]: I0712 00:26:32.974061 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7bb525f-fa5b-430f-981a-53b0f6311998-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 12 00:26:32.975821 kubelet[2595]: I0712 00:26:32.975767 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7bb525f-fa5b-430f-981a-53b0f6311998-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 12 00:26:33.043982 kubelet[2595]: I0712 00:26:33.043544 2595 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-cilium-cgroup\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.043982 kubelet[2595]: I0712 00:26:33.043597 2595 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7bb525f-fa5b-430f-981a-53b0f6311998-cilium-config-path\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.043982 kubelet[2595]: I0712 00:26:33.043625 2595 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-bpf-maps\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.043982 kubelet[2595]: I0712 00:26:33.043647 2595 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7bb525f-fa5b-430f-981a-53b0f6311998-clustermesh-secrets\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.043982 kubelet[2595]: I0712 00:26:33.043669 2595 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-cilium-run\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.043982 kubelet[2595]: I0712 00:26:33.043690 2595 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-host-proc-sys-net\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.043982 kubelet[2595]: I0712 00:26:33.043710 2595 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7bb525f-fa5b-430f-981a-53b0f6311998-hubble-tls\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.043982 kubelet[2595]: I0712 00:26:33.043732 2595 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-lib-modules\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.044618 kubelet[2595]: I0712 00:26:33.043752 2595 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-hostproc\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.044618 kubelet[2595]: I0712 00:26:33.043772 2595 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-cni-path\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.044618 kubelet[2595]: I0712 00:26:33.043793 2595 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-xtables-lock\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.044618 kubelet[2595]: I0712 00:26:33.043814 2595 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-etc-cni-netd\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.044618 kubelet[2595]: I0712 00:26:33.043834 2595 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7bb525f-fa5b-430f-981a-53b0f6311998-host-proc-sys-kernel\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.095343 systemd[1]: Created slice kubepods-besteffort-pod07b2d995_5de9_4d86_bedd_ee2e93752809.slice.
Jul 12 00:26:33.110087 env[1652]: time="2025-07-12T00:26:33.109471388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ffh9d,Uid:6abfa829-869f-4510-9bd2-cc3d8317b921,Namespace:kube-system,Attempt:0,}"
Jul 12 00:26:33.144352 kubelet[2595]: I0712 00:26:33.144284 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhqnr\" (UniqueName: \"kubernetes.io/projected/d7bb525f-fa5b-430f-981a-53b0f6311998-kube-api-access-nhqnr\") pod \"d7bb525f-fa5b-430f-981a-53b0f6311998\" (UID: \"d7bb525f-fa5b-430f-981a-53b0f6311998\") "
Jul 12 00:26:33.144762 kubelet[2595]: I0712 00:26:33.144727 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07b2d995-5de9-4d86-bedd-ee2e93752809-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xd4b4\" (UID: \"07b2d995-5de9-4d86-bedd-ee2e93752809\") " pod="kube-system/cilium-operator-6c4d7847fc-xd4b4"
Jul 12 00:26:33.145693 kubelet[2595]: I0712 00:26:33.145624 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpvb\" (UniqueName: \"kubernetes.io/projected/07b2d995-5de9-4d86-bedd-ee2e93752809-kube-api-access-fvpvb\") pod \"cilium-operator-6c4d7847fc-xd4b4\" (UID: \"07b2d995-5de9-4d86-bedd-ee2e93752809\") " pod="kube-system/cilium-operator-6c4d7847fc-xd4b4"
Jul 12 00:26:33.155966 env[1652]: time="2025-07-12T00:26:33.151704442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:26:33.155966 env[1652]: time="2025-07-12T00:26:33.151795147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:26:33.155966 env[1652]: time="2025-07-12T00:26:33.151823049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:26:33.155966 env[1652]: time="2025-07-12T00:26:33.152102101Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03771001c4f779d3ec51f58c60db82959072db7a852a29f92607af0c6deea49a pid=2663 runtime=io.containerd.runc.v2
Jul 12 00:26:33.158114 kubelet[2595]: I0712 00:26:33.157575 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7bb525f-fa5b-430f-981a-53b0f6311998-kube-api-access-nhqnr" (OuterVolumeSpecName: "kube-api-access-nhqnr") pod "d7bb525f-fa5b-430f-981a-53b0f6311998" (UID: "d7bb525f-fa5b-430f-981a-53b0f6311998"). InnerVolumeSpecName "kube-api-access-nhqnr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 12 00:26:33.195033 systemd[1]: Started cri-containerd-03771001c4f779d3ec51f58c60db82959072db7a852a29f92607af0c6deea49a.scope.
Jul 12 00:26:33.247719 kubelet[2595]: I0712 00:26:33.247676 2595 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nhqnr\" (UniqueName: \"kubernetes.io/projected/d7bb525f-fa5b-430f-981a-53b0f6311998-kube-api-access-nhqnr\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:26:33.374687 env[1652]: time="2025-07-12T00:26:33.374619061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ffh9d,Uid:6abfa829-869f-4510-9bd2-cc3d8317b921,Namespace:kube-system,Attempt:0,} returns sandbox id \"03771001c4f779d3ec51f58c60db82959072db7a852a29f92607af0c6deea49a\""
Jul 12 00:26:33.382625 env[1652]: time="2025-07-12T00:26:33.381593329Z" level=info msg="CreateContainer within sandbox \"03771001c4f779d3ec51f58c60db82959072db7a852a29f92607af0c6deea49a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 12 00:26:33.403707 env[1652]: time="2025-07-12T00:26:33.403632124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xd4b4,Uid:07b2d995-5de9-4d86-bedd-ee2e93752809,Namespace:kube-system,Attempt:0,}"
Jul 12 00:26:33.414431 env[1652]: time="2025-07-12T00:26:33.414367807Z" level=info msg="CreateContainer within sandbox \"03771001c4f779d3ec51f58c60db82959072db7a852a29f92607af0c6deea49a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"abcbe955fc1bd09336440eff3d885faa188ee0524f20426d02153d0996c9a577\""
Jul 12 00:26:33.417895 env[1652]: time="2025-07-12T00:26:33.415758866Z" level=info msg="StartContainer for \"abcbe955fc1bd09336440eff3d885faa188ee0524f20426d02153d0996c9a577\""
Jul 12 00:26:33.464240 env[1652]: time="2025-07-12T00:26:33.464031222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:26:33.464681 env[1652]: time="2025-07-12T00:26:33.464614564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:26:33.464864 env[1652]: time="2025-07-12T00:26:33.464819880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:26:33.472430 env[1652]: time="2025-07-12T00:26:33.472265231Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e pid=2720 runtime=io.containerd.runc.v2
Jul 12 00:26:33.490498 systemd[1]: Started cri-containerd-abcbe955fc1bd09336440eff3d885faa188ee0524f20426d02153d0996c9a577.scope.
Jul 12 00:26:33.520837 systemd[1]: Started cri-containerd-15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e.scope.
Jul 12 00:26:33.551715 systemd[1]: run-containerd-runc-k8s.io-15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e-runc.ULxfRM.mount: Deactivated successfully.
Jul 12 00:26:33.593125 env[1652]: time="2025-07-12T00:26:33.593062191Z" level=info msg="StartContainer for \"abcbe955fc1bd09336440eff3d885faa188ee0524f20426d02153d0996c9a577\" returns successfully"
Jul 12 00:26:33.669759 env[1652]: time="2025-07-12T00:26:33.669704261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xd4b4,Uid:07b2d995-5de9-4d86-bedd-ee2e93752809,Namespace:kube-system,Attempt:0,} returns sandbox id \"15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e\""
Jul 12 00:26:33.673026 env[1652]: time="2025-07-12T00:26:33.672917354Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 12 00:26:33.865056 systemd[1]: Removed slice kubepods-burstable-podd7bb525f_fa5b_430f_981a_53b0f6311998.slice.
Jul 12 00:26:33.871541 kubelet[2595]: I0712 00:26:33.871449 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ffh9d" podStartSLOduration=1.871428343 podStartE2EDuration="1.871428343s" podCreationTimestamp="2025-07-12 00:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:26:33.870951163 +0000 UTC m=+5.492092159" watchObservedRunningTime="2025-07-12 00:26:33.871428343 +0000 UTC m=+5.492569303"
Jul 12 00:26:33.960566 systemd[1]: Created slice kubepods-burstable-pod59a5576e_d8c5_4c71_97ca_a2ec671e645e.slice.
Jul 12 00:26:34.058048 kubelet[2595]: I0712 00:26:34.058003 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-bpf-maps\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.058346 kubelet[2595]: I0712 00:26:34.058317 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-etc-cni-netd\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.058533 kubelet[2595]: I0712 00:26:34.058494 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59a5576e-d8c5-4c71-97ca-a2ec671e645e-hubble-tls\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.058696 kubelet[2595]: I0712 00:26:34.058671 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cni-path\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.058861 kubelet[2595]: I0712 00:26:34.058835 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-hostproc\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.059015 kubelet[2595]: I0712 00:26:34.058990 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tszkn\" (UniqueName: \"kubernetes.io/projected/59a5576e-d8c5-4c71-97ca-a2ec671e645e-kube-api-access-tszkn\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.059173 kubelet[2595]: I0712 00:26:34.059147 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-xtables-lock\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.059376 kubelet[2595]: I0712 00:26:34.059351 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cilium-config-path\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.059525 kubelet[2595]: I0712 00:26:34.059499 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-host-proc-sys-net\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.059676 kubelet[2595]: I0712 00:26:34.059651 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-host-proc-sys-kernel\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.059922 kubelet[2595]: I0712 00:26:34.059882 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cilium-run\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.060095 kubelet[2595]: I0712 00:26:34.060069 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cilium-cgroup\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.060272 kubelet[2595]: I0712 00:26:34.060236 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59a5576e-d8c5-4c71-97ca-a2ec671e645e-clustermesh-secrets\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.060450 kubelet[2595]: I0712 00:26:34.060424 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-lib-modules\") pod \"cilium-lpbmw\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") " pod="kube-system/cilium-lpbmw"
Jul 12 00:26:34.267314 env[1652]: time="2025-07-12T00:26:34.266890119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lpbmw,Uid:59a5576e-d8c5-4c71-97ca-a2ec671e645e,Namespace:kube-system,Attempt:0,}"
Jul 12 00:26:34.305078 env[1652]: time="2025-07-12T00:26:34.304682120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:26:34.305078 env[1652]: time="2025-07-12T00:26:34.304753023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:26:34.305078 env[1652]: time="2025-07-12T00:26:34.304778478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:26:34.305450 env[1652]: time="2025-07-12T00:26:34.305126307Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb pid=2882 runtime=io.containerd.runc.v2
Jul 12 00:26:34.330255 systemd[1]: Started cri-containerd-9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb.scope.
Jul 12 00:26:34.401368 env[1652]: time="2025-07-12T00:26:34.401298405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lpbmw,Uid:59a5576e-d8c5-4c71-97ca-a2ec671e645e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\""
Jul 12 00:26:34.774791 kubelet[2595]: I0712 00:26:34.774307 2595 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7bb525f-fa5b-430f-981a-53b0f6311998" path="/var/lib/kubelet/pods/d7bb525f-fa5b-430f-981a-53b0f6311998/volumes"
Jul 12 00:26:34.968568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861598519.mount: Deactivated successfully.
Jul 12 00:26:36.062349 env[1652]: time="2025-07-12T00:26:36.062288877Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:36.068874 env[1652]: time="2025-07-12T00:26:36.068825908Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:36.072487 env[1652]: time="2025-07-12T00:26:36.072419239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:36.073712 env[1652]: time="2025-07-12T00:26:36.073664720Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 12 00:26:36.085777 env[1652]: time="2025-07-12T00:26:36.085720410Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 12 00:26:36.090167 env[1652]: time="2025-07-12T00:26:36.089800671Z" level=info msg="CreateContainer within sandbox \"15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 00:26:36.116751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209917078.mount: Deactivated successfully. Jul 12 00:26:36.136710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount366640359.mount: Deactivated successfully. 
Jul 12 00:26:36.137765 env[1652]: time="2025-07-12T00:26:36.137708356Z" level=info msg="CreateContainer within sandbox \"15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\"" Jul 12 00:26:36.139444 env[1652]: time="2025-07-12T00:26:36.139390940Z" level=info msg="StartContainer for \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\"" Jul 12 00:26:36.182370 systemd[1]: Started cri-containerd-af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9.scope. Jul 12 00:26:36.263987 env[1652]: time="2025-07-12T00:26:36.263922362Z" level=info msg="StartContainer for \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\" returns successfully" Jul 12 00:26:43.091910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205729319.mount: Deactivated successfully. Jul 12 00:26:47.175179 env[1652]: time="2025-07-12T00:26:47.175106044Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:47.181842 env[1652]: time="2025-07-12T00:26:47.181779416Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:47.187142 env[1652]: time="2025-07-12T00:26:47.187091573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:47.189282 env[1652]: time="2025-07-12T00:26:47.188181429Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 12 00:26:47.193393 env[1652]: time="2025-07-12T00:26:47.193335128Z" level=info msg="CreateContainer within sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:26:47.216446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2456172283.mount: Deactivated successfully. Jul 12 00:26:47.230788 env[1652]: time="2025-07-12T00:26:47.230703369Z" level=info msg="CreateContainer within sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7\"" Jul 12 00:26:47.234684 env[1652]: time="2025-07-12T00:26:47.234619809Z" level=info msg="StartContainer for \"35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7\"" Jul 12 00:26:47.287604 systemd[1]: Started cri-containerd-35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7.scope. Jul 12 00:26:47.362494 env[1652]: time="2025-07-12T00:26:47.362406144Z" level=info msg="StartContainer for \"35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7\" returns successfully" Jul 12 00:26:47.382536 systemd[1]: cri-containerd-35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7.scope: Deactivated successfully. 
Jul 12 00:26:47.910797 kubelet[2595]: I0712 00:26:47.910701 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xd4b4" podStartSLOduration=12.502703936 podStartE2EDuration="14.910678405s" podCreationTimestamp="2025-07-12 00:26:33 +0000 UTC" firstStartedPulling="2025-07-12 00:26:33.672031317 +0000 UTC m=+5.293172301" lastFinishedPulling="2025-07-12 00:26:36.080005798 +0000 UTC m=+7.701146770" observedRunningTime="2025-07-12 00:26:36.878181682 +0000 UTC m=+8.499322654" watchObservedRunningTime="2025-07-12 00:26:47.910678405 +0000 UTC m=+19.531819377" Jul 12 00:26:48.036942 env[1652]: time="2025-07-12T00:26:48.036718301Z" level=info msg="shim disconnected" id=35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7 Jul 12 00:26:48.036942 env[1652]: time="2025-07-12T00:26:48.036788805Z" level=warning msg="cleaning up after shim disconnected" id=35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7 namespace=k8s.io Jul 12 00:26:48.036942 env[1652]: time="2025-07-12T00:26:48.036814474Z" level=info msg="cleaning up dead shim" Jul 12 00:26:48.057882 env[1652]: time="2025-07-12T00:26:48.057799997Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:26:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3038 runtime=io.containerd.runc.v2\n" Jul 12 00:26:48.209912 systemd[1]: run-containerd-runc-k8s.io-35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7-runc.8Q56ie.mount: Deactivated successfully. Jul 12 00:26:48.210058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7-rootfs.mount: Deactivated successfully. 
Jul 12 00:26:48.895136 env[1652]: time="2025-07-12T00:26:48.894761520Z" level=info msg="CreateContainer within sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:26:48.925360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount526285890.mount: Deactivated successfully. Jul 12 00:26:48.946819 env[1652]: time="2025-07-12T00:26:48.946732466Z" level=info msg="CreateContainer within sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1\"" Jul 12 00:26:48.947885 env[1652]: time="2025-07-12T00:26:48.947815899Z" level=info msg="StartContainer for \"cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1\"" Jul 12 00:26:49.003644 systemd[1]: Started cri-containerd-cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1.scope. Jul 12 00:26:49.106033 env[1652]: time="2025-07-12T00:26:49.105952696Z" level=info msg="StartContainer for \"cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1\" returns successfully" Jul 12 00:26:49.127389 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:26:49.128056 systemd[1]: Stopped systemd-sysctl.service. Jul 12 00:26:49.128484 systemd[1]: Stopping systemd-sysctl.service... Jul 12 00:26:49.135086 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:26:49.137265 systemd[1]: cri-containerd-cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1.scope: Deactivated successfully. Jul 12 00:26:49.154136 systemd[1]: Finished systemd-sysctl.service. 
Jul 12 00:26:49.194024 env[1652]: time="2025-07-12T00:26:49.193960328Z" level=info msg="shim disconnected" id=cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1 Jul 12 00:26:49.194430 env[1652]: time="2025-07-12T00:26:49.194393276Z" level=warning msg="cleaning up after shim disconnected" id=cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1 namespace=k8s.io Jul 12 00:26:49.194555 env[1652]: time="2025-07-12T00:26:49.194526603Z" level=info msg="cleaning up dead shim" Jul 12 00:26:49.209471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1-rootfs.mount: Deactivated successfully. Jul 12 00:26:49.218970 env[1652]: time="2025-07-12T00:26:49.218913317Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:26:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3103 runtime=io.containerd.runc.v2\n" Jul 12 00:26:49.899044 env[1652]: time="2025-07-12T00:26:49.898978160Z" level=info msg="CreateContainer within sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:26:49.936141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2786499481.mount: Deactivated successfully. Jul 12 00:26:49.950239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4268824702.mount: Deactivated successfully. 
Jul 12 00:26:49.957431 env[1652]: time="2025-07-12T00:26:49.957369360Z" level=info msg="CreateContainer within sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae\"" Jul 12 00:26:49.960077 env[1652]: time="2025-07-12T00:26:49.959609654Z" level=info msg="StartContainer for \"3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae\"" Jul 12 00:26:49.993113 systemd[1]: Started cri-containerd-3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae.scope. Jul 12 00:26:50.066521 env[1652]: time="2025-07-12T00:26:50.066443033Z" level=info msg="StartContainer for \"3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae\" returns successfully" Jul 12 00:26:50.072652 systemd[1]: cri-containerd-3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae.scope: Deactivated successfully. Jul 12 00:26:50.121106 env[1652]: time="2025-07-12T00:26:50.121028919Z" level=info msg="shim disconnected" id=3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae Jul 12 00:26:50.121430 env[1652]: time="2025-07-12T00:26:50.121108124Z" level=warning msg="cleaning up after shim disconnected" id=3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae namespace=k8s.io Jul 12 00:26:50.121430 env[1652]: time="2025-07-12T00:26:50.121131693Z" level=info msg="cleaning up dead shim" Jul 12 00:26:50.135482 env[1652]: time="2025-07-12T00:26:50.135408120Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:26:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3162 runtime=io.containerd.runc.v2\n" Jul 12 00:26:50.904082 env[1652]: time="2025-07-12T00:26:50.901291647Z" level=info msg="CreateContainer within sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 
00:26:50.930775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3516772856.mount: Deactivated successfully. Jul 12 00:26:50.952335 env[1652]: time="2025-07-12T00:26:50.952249860Z" level=info msg="CreateContainer within sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2\"" Jul 12 00:26:50.954932 env[1652]: time="2025-07-12T00:26:50.953432823Z" level=info msg="StartContainer for \"b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2\"" Jul 12 00:26:50.990991 systemd[1]: Started cri-containerd-b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2.scope. Jul 12 00:26:51.085571 systemd[1]: cri-containerd-b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2.scope: Deactivated successfully. Jul 12 00:26:51.089422 env[1652]: time="2025-07-12T00:26:51.089356695Z" level=info msg="StartContainer for \"b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2\" returns successfully" Jul 12 00:26:51.130670 env[1652]: time="2025-07-12T00:26:51.130602444Z" level=info msg="shim disconnected" id=b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2 Jul 12 00:26:51.130670 env[1652]: time="2025-07-12T00:26:51.130670379Z" level=warning msg="cleaning up after shim disconnected" id=b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2 namespace=k8s.io Jul 12 00:26:51.130670 env[1652]: time="2025-07-12T00:26:51.130692340Z" level=info msg="cleaning up dead shim" Jul 12 00:26:51.144229 env[1652]: time="2025-07-12T00:26:51.144123219Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:26:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3221 runtime=io.containerd.runc.v2\n" Jul 12 00:26:51.209597 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2-rootfs.mount: Deactivated successfully. Jul 12 00:26:51.911047 env[1652]: time="2025-07-12T00:26:51.910975314Z" level=info msg="CreateContainer within sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:26:51.949644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1797559094.mount: Deactivated successfully. Jul 12 00:26:51.971535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569981763.mount: Deactivated successfully. Jul 12 00:26:51.973145 env[1652]: time="2025-07-12T00:26:51.973064268Z" level=info msg="CreateContainer within sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\"" Jul 12 00:26:51.978946 env[1652]: time="2025-07-12T00:26:51.977641365Z" level=info msg="StartContainer for \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\"" Jul 12 00:26:52.013603 systemd[1]: Started cri-containerd-017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18.scope. Jul 12 00:26:52.103703 env[1652]: time="2025-07-12T00:26:52.103527949Z" level=info msg="StartContainer for \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\" returns successfully" Jul 12 00:26:52.395255 kubelet[2595]: I0712 00:26:52.395174 2595 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 00:26:52.459552 systemd[1]: Created slice kubepods-burstable-podde361cdb_ca1d_47f2_b17b_48b931522bda.slice. Jul 12 00:26:52.468327 systemd[1]: Created slice kubepods-burstable-pod96477595_7509_47e6_84e9_5f93be704d4e.slice. 
Jul 12 00:26:52.516843 kubelet[2595]: I0712 00:26:52.516785 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmtlt\" (UniqueName: \"kubernetes.io/projected/de361cdb-ca1d-47f2-b17b-48b931522bda-kube-api-access-qmtlt\") pod \"coredns-668d6bf9bc-xkw6n\" (UID: \"de361cdb-ca1d-47f2-b17b-48b931522bda\") " pod="kube-system/coredns-668d6bf9bc-xkw6n" Jul 12 00:26:52.517165 kubelet[2595]: I0712 00:26:52.517132 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96477595-7509-47e6-84e9-5f93be704d4e-config-volume\") pod \"coredns-668d6bf9bc-sqtn2\" (UID: \"96477595-7509-47e6-84e9-5f93be704d4e\") " pod="kube-system/coredns-668d6bf9bc-sqtn2" Jul 12 00:26:52.519389 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 12 00:26:52.521688 kubelet[2595]: I0712 00:26:52.521639 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twl65\" (UniqueName: \"kubernetes.io/projected/96477595-7509-47e6-84e9-5f93be704d4e-kube-api-access-twl65\") pod \"coredns-668d6bf9bc-sqtn2\" (UID: \"96477595-7509-47e6-84e9-5f93be704d4e\") " pod="kube-system/coredns-668d6bf9bc-sqtn2" Jul 12 00:26:52.521961 kubelet[2595]: I0712 00:26:52.521930 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de361cdb-ca1d-47f2-b17b-48b931522bda-config-volume\") pod \"coredns-668d6bf9bc-xkw6n\" (UID: \"de361cdb-ca1d-47f2-b17b-48b931522bda\") " pod="kube-system/coredns-668d6bf9bc-xkw6n" Jul 12 00:26:52.767867 env[1652]: time="2025-07-12T00:26:52.767257479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xkw6n,Uid:de361cdb-ca1d-47f2-b17b-48b931522bda,Namespace:kube-system,Attempt:0,}" Jul 12 00:26:52.776722 
env[1652]: time="2025-07-12T00:26:52.776349989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sqtn2,Uid:96477595-7509-47e6-84e9-5f93be704d4e,Namespace:kube-system,Attempt:0,}" Jul 12 00:26:52.941804 kubelet[2595]: I0712 00:26:52.941715 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lpbmw" podStartSLOduration=7.155656333 podStartE2EDuration="19.941693892s" podCreationTimestamp="2025-07-12 00:26:33 +0000 UTC" firstStartedPulling="2025-07-12 00:26:34.404299003 +0000 UTC m=+6.025439975" lastFinishedPulling="2025-07-12 00:26:47.190336574 +0000 UTC m=+18.811477534" observedRunningTime="2025-07-12 00:26:52.939137355 +0000 UTC m=+24.560278339" watchObservedRunningTime="2025-07-12 00:26:52.941693892 +0000 UTC m=+24.562834864" Jul 12 00:26:53.484255 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 12 00:26:55.311851 (udev-worker)[3373]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:26:55.314574 systemd-networkd[1365]: cilium_host: Link UP Jul 12 00:26:55.314836 systemd-networkd[1365]: cilium_net: Link UP Jul 12 00:26:55.316060 systemd-networkd[1365]: cilium_net: Gained carrier Jul 12 00:26:55.318142 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 12 00:26:55.318296 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 12 00:26:55.319586 systemd-networkd[1365]: cilium_host: Gained carrier Jul 12 00:26:55.320580 systemd-networkd[1365]: cilium_net: Gained IPv6LL Jul 12 00:26:55.321330 systemd-networkd[1365]: cilium_host: Gained IPv6LL Jul 12 00:26:55.322179 (udev-worker)[3410]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:26:55.465673 systemd[1]: run-containerd-runc-k8s.io-017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18-runc.QUDGZn.mount: Deactivated successfully. 
Jul 12 00:26:55.601943 (udev-worker)[3417]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:26:55.611277 systemd-networkd[1365]: cilium_vxlan: Link UP Jul 12 00:26:55.611292 systemd-networkd[1365]: cilium_vxlan: Gained carrier Jul 12 00:26:56.144231 kernel: NET: Registered PF_ALG protocol family Jul 12 00:26:57.293553 systemd-networkd[1365]: cilium_vxlan: Gained IPv6LL Jul 12 00:26:57.512083 (udev-worker)[3416]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:26:57.521766 systemd-networkd[1365]: lxc_health: Link UP Jul 12 00:26:57.530365 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 12 00:26:57.532420 systemd-networkd[1365]: lxc_health: Gained carrier Jul 12 00:26:57.928183 systemd-networkd[1365]: lxc2aee1707d433: Link UP Jul 12 00:26:57.930505 systemd-networkd[1365]: lxc70d6f0b9dd21: Link UP Jul 12 00:26:57.947334 kernel: eth0: renamed from tmp70feb Jul 12 00:26:57.952366 kernel: eth0: renamed from tmp317b7 Jul 12 00:26:57.957577 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc70d6f0b9dd21: link becomes ready Jul 12 00:26:57.957079 systemd-networkd[1365]: lxc70d6f0b9dd21: Gained carrier Jul 12 00:26:57.963342 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2aee1707d433: link becomes ready Jul 12 00:26:57.963683 systemd-networkd[1365]: lxc2aee1707d433: Gained carrier Jul 12 00:26:58.766000 systemd-networkd[1365]: lxc_health: Gained IPv6LL Jul 12 00:26:59.470961 systemd-networkd[1365]: lxc70d6f0b9dd21: Gained IPv6LL Jul 12 00:26:59.662907 systemd-networkd[1365]: lxc2aee1707d433: Gained IPv6LL Jul 12 00:27:00.054569 systemd[1]: run-containerd-runc-k8s.io-017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18-runc.9N6LvK.mount: Deactivated successfully. Jul 12 00:27:04.550930 systemd[1]: run-containerd-runc-k8s.io-017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18-runc.CooGpi.mount: Deactivated successfully. 
Jul 12 00:27:04.932379 sudo[1880]: pam_unix(sudo:session): session closed for user root Jul 12 00:27:04.956882 sshd[1877]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:04.962752 systemd[1]: sshd@4-172.31.19.35:22-147.75.109.163:47176.service: Deactivated successfully. Jul 12 00:27:04.964041 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:27:04.965911 systemd[1]: session-5.scope: Consumed 10.990s CPU time. Jul 12 00:27:04.968469 systemd-logind[1638]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:27:04.971152 systemd-logind[1638]: Removed session 5. Jul 12 00:27:06.618104 env[1652]: time="2025-07-12T00:27:06.617957479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:06.618104 env[1652]: time="2025-07-12T00:27:06.618038590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:06.618913 env[1652]: time="2025-07-12T00:27:06.618097525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:06.618913 env[1652]: time="2025-07-12T00:27:06.618447506Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/317b790ce06165190868becad2fa096f16e4c1330d1d31c613881c698f290b97 pid=3904 runtime=io.containerd.runc.v2 Jul 12 00:27:06.668434 systemd[1]: run-containerd-runc-k8s.io-317b790ce06165190868becad2fa096f16e4c1330d1d31c613881c698f290b97-runc.ItR2Q3.mount: Deactivated successfully. Jul 12 00:27:06.687814 systemd[1]: Started cri-containerd-317b790ce06165190868becad2fa096f16e4c1330d1d31c613881c698f290b97.scope. Jul 12 00:27:06.730328 env[1652]: time="2025-07-12T00:27:06.730161997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:06.730520 env[1652]: time="2025-07-12T00:27:06.730355301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:06.730520 env[1652]: time="2025-07-12T00:27:06.730448340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:06.731012 env[1652]: time="2025-07-12T00:27:06.730907010Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70feb4ad6998af351d334004946c0e172956d0301bcded811c4d53951303ab83 pid=3941 runtime=io.containerd.runc.v2 Jul 12 00:27:06.775117 systemd[1]: Started cri-containerd-70feb4ad6998af351d334004946c0e172956d0301bcded811c4d53951303ab83.scope. Jul 12 00:27:06.849749 env[1652]: time="2025-07-12T00:27:06.849690239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xkw6n,Uid:de361cdb-ca1d-47f2-b17b-48b931522bda,Namespace:kube-system,Attempt:0,} returns sandbox id \"317b790ce06165190868becad2fa096f16e4c1330d1d31c613881c698f290b97\"" Jul 12 00:27:06.858467 env[1652]: time="2025-07-12T00:27:06.858371058Z" level=info msg="CreateContainer within sandbox \"317b790ce06165190868becad2fa096f16e4c1330d1d31c613881c698f290b97\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:27:06.891301 env[1652]: time="2025-07-12T00:27:06.890963856Z" level=info msg="CreateContainer within sandbox \"317b790ce06165190868becad2fa096f16e4c1330d1d31c613881c698f290b97\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3af22a8150e6bf80eb18ef8b76ada5852570541a0a6c816affba5c419f89a65f\"" Jul 12 00:27:06.893619 env[1652]: time="2025-07-12T00:27:06.893549523Z" level=info msg="StartContainer for \"3af22a8150e6bf80eb18ef8b76ada5852570541a0a6c816affba5c419f89a65f\"" Jul 12 00:27:06.937173 systemd[1]: Started 
cri-containerd-3af22a8150e6bf80eb18ef8b76ada5852570541a0a6c816affba5c419f89a65f.scope. Jul 12 00:27:07.019658 env[1652]: time="2025-07-12T00:27:07.019589275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sqtn2,Uid:96477595-7509-47e6-84e9-5f93be704d4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"70feb4ad6998af351d334004946c0e172956d0301bcded811c4d53951303ab83\"" Jul 12 00:27:07.026481 env[1652]: time="2025-07-12T00:27:07.025301910Z" level=info msg="CreateContainer within sandbox \"70feb4ad6998af351d334004946c0e172956d0301bcded811c4d53951303ab83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:27:07.061735 env[1652]: time="2025-07-12T00:27:07.061655969Z" level=info msg="CreateContainer within sandbox \"70feb4ad6998af351d334004946c0e172956d0301bcded811c4d53951303ab83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c7df7fa3fb5327d838927888e51dc9cc85395dcab12de004a45d8115b15adfc2\"" Jul 12 00:27:07.065432 env[1652]: time="2025-07-12T00:27:07.065357688Z" level=info msg="StartContainer for \"c7df7fa3fb5327d838927888e51dc9cc85395dcab12de004a45d8115b15adfc2\"" Jul 12 00:27:07.090868 env[1652]: time="2025-07-12T00:27:07.090792381Z" level=info msg="StartContainer for \"3af22a8150e6bf80eb18ef8b76ada5852570541a0a6c816affba5c419f89a65f\" returns successfully" Jul 12 00:27:07.136854 systemd[1]: Started cri-containerd-c7df7fa3fb5327d838927888e51dc9cc85395dcab12de004a45d8115b15adfc2.scope. 
Jul 12 00:27:07.228549 env[1652]: time="2025-07-12T00:27:07.228471747Z" level=info msg="StartContainer for \"c7df7fa3fb5327d838927888e51dc9cc85395dcab12de004a45d8115b15adfc2\" returns successfully" Jul 12 00:27:07.981241 kubelet[2595]: I0712 00:27:07.979848 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xkw6n" podStartSLOduration=34.979826844 podStartE2EDuration="34.979826844s" podCreationTimestamp="2025-07-12 00:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:27:07.977849901 +0000 UTC m=+39.598990909" watchObservedRunningTime="2025-07-12 00:27:07.979826844 +0000 UTC m=+39.600967816" Jul 12 00:27:45.744240 systemd[1]: Started sshd@5-172.31.19.35:22-147.75.109.163:43710.service. Jul 12 00:27:45.913340 sshd[4074]: Accepted publickey for core from 147.75.109.163 port 43710 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:45.915186 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:45.924599 systemd-logind[1638]: New session 6 of user core. Jul 12 00:27:45.924672 systemd[1]: Started session-6.scope. Jul 12 00:27:46.202326 sshd[4074]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:46.207513 systemd[1]: sshd@5-172.31.19.35:22-147.75.109.163:43710.service: Deactivated successfully. Jul 12 00:27:46.208869 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:27:46.210355 systemd-logind[1638]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:27:46.212478 systemd-logind[1638]: Removed session 6. Jul 12 00:27:51.233645 systemd[1]: Started sshd@6-172.31.19.35:22-147.75.109.163:35362.service. 
Jul 12 00:27:51.408954 sshd[4087]: Accepted publickey for core from 147.75.109.163 port 35362 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:27:51.412252 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:27:51.420369 systemd-logind[1638]: New session 7 of user core.
Jul 12 00:27:51.421066 systemd[1]: Started session-7.scope.
Jul 12 00:27:51.675229 sshd[4087]: pam_unix(sshd:session): session closed for user core
Jul 12 00:27:51.680153 systemd-logind[1638]: Session 7 logged out. Waiting for processes to exit.
Jul 12 00:27:51.680803 systemd[1]: sshd@6-172.31.19.35:22-147.75.109.163:35362.service: Deactivated successfully.
Jul 12 00:27:51.682143 systemd[1]: session-7.scope: Deactivated successfully.
Jul 12 00:27:51.683910 systemd-logind[1638]: Removed session 7.
Jul 12 00:27:56.707366 systemd[1]: Started sshd@7-172.31.19.35:22-147.75.109.163:34898.service.
Jul 12 00:27:56.882335 sshd[4100]: Accepted publickey for core from 147.75.109.163 port 34898 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:27:56.884995 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:27:56.893618 systemd-logind[1638]: New session 8 of user core.
Jul 12 00:27:56.894265 systemd[1]: Started session-8.scope.
Jul 12 00:27:57.157875 sshd[4100]: pam_unix(sshd:session): session closed for user core
Jul 12 00:27:57.163531 systemd[1]: sshd@7-172.31.19.35:22-147.75.109.163:34898.service: Deactivated successfully.
Jul 12 00:27:57.165549 systemd[1]: session-8.scope: Deactivated successfully.
Jul 12 00:27:57.167178 systemd-logind[1638]: Session 8 logged out. Waiting for processes to exit.
Jul 12 00:27:57.169335 systemd-logind[1638]: Removed session 8.
Jul 12 00:28:02.186139 systemd[1]: Started sshd@8-172.31.19.35:22-147.75.109.163:34906.service.
Jul 12 00:28:02.356536 sshd[4112]: Accepted publickey for core from 147.75.109.163 port 34906 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:02.359101 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:02.368321 systemd-logind[1638]: New session 9 of user core.
Jul 12 00:28:02.369262 systemd[1]: Started session-9.scope.
Jul 12 00:28:02.627697 sshd[4112]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:02.634248 systemd[1]: session-9.scope: Deactivated successfully.
Jul 12 00:28:02.635505 systemd-logind[1638]: Session 9 logged out. Waiting for processes to exit.
Jul 12 00:28:02.635798 systemd[1]: sshd@8-172.31.19.35:22-147.75.109.163:34906.service: Deactivated successfully.
Jul 12 00:28:02.638733 systemd-logind[1638]: Removed session 9.
Jul 12 00:28:07.656522 systemd[1]: Started sshd@9-172.31.19.35:22-147.75.109.163:55950.service.
Jul 12 00:28:07.826156 sshd[4129]: Accepted publickey for core from 147.75.109.163 port 55950 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:07.830035 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:07.840962 systemd[1]: Started session-10.scope.
Jul 12 00:28:07.843702 systemd-logind[1638]: New session 10 of user core.
Jul 12 00:28:08.107733 sshd[4129]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:08.114120 systemd-logind[1638]: Session 10 logged out. Waiting for processes to exit.
Jul 12 00:28:08.114735 systemd[1]: sshd@9-172.31.19.35:22-147.75.109.163:55950.service: Deactivated successfully.
Jul 12 00:28:08.116013 systemd[1]: session-10.scope: Deactivated successfully.
Jul 12 00:28:08.118743 systemd-logind[1638]: Removed session 10.
Jul 12 00:28:08.138887 systemd[1]: Started sshd@10-172.31.19.35:22-147.75.109.163:55966.service.
Jul 12 00:28:08.325446 sshd[4141]: Accepted publickey for core from 147.75.109.163 port 55966 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:08.328502 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:08.337294 systemd[1]: Started session-11.scope.
Jul 12 00:28:08.338330 systemd-logind[1638]: New session 11 of user core.
Jul 12 00:28:08.667525 sshd[4141]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:08.673792 systemd[1]: sshd@10-172.31.19.35:22-147.75.109.163:55966.service: Deactivated successfully.
Jul 12 00:28:08.675110 systemd[1]: session-11.scope: Deactivated successfully.
Jul 12 00:28:08.677704 systemd-logind[1638]: Session 11 logged out. Waiting for processes to exit.
Jul 12 00:28:08.680855 systemd-logind[1638]: Removed session 11.
Jul 12 00:28:08.703488 systemd[1]: Started sshd@11-172.31.19.35:22-147.75.109.163:55978.service.
Jul 12 00:28:08.876273 sshd[4151]: Accepted publickey for core from 147.75.109.163 port 55978 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:08.878901 sshd[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:08.888393 systemd[1]: Started session-12.scope.
Jul 12 00:28:08.889482 systemd-logind[1638]: New session 12 of user core.
Jul 12 00:28:09.139385 sshd[4151]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:09.144486 systemd-logind[1638]: Session 12 logged out. Waiting for processes to exit.
Jul 12 00:28:09.147025 systemd[1]: sshd@11-172.31.19.35:22-147.75.109.163:55978.service: Deactivated successfully.
Jul 12 00:28:09.148588 systemd[1]: session-12.scope: Deactivated successfully.
Jul 12 00:28:09.149977 systemd-logind[1638]: Removed session 12.
Jul 12 00:28:14.171759 systemd[1]: Started sshd@12-172.31.19.35:22-147.75.109.163:55988.service.
Jul 12 00:28:14.345154 sshd[4163]: Accepted publickey for core from 147.75.109.163 port 55988 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:14.348289 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:14.356115 systemd-logind[1638]: New session 13 of user core.
Jul 12 00:28:14.357226 systemd[1]: Started session-13.scope.
Jul 12 00:28:14.607861 sshd[4163]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:14.612616 systemd-logind[1638]: Session 13 logged out. Waiting for processes to exit.
Jul 12 00:28:14.613278 systemd[1]: sshd@12-172.31.19.35:22-147.75.109.163:55988.service: Deactivated successfully.
Jul 12 00:28:14.614611 systemd[1]: session-13.scope: Deactivated successfully.
Jul 12 00:28:14.616463 systemd-logind[1638]: Removed session 13.
Jul 12 00:28:19.638132 systemd[1]: Started sshd@13-172.31.19.35:22-147.75.109.163:45994.service.
Jul 12 00:28:19.821357 sshd[4175]: Accepted publickey for core from 147.75.109.163 port 45994 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:19.823740 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:19.832258 systemd[1]: Started session-14.scope.
Jul 12 00:28:19.835057 systemd-logind[1638]: New session 14 of user core.
Jul 12 00:28:20.095140 sshd[4175]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:20.099478 systemd[1]: session-14.scope: Deactivated successfully.
Jul 12 00:28:20.100646 systemd[1]: sshd@13-172.31.19.35:22-147.75.109.163:45994.service: Deactivated successfully.
Jul 12 00:28:20.102922 systemd-logind[1638]: Session 14 logged out. Waiting for processes to exit.
Jul 12 00:28:20.104904 systemd-logind[1638]: Removed session 14.
Jul 12 00:28:25.122510 systemd[1]: Started sshd@14-172.31.19.35:22-147.75.109.163:46004.service.
Jul 12 00:28:25.293973 sshd[4187]: Accepted publickey for core from 147.75.109.163 port 46004 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:25.296704 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:25.305066 systemd-logind[1638]: New session 15 of user core.
Jul 12 00:28:25.306114 systemd[1]: Started session-15.scope.
Jul 12 00:28:25.555373 sshd[4187]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:25.560700 systemd[1]: sshd@14-172.31.19.35:22-147.75.109.163:46004.service: Deactivated successfully.
Jul 12 00:28:25.561574 systemd-logind[1638]: Session 15 logged out. Waiting for processes to exit.
Jul 12 00:28:25.561982 systemd[1]: session-15.scope: Deactivated successfully.
Jul 12 00:28:25.564743 systemd-logind[1638]: Removed session 15.
Jul 12 00:28:25.585602 systemd[1]: Started sshd@15-172.31.19.35:22-147.75.109.163:46012.service.
Jul 12 00:28:25.760021 sshd[4199]: Accepted publickey for core from 147.75.109.163 port 46012 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:25.763539 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:25.771717 systemd-logind[1638]: New session 16 of user core.
Jul 12 00:28:25.772730 systemd[1]: Started session-16.scope.
Jul 12 00:28:26.111181 sshd[4199]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:26.115555 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 00:28:26.116715 systemd[1]: sshd@15-172.31.19.35:22-147.75.109.163:46012.service: Deactivated successfully.
Jul 12 00:28:26.118488 systemd-logind[1638]: Session 16 logged out. Waiting for processes to exit.
Jul 12 00:28:26.121532 systemd-logind[1638]: Removed session 16.
Jul 12 00:28:26.141163 systemd[1]: Started sshd@16-172.31.19.35:22-147.75.109.163:34942.service.
Jul 12 00:28:26.312454 sshd[4209]: Accepted publickey for core from 147.75.109.163 port 34942 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:26.314994 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:26.323312 systemd-logind[1638]: New session 17 of user core.
Jul 12 00:28:26.324380 systemd[1]: Started session-17.scope.
Jul 12 00:28:27.683805 sshd[4209]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:27.691019 systemd[1]: sshd@16-172.31.19.35:22-147.75.109.163:34942.service: Deactivated successfully.
Jul 12 00:28:27.692358 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 00:28:27.693402 systemd-logind[1638]: Session 17 logged out. Waiting for processes to exit.
Jul 12 00:28:27.695257 systemd-logind[1638]: Removed session 17.
Jul 12 00:28:27.716957 systemd[1]: Started sshd@17-172.31.19.35:22-147.75.109.163:34948.service.
Jul 12 00:28:27.894840 sshd[4225]: Accepted publickey for core from 147.75.109.163 port 34948 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:27.897476 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:27.906966 systemd-logind[1638]: New session 18 of user core.
Jul 12 00:28:27.908398 systemd[1]: Started session-18.scope.
Jul 12 00:28:28.431892 sshd[4225]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:28.437471 systemd[1]: sshd@17-172.31.19.35:22-147.75.109.163:34948.service: Deactivated successfully.
Jul 12 00:28:28.438827 systemd[1]: session-18.scope: Deactivated successfully.
Jul 12 00:28:28.440444 systemd-logind[1638]: Session 18 logged out. Waiting for processes to exit.
Jul 12 00:28:28.442955 systemd-logind[1638]: Removed session 18.
Jul 12 00:28:28.460852 systemd[1]: Started sshd@18-172.31.19.35:22-147.75.109.163:34956.service.
Jul 12 00:28:28.629840 sshd[4236]: Accepted publickey for core from 147.75.109.163 port 34956 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:28.633323 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:28.642065 systemd[1]: Started session-19.scope.
Jul 12 00:28:28.643348 systemd-logind[1638]: New session 19 of user core.
Jul 12 00:28:28.905651 sshd[4236]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:28.913441 systemd[1]: sshd@18-172.31.19.35:22-147.75.109.163:34956.service: Deactivated successfully.
Jul 12 00:28:28.914788 systemd[1]: session-19.scope: Deactivated successfully.
Jul 12 00:28:28.917054 systemd-logind[1638]: Session 19 logged out. Waiting for processes to exit.
Jul 12 00:28:28.919410 systemd-logind[1638]: Removed session 19.
Jul 12 00:28:33.934180 systemd[1]: Started sshd@19-172.31.19.35:22-147.75.109.163:34962.service.
Jul 12 00:28:34.107576 sshd[4250]: Accepted publickey for core from 147.75.109.163 port 34962 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:34.110128 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:34.119455 systemd[1]: Started session-20.scope.
Jul 12 00:28:34.120648 systemd-logind[1638]: New session 20 of user core.
Jul 12 00:28:34.372848 sshd[4250]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:34.377784 systemd[1]: sshd@19-172.31.19.35:22-147.75.109.163:34962.service: Deactivated successfully.
Jul 12 00:28:34.379012 systemd[1]: session-20.scope: Deactivated successfully.
Jul 12 00:28:34.380732 systemd-logind[1638]: Session 20 logged out. Waiting for processes to exit.
Jul 12 00:28:34.382766 systemd-logind[1638]: Removed session 20.
Jul 12 00:28:39.402841 systemd[1]: Started sshd@20-172.31.19.35:22-147.75.109.163:55498.service.
Jul 12 00:28:39.579160 sshd[4266]: Accepted publickey for core from 147.75.109.163 port 55498 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:39.589120 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:39.598986 systemd[1]: Started session-21.scope.
Jul 12 00:28:39.601740 systemd-logind[1638]: New session 21 of user core.
Jul 12 00:28:39.841018 sshd[4266]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:39.846999 systemd[1]: sshd@20-172.31.19.35:22-147.75.109.163:55498.service: Deactivated successfully.
Jul 12 00:28:39.848352 systemd[1]: session-21.scope: Deactivated successfully.
Jul 12 00:28:39.848403 systemd-logind[1638]: Session 21 logged out. Waiting for processes to exit.
Jul 12 00:28:39.850528 systemd-logind[1638]: Removed session 21.
Jul 12 00:28:44.870392 systemd[1]: Started sshd@21-172.31.19.35:22-147.75.109.163:55504.service.
Jul 12 00:28:45.044998 sshd[4278]: Accepted publickey for core from 147.75.109.163 port 55504 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:45.048331 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:45.057068 systemd[1]: Started session-22.scope.
Jul 12 00:28:45.057943 systemd-logind[1638]: New session 22 of user core.
Jul 12 00:28:45.310155 sshd[4278]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:45.315436 systemd-logind[1638]: Session 22 logged out. Waiting for processes to exit.
Jul 12 00:28:45.316995 systemd[1]: sshd@21-172.31.19.35:22-147.75.109.163:55504.service: Deactivated successfully.
Jul 12 00:28:45.318250 systemd[1]: session-22.scope: Deactivated successfully.
Jul 12 00:28:45.319573 systemd-logind[1638]: Removed session 22.
Jul 12 00:28:50.339392 systemd[1]: Started sshd@22-172.31.19.35:22-147.75.109.163:37398.service.
Jul 12 00:28:50.508065 sshd[4290]: Accepted publickey for core from 147.75.109.163 port 37398 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:50.511125 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:50.518964 systemd-logind[1638]: New session 23 of user core.
Jul 12 00:28:50.519997 systemd[1]: Started session-23.scope.
Jul 12 00:28:50.773174 sshd[4290]: pam_unix(sshd:session): session closed for user core
Jul 12 00:28:50.778355 systemd[1]: session-23.scope: Deactivated successfully.
Jul 12 00:28:50.779817 systemd-logind[1638]: Session 23 logged out. Waiting for processes to exit.
Jul 12 00:28:50.780137 systemd[1]: sshd@22-172.31.19.35:22-147.75.109.163:37398.service: Deactivated successfully.
Jul 12 00:28:50.782915 systemd-logind[1638]: Removed session 23.
Jul 12 00:28:50.805262 systemd[1]: Started sshd@23-172.31.19.35:22-147.75.109.163:37406.service.
Jul 12 00:28:50.983470 sshd[4302]: Accepted publickey for core from 147.75.109.163 port 37406 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:28:50.986035 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:28:50.995305 systemd[1]: Started session-24.scope.
Jul 12 00:28:50.995306 systemd-logind[1638]: New session 24 of user core.
Jul 12 00:28:53.272285 kubelet[2595]: I0712 00:28:53.272174 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sqtn2" podStartSLOduration=140.272127293 podStartE2EDuration="2m20.272127293s" podCreationTimestamp="2025-07-12 00:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:27:08.050774772 +0000 UTC m=+39.671915780" watchObservedRunningTime="2025-07-12 00:28:53.272127293 +0000 UTC m=+144.893268277"
Jul 12 00:28:53.311002 env[1652]: time="2025-07-12T00:28:53.310925080Z" level=info msg="StopContainer for \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\" with timeout 30 (s)"
Jul 12 00:28:53.312187 env[1652]: time="2025-07-12T00:28:53.312096452Z" level=info msg="Stop container \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\" with signal terminated"
Jul 12 00:28:53.348539 systemd[1]: cri-containerd-af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9.scope: Deactivated successfully.
Jul 12 00:28:53.410892 env[1652]: time="2025-07-12T00:28:53.410663707Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:28:53.412477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9-rootfs.mount: Deactivated successfully.
Jul 12 00:28:53.428105 env[1652]: time="2025-07-12T00:28:53.428043786Z" level=info msg="StopContainer for \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\" with timeout 2 (s)"
Jul 12 00:28:53.429123 env[1652]: time="2025-07-12T00:28:53.428738999Z" level=info msg="Stop container \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\" with signal terminated"
Jul 12 00:28:53.431292 env[1652]: time="2025-07-12T00:28:53.431171253Z" level=info msg="shim disconnected" id=af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9
Jul 12 00:28:53.431292 env[1652]: time="2025-07-12T00:28:53.431286612Z" level=warning msg="cleaning up after shim disconnected" id=af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9 namespace=k8s.io
Jul 12 00:28:53.431547 env[1652]: time="2025-07-12T00:28:53.431310169Z" level=info msg="cleaning up dead shim"
Jul 12 00:28:53.441635 systemd-networkd[1365]: lxc_health: Link DOWN
Jul 12 00:28:53.441649 systemd-networkd[1365]: lxc_health: Lost carrier
Jul 12 00:28:53.466866 systemd[1]: cri-containerd-017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18.scope: Deactivated successfully.
Jul 12 00:28:53.467545 systemd[1]: cri-containerd-017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18.scope: Consumed 15.139s CPU time.
Jul 12 00:28:53.470121 env[1652]: time="2025-07-12T00:28:53.470039218Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4350 runtime=io.containerd.runc.v2\n"
Jul 12 00:28:53.478426 env[1652]: time="2025-07-12T00:28:53.478363722Z" level=info msg="StopContainer for \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\" returns successfully"
Jul 12 00:28:53.479711 env[1652]: time="2025-07-12T00:28:53.479654809Z" level=info msg="StopPodSandbox for \"15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e\""
Jul 12 00:28:53.480315 env[1652]: time="2025-07-12T00:28:53.480238744Z" level=info msg="Container to stop \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:28:53.487609 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e-shm.mount: Deactivated successfully.
Jul 12 00:28:53.505783 systemd[1]: cri-containerd-15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e.scope: Deactivated successfully.
Jul 12 00:28:53.515689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18-rootfs.mount: Deactivated successfully.
Jul 12 00:28:53.527440 env[1652]: time="2025-07-12T00:28:53.526413564Z" level=info msg="shim disconnected" id=017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18
Jul 12 00:28:53.527811 env[1652]: time="2025-07-12T00:28:53.527756252Z" level=warning msg="cleaning up after shim disconnected" id=017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18 namespace=k8s.io
Jul 12 00:28:53.527955 env[1652]: time="2025-07-12T00:28:53.527926465Z" level=info msg="cleaning up dead shim"
Jul 12 00:28:53.547433 env[1652]: time="2025-07-12T00:28:53.547373161Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4395 runtime=io.containerd.runc.v2\n"
Jul 12 00:28:53.551973 env[1652]: time="2025-07-12T00:28:53.551917335Z" level=info msg="StopContainer for \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\" returns successfully"
Jul 12 00:28:53.553239 env[1652]: time="2025-07-12T00:28:53.553162629Z" level=info msg="StopPodSandbox for \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\""
Jul 12 00:28:53.553575 env[1652]: time="2025-07-12T00:28:53.553532778Z" level=info msg="Container to stop \"35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:28:53.553737 env[1652]: time="2025-07-12T00:28:53.553700842Z" level=info msg="Container to stop \"cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:28:53.554018 env[1652]: time="2025-07-12T00:28:53.553904511Z" level=info msg="Container to stop \"3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:28:53.554216 env[1652]: time="2025-07-12T00:28:53.554166165Z" level=info msg="Container to stop \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:28:53.554487 env[1652]: time="2025-07-12T00:28:53.554449096Z" level=info msg="Container to stop \"b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:28:53.558366 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb-shm.mount: Deactivated successfully.
Jul 12 00:28:53.573540 systemd[1]: cri-containerd-9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb.scope: Deactivated successfully.
Jul 12 00:28:53.577022 env[1652]: time="2025-07-12T00:28:53.576139270Z" level=info msg="shim disconnected" id=15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e
Jul 12 00:28:53.577516 env[1652]: time="2025-07-12T00:28:53.577455402Z" level=warning msg="cleaning up after shim disconnected" id=15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e namespace=k8s.io
Jul 12 00:28:53.577852 env[1652]: time="2025-07-12T00:28:53.577820535Z" level=info msg="cleaning up dead shim"
Jul 12 00:28:53.606454 env[1652]: time="2025-07-12T00:28:53.606401611Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4421 runtime=io.containerd.runc.v2\n"
Jul 12 00:28:53.607289 env[1652]: time="2025-07-12T00:28:53.607190847Z" level=info msg="TearDown network for sandbox \"15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e\" successfully"
Jul 12 00:28:53.607459 env[1652]: time="2025-07-12T00:28:53.607423316Z" level=info msg="StopPodSandbox for \"15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e\" returns successfully"
Jul 12 00:28:53.643471 env[1652]: time="2025-07-12T00:28:53.643404899Z" level=info msg="shim disconnected" id=9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb
Jul 12 00:28:53.643856 env[1652]: time="2025-07-12T00:28:53.643805589Z" level=warning msg="cleaning up after shim disconnected" id=9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb namespace=k8s.io
Jul 12 00:28:53.643992 env[1652]: time="2025-07-12T00:28:53.643964809Z" level=info msg="cleaning up dead shim"
Jul 12 00:28:53.659139 env[1652]: time="2025-07-12T00:28:53.659083589Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4448 runtime=io.containerd.runc.v2\n"
Jul 12 00:28:53.660352 env[1652]: time="2025-07-12T00:28:53.660304342Z" level=info msg="TearDown network for sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" successfully"
Jul 12 00:28:53.660537 env[1652]: time="2025-07-12T00:28:53.660501831Z" level=info msg="StopPodSandbox for \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" returns successfully"
Jul 12 00:28:53.669019 kubelet[2595]: I0712 00:28:53.668971 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07b2d995-5de9-4d86-bedd-ee2e93752809-cilium-config-path\") pod \"07b2d995-5de9-4d86-bedd-ee2e93752809\" (UID: \"07b2d995-5de9-4d86-bedd-ee2e93752809\") "
Jul 12 00:28:53.669479 kubelet[2595]: I0712 00:28:53.669450 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvpvb\" (UniqueName: \"kubernetes.io/projected/07b2d995-5de9-4d86-bedd-ee2e93752809-kube-api-access-fvpvb\") pod \"07b2d995-5de9-4d86-bedd-ee2e93752809\" (UID: \"07b2d995-5de9-4d86-bedd-ee2e93752809\") "
Jul 12 00:28:53.674576 kubelet[2595]: I0712 00:28:53.674465 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07b2d995-5de9-4d86-bedd-ee2e93752809-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "07b2d995-5de9-4d86-bedd-ee2e93752809" (UID: "07b2d995-5de9-4d86-bedd-ee2e93752809"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 12 00:28:53.681399 kubelet[2595]: I0712 00:28:53.681338 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b2d995-5de9-4d86-bedd-ee2e93752809-kube-api-access-fvpvb" (OuterVolumeSpecName: "kube-api-access-fvpvb") pod "07b2d995-5de9-4d86-bedd-ee2e93752809" (UID: "07b2d995-5de9-4d86-bedd-ee2e93752809"). InnerVolumeSpecName "kube-api-access-fvpvb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 12 00:28:53.770680 kubelet[2595]: I0712 00:28:53.770622 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-xtables-lock\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.770977 kubelet[2595]: I0712 00:28:53.770927 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-host-proc-sys-kernel\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.771179 kubelet[2595]: I0712 00:28:53.771135 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cilium-config-path\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.771391 kubelet[2595]: I0712 00:28:53.771364 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cilium-run\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.771573 kubelet[2595]: I0712 00:28:53.771547 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-bpf-maps\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.771775 kubelet[2595]: I0712 00:28:53.771734 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tszkn\" (UniqueName: \"kubernetes.io/projected/59a5576e-d8c5-4c71-97ca-a2ec671e645e-kube-api-access-tszkn\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.771933 kubelet[2595]: I0712 00:28:53.771906 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cilium-cgroup\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.772110 kubelet[2595]: I0712 00:28:53.772084 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-lib-modules\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.772333 kubelet[2595]: I0712 00:28:53.772305 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-etc-cni-netd\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.772536 kubelet[2595]: I0712 00:28:53.772488 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cni-path\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.772695 kubelet[2595]: I0712 00:28:53.772670 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-hostproc\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.772862 kubelet[2595]: I0712 00:28:53.772837 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-host-proc-sys-net\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.773033 kubelet[2595]: I0712 00:28:53.773008 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59a5576e-d8c5-4c71-97ca-a2ec671e645e-clustermesh-secrets\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.773253 kubelet[2595]: I0712 00:28:53.773226 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59a5576e-d8c5-4c71-97ca-a2ec671e645e-hubble-tls\") pod \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\" (UID: \"59a5576e-d8c5-4c71-97ca-a2ec671e645e\") "
Jul 12 00:28:53.773532 kubelet[2595]: I0712 00:28:53.773503 2595 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fvpvb\" (UniqueName: \"kubernetes.io/projected/07b2d995-5de9-4d86-bedd-ee2e93752809-kube-api-access-fvpvb\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:28:53.773698 kubelet[2595]: I0712 00:28:53.773674 2595 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07b2d995-5de9-4d86-bedd-ee2e93752809-cilium-config-path\") on node \"ip-172-31-19-35\" DevicePath \"\""
Jul 12 00:28:53.774619 kubelet[2595]: I0712 00:28:53.774570 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:28:53.774931 kubelet[2595]: I0712 00:28:53.774900 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:28:53.775151 kubelet[2595]: I0712 00:28:53.775123 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:28:53.775572 kubelet[2595]: I0712 00:28:53.775528 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:28:53.775759 kubelet[2595]: I0712 00:28:53.775730 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:28:53.775924 kubelet[2595]: I0712 00:28:53.775894 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cni-path" (OuterVolumeSpecName: "cni-path") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:28:53.776087 kubelet[2595]: I0712 00:28:53.776059 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-hostproc" (OuterVolumeSpecName: "hostproc") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 12 00:28:53.776267 kubelet[2595]: I0712 00:28:53.776237 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "host-proc-sys-net".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:28:53.776969 kubelet[2595]: I0712 00:28:53.776900 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:28:53.782364 kubelet[2595]: I0712 00:28:53.777180 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:28:53.785379 kubelet[2595]: I0712 00:28:53.785312 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a5576e-d8c5-4c71-97ca-a2ec671e645e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:28:53.795358 kubelet[2595]: I0712 00:28:53.795302 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a5576e-d8c5-4c71-97ca-a2ec671e645e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:28:53.797347 kubelet[2595]: I0712 00:28:53.796513 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a5576e-d8c5-4c71-97ca-a2ec671e645e-kube-api-access-tszkn" (OuterVolumeSpecName: "kube-api-access-tszkn") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "kube-api-access-tszkn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:28:53.800261 kubelet[2595]: I0712 00:28:53.800177 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "59a5576e-d8c5-4c71-97ca-a2ec671e645e" (UID: "59a5576e-d8c5-4c71-97ca-a2ec671e645e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:28:53.874424 kubelet[2595]: I0712 00:28:53.874382 2595 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cilium-run\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.874681 kubelet[2595]: I0712 00:28:53.874651 2595 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cilium-config-path\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.874813 kubelet[2595]: I0712 00:28:53.874790 2595 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cilium-cgroup\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.874946 kubelet[2595]: I0712 00:28:53.874923 2595 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-bpf-maps\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.875075 kubelet[2595]: I0712 00:28:53.875052 2595 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tszkn\" (UniqueName: \"kubernetes.io/projected/59a5576e-d8c5-4c71-97ca-a2ec671e645e-kube-api-access-tszkn\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.875229 kubelet[2595]: I0712 00:28:53.875180 2595 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-etc-cni-netd\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.875374 kubelet[2595]: I0712 00:28:53.875350 2595 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-lib-modules\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.875545 kubelet[2595]: I0712 00:28:53.875521 2595 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-cni-path\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.875677 kubelet[2595]: I0712 00:28:53.875654 2595 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59a5576e-d8c5-4c71-97ca-a2ec671e645e-hubble-tls\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.875894 kubelet[2595]: I0712 00:28:53.875870 2595 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-hostproc\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.876041 kubelet[2595]: I0712 00:28:53.876009 2595 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-host-proc-sys-net\") on node 
\"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.876212 kubelet[2595]: I0712 00:28:53.876171 2595 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59a5576e-d8c5-4c71-97ca-a2ec671e645e-clustermesh-secrets\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.876400 kubelet[2595]: I0712 00:28:53.876358 2595 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-xtables-lock\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.876549 kubelet[2595]: I0712 00:28:53.876525 2595 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59a5576e-d8c5-4c71-97ca-a2ec671e645e-host-proc-sys-kernel\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:53.955623 kubelet[2595]: E0712 00:28:53.955561 2595 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:28:54.251167 kubelet[2595]: I0712 00:28:54.251122 2595 scope.go:117] "RemoveContainer" containerID="af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9" Jul 12 00:28:54.256944 env[1652]: time="2025-07-12T00:28:54.256821437Z" level=info msg="RemoveContainer for \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\"" Jul 12 00:28:54.262107 systemd[1]: Removed slice kubepods-besteffort-pod07b2d995_5de9_4d86_bedd_ee2e93752809.slice. 
Jul 12 00:28:54.272302 env[1652]: time="2025-07-12T00:28:54.272237225Z" level=info msg="RemoveContainer for \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\" returns successfully" Jul 12 00:28:54.278510 kubelet[2595]: I0712 00:28:54.278469 2595 scope.go:117] "RemoveContainer" containerID="af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9" Jul 12 00:28:54.279904 env[1652]: time="2025-07-12T00:28:54.279714221Z" level=error msg="ContainerStatus for \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\": not found" Jul 12 00:28:54.284239 systemd[1]: Removed slice kubepods-burstable-pod59a5576e_d8c5_4c71_97ca_a2ec671e645e.slice. Jul 12 00:28:54.284449 systemd[1]: kubepods-burstable-pod59a5576e_d8c5_4c71_97ca_a2ec671e645e.slice: Consumed 15.406s CPU time. Jul 12 00:28:54.285573 kubelet[2595]: E0712 00:28:54.285531 2595 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\": not found" containerID="af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9" Jul 12 00:28:54.285848 kubelet[2595]: I0712 00:28:54.285715 2595 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9"} err="failed to get container status \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"af00fec4a0ada38c012b2559f9fa4f3d36b4f0a4e8c7bec778ee48ccdd39c0a9\": not found" Jul 12 00:28:54.286049 kubelet[2595]: I0712 00:28:54.286024 2595 scope.go:117] "RemoveContainer" 
containerID="017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18" Jul 12 00:28:54.294806 env[1652]: time="2025-07-12T00:28:54.294700651Z" level=info msg="RemoveContainer for \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\"" Jul 12 00:28:54.302320 env[1652]: time="2025-07-12T00:28:54.302242629Z" level=info msg="RemoveContainer for \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\" returns successfully" Jul 12 00:28:54.303361 kubelet[2595]: I0712 00:28:54.303320 2595 scope.go:117] "RemoveContainer" containerID="b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2" Jul 12 00:28:54.314034 env[1652]: time="2025-07-12T00:28:54.313705858Z" level=info msg="RemoveContainer for \"b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2\"" Jul 12 00:28:54.325184 env[1652]: time="2025-07-12T00:28:54.325089513Z" level=info msg="RemoveContainer for \"b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2\" returns successfully" Jul 12 00:28:54.325624 kubelet[2595]: I0712 00:28:54.325591 2595 scope.go:117] "RemoveContainer" containerID="3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae" Jul 12 00:28:54.331769 env[1652]: time="2025-07-12T00:28:54.328511528Z" level=info msg="RemoveContainer for \"3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae\"" Jul 12 00:28:54.341015 env[1652]: time="2025-07-12T00:28:54.340955156Z" level=info msg="RemoveContainer for \"3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae\" returns successfully" Jul 12 00:28:54.341707 kubelet[2595]: I0712 00:28:54.341673 2595 scope.go:117] "RemoveContainer" containerID="cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1" Jul 12 00:28:54.343756 env[1652]: time="2025-07-12T00:28:54.343704926Z" level=info msg="RemoveContainer for \"cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1\"" Jul 12 00:28:54.347628 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb-rootfs.mount: Deactivated successfully. Jul 12 00:28:54.347813 systemd[1]: var-lib-kubelet-pods-59a5576e\x2dd8c5\x2d4c71\x2d97ca\x2da2ec671e645e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtszkn.mount: Deactivated successfully. Jul 12 00:28:54.347951 systemd[1]: var-lib-kubelet-pods-59a5576e\x2dd8c5\x2d4c71\x2d97ca\x2da2ec671e645e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:28:54.348091 systemd[1]: var-lib-kubelet-pods-59a5576e\x2dd8c5\x2d4c71\x2d97ca\x2da2ec671e645e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:28:54.348306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e-rootfs.mount: Deactivated successfully. Jul 12 00:28:54.348488 systemd[1]: var-lib-kubelet-pods-07b2d995\x2d5de9\x2d4d86\x2dbedd\x2dee2e93752809-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfvpvb.mount: Deactivated successfully. 
Jul 12 00:28:54.360759 env[1652]: time="2025-07-12T00:28:54.360702101Z" level=info msg="RemoveContainer for \"cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1\" returns successfully" Jul 12 00:28:54.361338 kubelet[2595]: I0712 00:28:54.361265 2595 scope.go:117] "RemoveContainer" containerID="35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7" Jul 12 00:28:54.363941 env[1652]: time="2025-07-12T00:28:54.363850209Z" level=info msg="RemoveContainer for \"35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7\"" Jul 12 00:28:54.370223 env[1652]: time="2025-07-12T00:28:54.370142849Z" level=info msg="RemoveContainer for \"35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7\" returns successfully" Jul 12 00:28:54.370721 kubelet[2595]: I0712 00:28:54.370689 2595 scope.go:117] "RemoveContainer" containerID="017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18" Jul 12 00:28:54.371584 env[1652]: time="2025-07-12T00:28:54.371474785Z" level=error msg="ContainerStatus for \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\": not found" Jul 12 00:28:54.371876 kubelet[2595]: E0712 00:28:54.371804 2595 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\": not found" containerID="017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18" Jul 12 00:28:54.372021 kubelet[2595]: I0712 00:28:54.371888 2595 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18"} err="failed to get container status \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"017fb74e4cc917585c91a5fedbb633945fa5253b13f2151fb46c801e79ef2c18\": not found" Jul 12 00:28:54.372021 kubelet[2595]: I0712 00:28:54.371950 2595 scope.go:117] "RemoveContainer" containerID="b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2" Jul 12 00:28:54.372621 env[1652]: time="2025-07-12T00:28:54.372440520Z" level=error msg="ContainerStatus for \"b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2\": not found" Jul 12 00:28:54.372913 kubelet[2595]: E0712 00:28:54.372852 2595 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2\": not found" containerID="b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2" Jul 12 00:28:54.373006 kubelet[2595]: I0712 00:28:54.372928 2595 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2"} err="failed to get container status \"b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"b941be1ad1d134e6c7a82902b709658680909f4ace2a04043620f7de2ff039b2\": not found" Jul 12 00:28:54.373006 kubelet[2595]: I0712 00:28:54.372994 2595 scope.go:117] "RemoveContainer" containerID="3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae" Jul 12 00:28:54.373681 env[1652]: time="2025-07-12T00:28:54.373574824Z" level=error msg="ContainerStatus for \"3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae\": not found" Jul 12 00:28:54.373954 kubelet[2595]: E0712 00:28:54.373889 2595 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae\": not found" containerID="3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae" Jul 12 00:28:54.374053 kubelet[2595]: I0712 00:28:54.373966 2595 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae"} err="failed to get container status \"3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"3165c148519b1d1c2f01ffdf46a613df6187cfc0fd1f1dfa858fa1001993a1ae\": not found" Jul 12 00:28:54.374053 kubelet[2595]: I0712 00:28:54.374025 2595 scope.go:117] "RemoveContainer" containerID="cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1" Jul 12 00:28:54.374489 env[1652]: time="2025-07-12T00:28:54.374407164Z" level=error msg="ContainerStatus for \"cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1\": not found" Jul 12 00:28:54.374919 kubelet[2595]: E0712 00:28:54.374885 2595 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1\": not found" containerID="cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1" Jul 12 00:28:54.375087 kubelet[2595]: I0712 00:28:54.375050 2595 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1"} err="failed to get container status \"cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb11663e112147b82ddc8b48beba1d73a621bda125630b570f9d578f392bafd1\": not found" Jul 12 00:28:54.375269 kubelet[2595]: I0712 00:28:54.375224 2595 scope.go:117] "RemoveContainer" containerID="35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7" Jul 12 00:28:54.375927 env[1652]: time="2025-07-12T00:28:54.375842650Z" level=error msg="ContainerStatus for \"35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7\": not found" Jul 12 00:28:54.376286 kubelet[2595]: E0712 00:28:54.376242 2595 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7\": not found" containerID="35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7" Jul 12 00:28:54.376377 kubelet[2595]: I0712 00:28:54.376317 2595 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7"} err="failed to get container status \"35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"35d8dfd9993e1b8396277c59041face64c8399f7469947f2c58177fef25763e7\": not found" Jul 12 00:28:54.774501 kubelet[2595]: I0712 00:28:54.774439 2595 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b2d995-5de9-4d86-bedd-ee2e93752809" 
path="/var/lib/kubelet/pods/07b2d995-5de9-4d86-bedd-ee2e93752809/volumes" Jul 12 00:28:54.775680 kubelet[2595]: I0712 00:28:54.775593 2595 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a5576e-d8c5-4c71-97ca-a2ec671e645e" path="/var/lib/kubelet/pods/59a5576e-d8c5-4c71-97ca-a2ec671e645e/volumes" Jul 12 00:28:55.216734 sshd[4302]: pam_unix(sshd:session): session closed for user core Jul 12 00:28:55.221481 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 00:28:55.221782 systemd[1]: session-24.scope: Consumed 1.520s CPU time. Jul 12 00:28:55.222661 systemd[1]: sshd@23-172.31.19.35:22-147.75.109.163:37406.service: Deactivated successfully. Jul 12 00:28:55.224426 systemd-logind[1638]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:28:55.226727 systemd-logind[1638]: Removed session 24. Jul 12 00:28:55.246154 systemd[1]: Started sshd@24-172.31.19.35:22-147.75.109.163:37410.service. Jul 12 00:28:55.421141 sshd[4467]: Accepted publickey for core from 147.75.109.163 port 37410 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:28:55.423746 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:28:55.433353 systemd-logind[1638]: New session 25 of user core. Jul 12 00:28:55.433571 systemd[1]: Started session-25.scope. Jul 12 00:28:57.071486 sshd[4467]: pam_unix(sshd:session): session closed for user core Jul 12 00:28:57.078226 systemd[1]: sshd@24-172.31.19.35:22-147.75.109.163:37410.service: Deactivated successfully. Jul 12 00:28:57.079591 systemd[1]: session-25.scope: Deactivated successfully. Jul 12 00:28:57.079926 systemd[1]: session-25.scope: Consumed 1.402s CPU time. Jul 12 00:28:57.080282 systemd-logind[1638]: Session 25 logged out. Waiting for processes to exit. Jul 12 00:28:57.082152 systemd-logind[1638]: Removed session 25. 
Jul 12 00:28:57.087480 kubelet[2595]: I0712 00:28:57.087420 2595 memory_manager.go:355] "RemoveStaleState removing state" podUID="07b2d995-5de9-4d86-bedd-ee2e93752809" containerName="cilium-operator" Jul 12 00:28:57.087480 kubelet[2595]: I0712 00:28:57.087468 2595 memory_manager.go:355] "RemoveStaleState removing state" podUID="59a5576e-d8c5-4c71-97ca-a2ec671e645e" containerName="cilium-agent" Jul 12 00:28:57.102080 systemd[1]: Started sshd@25-172.31.19.35:22-147.75.109.163:51272.service. Jul 12 00:28:57.111999 kubelet[2595]: I0712 00:28:57.111935 2595 status_manager.go:890] "Failed to get status for pod" podUID="ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" pod="kube-system/cilium-jpzmc" err="pods \"cilium-jpzmc\" is forbidden: User \"system:node:ip-172-31-19-35\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-35' and this object" Jul 12 00:28:57.112431 kubelet[2595]: W0712 00:28:57.112393 2595 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-19-35" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-35' and this object Jul 12 00:28:57.112678 kubelet[2595]: E0712 00:28:57.112626 2595 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-19-35\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-35' and this object" logger="UnhandledError" Jul 12 00:28:57.112933 kubelet[2595]: W0712 00:28:57.112901 2595 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-19-35" 
cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-35' and this object Jul 12 00:28:57.113101 kubelet[2595]: E0712 00:28:57.113067 2595 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-19-35\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-35' and this object" logger="UnhandledError" Jul 12 00:28:57.115460 kubelet[2595]: W0712 00:28:57.115414 2595 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-19-35" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-35' and this object Jul 12 00:28:57.116399 kubelet[2595]: E0712 00:28:57.116330 2595 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-19-35\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-35' and this object" logger="UnhandledError" Jul 12 00:28:57.116873 kubelet[2595]: W0712 00:28:57.116832 2595 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-19-35" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-35' and this object Jul 12 00:28:57.117125 kubelet[2595]: E0712 00:28:57.117071 2595 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to 
watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-19-35\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-35' and this object" logger="UnhandledError" Jul 12 00:28:57.121976 systemd[1]: Created slice kubepods-burstable-podec3907cc_9941_4db7_bfe8_e9b7366ae7b8.slice. Jul 12 00:28:57.199066 kubelet[2595]: I0712 00:28:57.198248 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-xtables-lock\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199066 kubelet[2595]: I0712 00:28:57.198316 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-run\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199066 kubelet[2595]: I0712 00:28:57.198355 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-bpf-maps\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199066 kubelet[2595]: I0712 00:28:57.198394 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-etc-cni-netd\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199066 kubelet[2595]: I0712 00:28:57.198431 2595 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-host-proc-sys-net\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199066 kubelet[2595]: I0712 00:28:57.198476 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-hostproc\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199598 kubelet[2595]: I0712 00:28:57.198510 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-host-proc-sys-kernel\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199598 kubelet[2595]: I0712 00:28:57.198547 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-hubble-tls\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199598 kubelet[2595]: I0712 00:28:57.198585 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cni-path\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199598 kubelet[2595]: I0712 00:28:57.198626 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-lib-modules\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199598 kubelet[2595]: I0712 00:28:57.198669 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-cgroup\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199598 kubelet[2595]: I0712 00:28:57.198705 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-config-path\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199941 kubelet[2595]: I0712 00:28:57.198742 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-ipsec-secrets\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199941 kubelet[2595]: I0712 00:28:57.198784 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-clustermesh-secrets\") pod \"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.199941 kubelet[2595]: I0712 00:28:57.198839 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j27n7\" (UniqueName: \"kubernetes.io/projected/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-kube-api-access-j27n7\") pod 
\"cilium-jpzmc\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " pod="kube-system/cilium-jpzmc" Jul 12 00:28:57.295900 sshd[4477]: Accepted publickey for core from 147.75.109.163 port 51272 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:28:57.298463 sshd[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:28:57.311472 systemd[1]: Started session-26.scope. Jul 12 00:28:57.312306 systemd-logind[1638]: New session 26 of user core. Jul 12 00:28:57.598752 sshd[4477]: pam_unix(sshd:session): session closed for user core Jul 12 00:28:57.604520 systemd[1]: sshd@25-172.31.19.35:22-147.75.109.163:51272.service: Deactivated successfully. Jul 12 00:28:57.605962 systemd[1]: session-26.scope: Deactivated successfully. Jul 12 00:28:57.610681 systemd-logind[1638]: Session 26 logged out. Waiting for processes to exit. Jul 12 00:28:57.611528 kubelet[2595]: E0712 00:28:57.610188 2595 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-jpzmc" podUID="ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" Jul 12 00:28:57.612405 systemd-logind[1638]: Removed session 26. Jul 12 00:28:57.635434 systemd[1]: Started sshd@26-172.31.19.35:22-147.75.109.163:51278.service. Jul 12 00:28:57.805570 sshd[4490]: Accepted publickey for core from 147.75.109.163 port 51278 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:28:57.808572 sshd[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:28:57.818005 systemd[1]: Started session-27.scope. Jul 12 00:28:57.821337 systemd-logind[1638]: New session 27 of user core. 
Jul 12 00:28:58.300216 kubelet[2595]: E0712 00:28:58.300161 2595 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 12 00:28:58.300973 kubelet[2595]: E0712 00:28:58.300941 2595 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-config-path podName:ec3907cc-9941-4db7-bfe8-e9b7366ae7b8 nodeName:}" failed. No retries permitted until 2025-07-12 00:28:58.800899254 +0000 UTC m=+150.422040226 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-config-path") pod "cilium-jpzmc" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8") : failed to sync configmap cache: timed out waiting for the condition Jul 12 00:28:58.301265 kubelet[2595]: E0712 00:28:58.301234 2595 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 12 00:28:58.301423 kubelet[2595]: E0712 00:28:58.301394 2595 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-jpzmc: failed to sync secret cache: timed out waiting for the condition Jul 12 00:28:58.301639 kubelet[2595]: E0712 00:28:58.301610 2595 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-hubble-tls podName:ec3907cc-9941-4db7-bfe8-e9b7366ae7b8 nodeName:}" failed. No retries permitted until 2025-07-12 00:28:58.801584267 +0000 UTC m=+150.422725239 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-hubble-tls") pod "cilium-jpzmc" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8") : failed to sync secret cache: timed out waiting for the condition Jul 12 00:28:58.407827 kubelet[2595]: I0712 00:28:58.407774 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-xtables-lock\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.408082 kubelet[2595]: I0712 00:28:58.408042 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-ipsec-secrets\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.408285 kubelet[2595]: I0712 00:28:58.408259 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-clustermesh-secrets\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.408444 kubelet[2595]: I0712 00:28:58.408420 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-bpf-maps\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.408630 kubelet[2595]: I0712 00:28:58.408604 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-host-proc-sys-kernel\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: 
\"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.408812 kubelet[2595]: I0712 00:28:58.408787 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-hostproc\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.408971 kubelet[2595]: I0712 00:28:58.408946 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cni-path\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.409129 kubelet[2595]: I0712 00:28:58.409103 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-lib-modules\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.409339 kubelet[2595]: I0712 00:28:58.409301 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-etc-cni-netd\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.409517 kubelet[2595]: I0712 00:28:58.409491 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-host-proc-sys-net\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.409725 kubelet[2595]: I0712 00:28:58.409685 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j27n7\" (UniqueName: 
\"kubernetes.io/projected/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-kube-api-access-j27n7\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.409902 kubelet[2595]: I0712 00:28:58.409864 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-run\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.410044 kubelet[2595]: I0712 00:28:58.410017 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-cgroup\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.411163 kubelet[2595]: I0712 00:28:58.408333 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:28:58.411407 kubelet[2595]: I0712 00:28:58.410350 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:28:58.411576 kubelet[2595]: I0712 00:28:58.411081 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cni-path" (OuterVolumeSpecName: "cni-path") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:28:58.411764 kubelet[2595]: I0712 00:28:58.411719 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:28:58.411915 kubelet[2595]: I0712 00:28:58.411889 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:28:58.412122 kubelet[2595]: I0712 00:28:58.412059 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-hostproc" (OuterVolumeSpecName: "hostproc") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:28:58.412337 kubelet[2595]: I0712 00:28:58.412288 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:28:58.412621 kubelet[2595]: I0712 00:28:58.412525 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:28:58.412621 kubelet[2595]: I0712 00:28:58.412557 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:28:58.413029 kubelet[2595]: I0712 00:28:58.412994 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:28:58.416700 systemd[1]: var-lib-kubelet-pods-ec3907cc\x2d9941\x2d4db7\x2dbfe8\x2de9b7366ae7b8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Jul 12 00:28:58.422533 kubelet[2595]: I0712 00:28:58.422473 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:28:58.428050 systemd[1]: var-lib-kubelet-pods-ec3907cc\x2d9941\x2d4db7\x2dbfe8\x2de9b7366ae7b8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:28:58.432733 systemd[1]: var-lib-kubelet-pods-ec3907cc\x2d9941\x2d4db7\x2dbfe8\x2de9b7366ae7b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj27n7.mount: Deactivated successfully. Jul 12 00:28:58.433641 kubelet[2595]: I0712 00:28:58.433581 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-kube-api-access-j27n7" (OuterVolumeSpecName: "kube-api-access-j27n7") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "kube-api-access-j27n7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:28:58.435465 kubelet[2595]: I0712 00:28:58.435335 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:28:58.511438 kubelet[2595]: I0712 00:28:58.511372 2595 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-xtables-lock\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:58.511438 kubelet[2595]: I0712 00:28:58.511430 2595 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-ipsec-secrets\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:58.511627 kubelet[2595]: I0712 00:28:58.511459 2595 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-clustermesh-secrets\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:58.511627 kubelet[2595]: I0712 00:28:58.511482 2595 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-host-proc-sys-kernel\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:58.511627 kubelet[2595]: I0712 00:28:58.511504 2595 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-bpf-maps\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:58.511627 kubelet[2595]: I0712 00:28:58.511525 2595 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-hostproc\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:58.511627 kubelet[2595]: I0712 00:28:58.511546 2595 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-lib-modules\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 
00:28:58.511627 kubelet[2595]: I0712 00:28:58.511566 2595 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cni-path\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:58.511627 kubelet[2595]: I0712 00:28:58.511586 2595 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-etc-cni-netd\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:58.511627 kubelet[2595]: I0712 00:28:58.511607 2595 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-host-proc-sys-net\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:58.512174 kubelet[2595]: I0712 00:28:58.511628 2595 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-run\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:58.512174 kubelet[2595]: I0712 00:28:58.511648 2595 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-cgroup\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:58.512174 kubelet[2595]: I0712 00:28:58.511669 2595 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j27n7\" (UniqueName: \"kubernetes.io/projected/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-kube-api-access-j27n7\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:58.783381 systemd[1]: Removed slice kubepods-burstable-podec3907cc_9941_4db7_bfe8_e9b7366ae7b8.slice. 
Jul 12 00:28:58.915254 kubelet[2595]: I0712 00:28:58.915175 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-hubble-tls\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.915561 kubelet[2595]: I0712 00:28:58.915531 2595 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-config-path\") pod \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\" (UID: \"ec3907cc-9941-4db7-bfe8-e9b7366ae7b8\") " Jul 12 00:28:58.924630 systemd[1]: var-lib-kubelet-pods-ec3907cc\x2d9941\x2d4db7\x2dbfe8\x2de9b7366ae7b8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:28:58.926250 kubelet[2595]: I0712 00:28:58.924876 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:28:58.926250 kubelet[2595]: I0712 00:28:58.925915 2595 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" (UID: "ec3907cc-9941-4db7-bfe8-e9b7366ae7b8"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:28:58.957659 kubelet[2595]: E0712 00:28:58.957604 2595 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:28:59.016284 kubelet[2595]: I0712 00:28:59.016246 2595 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-hubble-tls\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:59.016552 kubelet[2595]: I0712 00:28:59.016525 2595 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8-cilium-config-path\") on node \"ip-172-31-19-35\" DevicePath \"\"" Jul 12 00:28:59.369064 systemd[1]: Created slice kubepods-burstable-pod06f0fdab_b642_4ac5_81fb_21b1c60ada72.slice. Jul 12 00:28:59.419067 kubelet[2595]: I0712 00:28:59.419018 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06f0fdab-b642-4ac5-81fb-21b1c60ada72-cilium-run\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.419833 kubelet[2595]: I0712 00:28:59.419789 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06f0fdab-b642-4ac5-81fb-21b1c60ada72-hubble-tls\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.420028 kubelet[2595]: I0712 00:28:59.419990 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06f0fdab-b642-4ac5-81fb-21b1c60ada72-bpf-maps\") pod \"cilium-dzjfj\" (UID: 
\"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.420238 kubelet[2595]: I0712 00:28:59.420179 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06f0fdab-b642-4ac5-81fb-21b1c60ada72-lib-modules\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.420451 kubelet[2595]: I0712 00:28:59.420422 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/06f0fdab-b642-4ac5-81fb-21b1c60ada72-cilium-ipsec-secrets\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.420642 kubelet[2595]: I0712 00:28:59.420617 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06f0fdab-b642-4ac5-81fb-21b1c60ada72-clustermesh-secrets\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.420824 kubelet[2595]: I0712 00:28:59.420800 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06f0fdab-b642-4ac5-81fb-21b1c60ada72-host-proc-sys-kernel\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.421025 kubelet[2595]: I0712 00:28:59.420971 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv8fd\" (UniqueName: \"kubernetes.io/projected/06f0fdab-b642-4ac5-81fb-21b1c60ada72-kube-api-access-bv8fd\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 
00:28:59.421214 kubelet[2595]: I0712 00:28:59.421172 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06f0fdab-b642-4ac5-81fb-21b1c60ada72-cilium-config-path\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.421420 kubelet[2595]: I0712 00:28:59.421394 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06f0fdab-b642-4ac5-81fb-21b1c60ada72-cni-path\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.421581 kubelet[2595]: I0712 00:28:59.421555 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06f0fdab-b642-4ac5-81fb-21b1c60ada72-xtables-lock\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.421796 kubelet[2595]: I0712 00:28:59.421770 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06f0fdab-b642-4ac5-81fb-21b1c60ada72-etc-cni-netd\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.421962 kubelet[2595]: I0712 00:28:59.421938 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/06f0fdab-b642-4ac5-81fb-21b1c60ada72-host-proc-sys-net\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.422138 kubelet[2595]: I0712 00:28:59.422108 2595 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06f0fdab-b642-4ac5-81fb-21b1c60ada72-cilium-cgroup\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.422360 kubelet[2595]: I0712 00:28:59.422335 2595 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06f0fdab-b642-4ac5-81fb-21b1c60ada72-hostproc\") pod \"cilium-dzjfj\" (UID: \"06f0fdab-b642-4ac5-81fb-21b1c60ada72\") " pod="kube-system/cilium-dzjfj" Jul 12 00:28:59.676305 env[1652]: time="2025-07-12T00:28:59.676106045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dzjfj,Uid:06f0fdab-b642-4ac5-81fb-21b1c60ada72,Namespace:kube-system,Attempt:0,}" Jul 12 00:28:59.727348 env[1652]: time="2025-07-12T00:28:59.722532716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:28:59.727348 env[1652]: time="2025-07-12T00:28:59.722640695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:28:59.727348 env[1652]: time="2025-07-12T00:28:59.722892413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:28:59.727348 env[1652]: time="2025-07-12T00:28:59.723475867Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c pid=4521 runtime=io.containerd.runc.v2 Jul 12 00:28:59.757660 systemd[1]: Started cri-containerd-e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c.scope. 
Jul 12 00:28:59.837802 env[1652]: time="2025-07-12T00:28:59.837733136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dzjfj,Uid:06f0fdab-b642-4ac5-81fb-21b1c60ada72,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c\"" Jul 12 00:28:59.846581 env[1652]: time="2025-07-12T00:28:59.846484866Z" level=info msg="CreateContainer within sandbox \"e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:28:59.877346 env[1652]: time="2025-07-12T00:28:59.877259112Z" level=info msg="CreateContainer within sandbox \"e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c22772379f695340d4a9169398ef85fe9de29bf3ca11ad8201cb17d4b0acb29f\"" Jul 12 00:28:59.878582 env[1652]: time="2025-07-12T00:28:59.878527435Z" level=info msg="StartContainer for \"c22772379f695340d4a9169398ef85fe9de29bf3ca11ad8201cb17d4b0acb29f\"" Jul 12 00:28:59.921904 systemd[1]: Started cri-containerd-c22772379f695340d4a9169398ef85fe9de29bf3ca11ad8201cb17d4b0acb29f.scope. Jul 12 00:29:00.005205 env[1652]: time="2025-07-12T00:29:00.005103364Z" level=info msg="StartContainer for \"c22772379f695340d4a9169398ef85fe9de29bf3ca11ad8201cb17d4b0acb29f\" returns successfully" Jul 12 00:29:00.025816 systemd[1]: cri-containerd-c22772379f695340d4a9169398ef85fe9de29bf3ca11ad8201cb17d4b0acb29f.scope: Deactivated successfully. 
Jul 12 00:29:00.080006 env[1652]: time="2025-07-12T00:29:00.079921171Z" level=info msg="shim disconnected" id=c22772379f695340d4a9169398ef85fe9de29bf3ca11ad8201cb17d4b0acb29f Jul 12 00:29:00.080006 env[1652]: time="2025-07-12T00:29:00.079992441Z" level=warning msg="cleaning up after shim disconnected" id=c22772379f695340d4a9169398ef85fe9de29bf3ca11ad8201cb17d4b0acb29f namespace=k8s.io Jul 12 00:29:00.080400 env[1652]: time="2025-07-12T00:29:00.080015349Z" level=info msg="cleaning up dead shim" Jul 12 00:29:00.101117 env[1652]: time="2025-07-12T00:29:00.101047207Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:29:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4604 runtime=io.containerd.runc.v2\n" Jul 12 00:29:00.301094 env[1652]: time="2025-07-12T00:29:00.300943947Z" level=info msg="CreateContainer within sandbox \"e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:29:00.327785 env[1652]: time="2025-07-12T00:29:00.327714849Z" level=info msg="CreateContainer within sandbox \"e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d3925272147e0f46b8a076f66f9e1e55157e623f3c7b369ee7b9b3187b65984b\"" Jul 12 00:29:00.329112 env[1652]: time="2025-07-12T00:29:00.329061462Z" level=info msg="StartContainer for \"d3925272147e0f46b8a076f66f9e1e55157e623f3c7b369ee7b9b3187b65984b\"" Jul 12 00:29:00.388544 systemd[1]: Started cri-containerd-d3925272147e0f46b8a076f66f9e1e55157e623f3c7b369ee7b9b3187b65984b.scope. Jul 12 00:29:00.505810 env[1652]: time="2025-07-12T00:29:00.505746660Z" level=info msg="StartContainer for \"d3925272147e0f46b8a076f66f9e1e55157e623f3c7b369ee7b9b3187b65984b\" returns successfully" Jul 12 00:29:00.523391 systemd[1]: cri-containerd-d3925272147e0f46b8a076f66f9e1e55157e623f3c7b369ee7b9b3187b65984b.scope: Deactivated successfully. 
Jul 12 00:29:00.544711 systemd[1]: run-containerd-runc-k8s.io-e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c-runc.Sdvwq5.mount: Deactivated successfully. Jul 12 00:29:00.570987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3925272147e0f46b8a076f66f9e1e55157e623f3c7b369ee7b9b3187b65984b-rootfs.mount: Deactivated successfully. Jul 12 00:29:00.583378 env[1652]: time="2025-07-12T00:29:00.583313614Z" level=info msg="shim disconnected" id=d3925272147e0f46b8a076f66f9e1e55157e623f3c7b369ee7b9b3187b65984b Jul 12 00:29:00.583837 env[1652]: time="2025-07-12T00:29:00.583802758Z" level=warning msg="cleaning up after shim disconnected" id=d3925272147e0f46b8a076f66f9e1e55157e623f3c7b369ee7b9b3187b65984b namespace=k8s.io Jul 12 00:29:00.584035 env[1652]: time="2025-07-12T00:29:00.584005923Z" level=info msg="cleaning up dead shim" Jul 12 00:29:00.597426 env[1652]: time="2025-07-12T00:29:00.597354545Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:29:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4666 runtime=io.containerd.runc.v2\n" Jul 12 00:29:00.774469 kubelet[2595]: I0712 00:29:00.774405 2595 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3907cc-9941-4db7-bfe8-e9b7366ae7b8" path="/var/lib/kubelet/pods/ec3907cc-9941-4db7-bfe8-e9b7366ae7b8/volumes" Jul 12 00:29:01.308413 env[1652]: time="2025-07-12T00:29:01.308338017Z" level=info msg="CreateContainer within sandbox \"e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:29:01.346949 env[1652]: time="2025-07-12T00:29:01.346840982Z" level=info msg="CreateContainer within sandbox \"e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fdd1fed969a8e33e50cd6b50a9369805e47ea1b385d895e10468eb5a63599729\""
Jul 12 00:29:01.347859 env[1652]: time="2025-07-12T00:29:01.347808338Z" level=info msg="StartContainer for \"fdd1fed969a8e33e50cd6b50a9369805e47ea1b385d895e10468eb5a63599729\"" Jul 12 00:29:01.387032 systemd[1]: Started cri-containerd-fdd1fed969a8e33e50cd6b50a9369805e47ea1b385d895e10468eb5a63599729.scope. Jul 12 00:29:01.472433 env[1652]: time="2025-07-12T00:29:01.472351788Z" level=info msg="StartContainer for \"fdd1fed969a8e33e50cd6b50a9369805e47ea1b385d895e10468eb5a63599729\" returns successfully" Jul 12 00:29:01.474885 systemd[1]: cri-containerd-fdd1fed969a8e33e50cd6b50a9369805e47ea1b385d895e10468eb5a63599729.scope: Deactivated successfully. Jul 12 00:29:01.521934 env[1652]: time="2025-07-12T00:29:01.521870675Z" level=info msg="shim disconnected" id=fdd1fed969a8e33e50cd6b50a9369805e47ea1b385d895e10468eb5a63599729 Jul 12 00:29:01.522341 env[1652]: time="2025-07-12T00:29:01.522306850Z" level=warning msg="cleaning up after shim disconnected" id=fdd1fed969a8e33e50cd6b50a9369805e47ea1b385d895e10468eb5a63599729 namespace=k8s.io Jul 12 00:29:01.522484 env[1652]: time="2025-07-12T00:29:01.522456649Z" level=info msg="cleaning up dead shim" Jul 12 00:29:01.540659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdd1fed969a8e33e50cd6b50a9369805e47ea1b385d895e10468eb5a63599729-rootfs.mount: Deactivated successfully.
Jul 12 00:29:01.543683 env[1652]: time="2025-07-12T00:29:01.543615939Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:29:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4724 runtime=io.containerd.runc.v2\n" Jul 12 00:29:02.133919 kubelet[2595]: I0712 00:29:02.133437 2595 setters.go:602] "Node became not ready" node="ip-172-31-19-35" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:29:02Z","lastTransitionTime":"2025-07-12T00:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 12 00:29:02.317133 env[1652]: time="2025-07-12T00:29:02.316609216Z" level=info msg="CreateContainer within sandbox \"e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:29:02.350099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4229351754.mount: Deactivated successfully. Jul 12 00:29:02.361345 env[1652]: time="2025-07-12T00:29:02.361278678Z" level=info msg="CreateContainer within sandbox \"e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7bfa045136fc7de020bad83864247d99e7c565d517ecd3f3db30d7299d805d48\"" Jul 12 00:29:02.362760 env[1652]: time="2025-07-12T00:29:02.362686121Z" level=info msg="StartContainer for \"7bfa045136fc7de020bad83864247d99e7c565d517ecd3f3db30d7299d805d48\"" Jul 12 00:29:02.405711 systemd[1]: Started cri-containerd-7bfa045136fc7de020bad83864247d99e7c565d517ecd3f3db30d7299d805d48.scope. Jul 12 00:29:02.472691 systemd[1]: cri-containerd-7bfa045136fc7de020bad83864247d99e7c565d517ecd3f3db30d7299d805d48.scope: Deactivated successfully. 
Jul 12 00:29:02.476961 env[1652]: time="2025-07-12T00:29:02.476845169Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06f0fdab_b642_4ac5_81fb_21b1c60ada72.slice/cri-containerd-7bfa045136fc7de020bad83864247d99e7c565d517ecd3f3db30d7299d805d48.scope/memory.events\": no such file or directory" Jul 12 00:29:02.481695 env[1652]: time="2025-07-12T00:29:02.481635338Z" level=info msg="StartContainer for \"7bfa045136fc7de020bad83864247d99e7c565d517ecd3f3db30d7299d805d48\" returns successfully" Jul 12 00:29:02.527613 env[1652]: time="2025-07-12T00:29:02.527545199Z" level=info msg="shim disconnected" id=7bfa045136fc7de020bad83864247d99e7c565d517ecd3f3db30d7299d805d48 Jul 12 00:29:02.527995 env[1652]: time="2025-07-12T00:29:02.527959893Z" level=warning msg="cleaning up after shim disconnected" id=7bfa045136fc7de020bad83864247d99e7c565d517ecd3f3db30d7299d805d48 namespace=k8s.io Jul 12 00:29:02.528127 env[1652]: time="2025-07-12T00:29:02.528098401Z" level=info msg="cleaning up dead shim" Jul 12 00:29:02.540837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bfa045136fc7de020bad83864247d99e7c565d517ecd3f3db30d7299d805d48-rootfs.mount: Deactivated successfully. 
Jul 12 00:29:02.545285 env[1652]: time="2025-07-12T00:29:02.545230104Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:29:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4780 runtime=io.containerd.runc.v2\n" Jul 12 00:29:03.320051 env[1652]: time="2025-07-12T00:29:03.319993239Z" level=info msg="CreateContainer within sandbox \"e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:29:03.362017 env[1652]: time="2025-07-12T00:29:03.361910627Z" level=info msg="CreateContainer within sandbox \"e5eade92310d472da3224cfd790aaaf08aec9295672f55e73e50e2f9b08e447c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dde1af8cc13409ac23cf7b36e3877120f493554bb5e2d6db7b68f3186e24bd83\"" Jul 12 00:29:03.363225 env[1652]: time="2025-07-12T00:29:03.363148385Z" level=info msg="StartContainer for \"dde1af8cc13409ac23cf7b36e3877120f493554bb5e2d6db7b68f3186e24bd83\"" Jul 12 00:29:03.415333 systemd[1]: Started cri-containerd-dde1af8cc13409ac23cf7b36e3877120f493554bb5e2d6db7b68f3186e24bd83.scope. Jul 12 00:29:03.489010 env[1652]: time="2025-07-12T00:29:03.488923123Z" level=info msg="StartContainer for \"dde1af8cc13409ac23cf7b36e3877120f493554bb5e2d6db7b68f3186e24bd83\" returns successfully" Jul 12 00:29:03.564705 systemd[1]: run-containerd-runc-k8s.io-dde1af8cc13409ac23cf7b36e3877120f493554bb5e2d6db7b68f3186e24bd83-runc.V2VcGq.mount: Deactivated successfully. Jul 12 00:29:04.415317 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 12 00:29:06.500376 systemd[1]: run-containerd-runc-k8s.io-dde1af8cc13409ac23cf7b36e3877120f493554bb5e2d6db7b68f3186e24bd83-runc.sjtQQs.mount: Deactivated successfully. 
Jul 12 00:29:08.722726 systemd-networkd[1365]: lxc_health: Link UP Jul 12 00:29:08.736809 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 12 00:29:08.737080 systemd-networkd[1365]: lxc_health: Gained carrier Jul 12 00:29:08.737345 (udev-worker)[5331]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:29:08.876108 systemd[1]: run-containerd-runc-k8s.io-dde1af8cc13409ac23cf7b36e3877120f493554bb5e2d6db7b68f3186e24bd83-runc.FFFdOW.mount: Deactivated successfully. Jul 12 00:29:09.718177 kubelet[2595]: I0712 00:29:09.718060 2595 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dzjfj" podStartSLOduration=10.718037433 podStartE2EDuration="10.718037433s" podCreationTimestamp="2025-07-12 00:28:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:29:04.360483483 +0000 UTC m=+155.981624491" watchObservedRunningTime="2025-07-12 00:29:09.718037433 +0000 UTC m=+161.339178393" Jul 12 00:29:10.349400 systemd-networkd[1365]: lxc_health: Gained IPv6LL Jul 12 00:29:11.329656 systemd[1]: run-containerd-runc-k8s.io-dde1af8cc13409ac23cf7b36e3877120f493554bb5e2d6db7b68f3186e24bd83-runc.mJ0Smz.mount: Deactivated successfully. Jul 12 00:29:13.869416 sshd[4490]: pam_unix(sshd:session): session closed for user core Jul 12 00:29:13.875417 systemd[1]: sshd@26-172.31.19.35:22-147.75.109.163:51278.service: Deactivated successfully. Jul 12 00:29:13.876744 systemd[1]: session-27.scope: Deactivated successfully. Jul 12 00:29:13.878056 systemd-logind[1638]: Session 27 logged out. Waiting for processes to exit. Jul 12 00:29:13.880348 systemd-logind[1638]: Removed session 27. 
Jul 12 00:29:28.696177 env[1652]: time="2025-07-12T00:29:28.696113285Z" level=info msg="StopPodSandbox for \"15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e\"" Jul 12 00:29:28.696965 env[1652]: time="2025-07-12T00:29:28.696278102Z" level=info msg="TearDown network for sandbox \"15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e\" successfully" Jul 12 00:29:28.696965 env[1652]: time="2025-07-12T00:29:28.696337622Z" level=info msg="StopPodSandbox for \"15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e\" returns successfully" Jul 12 00:29:28.697848 env[1652]: time="2025-07-12T00:29:28.697790154Z" level=info msg="RemovePodSandbox for \"15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e\"" Jul 12 00:29:28.698013 env[1652]: time="2025-07-12T00:29:28.697856597Z" level=info msg="Forcibly stopping sandbox \"15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e\"" Jul 12 00:29:28.698013 env[1652]: time="2025-07-12T00:29:28.697988788Z" level=info msg="TearDown network for sandbox \"15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e\" successfully" Jul 12 00:29:28.704360 env[1652]: time="2025-07-12T00:29:28.704294044Z" level=info msg="RemovePodSandbox \"15884aaf5dccd6b37b9b8730409589fa6c84223e140a4f0f918d07bde0205d8e\" returns successfully" Jul 12 00:29:28.705073 env[1652]: time="2025-07-12T00:29:28.705030942Z" level=info msg="StopPodSandbox for \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\"" Jul 12 00:29:28.705420 env[1652]: time="2025-07-12T00:29:28.705355610Z" level=info msg="TearDown network for sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" successfully" Jul 12 00:29:28.705592 env[1652]: time="2025-07-12T00:29:28.705559619Z" level=info msg="StopPodSandbox for \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" returns successfully"
Jul 12 00:29:28.706293 env[1652]: time="2025-07-12T00:29:28.706187643Z" level=info msg="RemovePodSandbox for \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\"" Jul 12 00:29:28.706426 env[1652]: time="2025-07-12T00:29:28.706293457Z" level=info msg="Forcibly stopping sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\"" Jul 12 00:29:28.706544 env[1652]: time="2025-07-12T00:29:28.706473935Z" level=info msg="TearDown network for sandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" successfully" Jul 12 00:29:28.712676 env[1652]: time="2025-07-12T00:29:28.712597394Z" level=info msg="RemovePodSandbox \"9ccc2e8b80e381858c8e9b149bf185aac8662d1333f2fd00bfe0282d511715bb\" returns successfully" Jul 12 00:29:29.100847 systemd[1]: cri-containerd-92a34b41a9dad0d1861e4fb9d5aabbfd5eee7ff0a779c90f8e7cf895e8dd930e.scope: Deactivated successfully. Jul 12 00:29:29.101476 systemd[1]: cri-containerd-92a34b41a9dad0d1861e4fb9d5aabbfd5eee7ff0a779c90f8e7cf895e8dd930e.scope: Consumed 6.656s CPU time. Jul 12 00:29:29.138341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92a34b41a9dad0d1861e4fb9d5aabbfd5eee7ff0a779c90f8e7cf895e8dd930e-rootfs.mount: Deactivated successfully.
Jul 12 00:29:29.156718 env[1652]: time="2025-07-12T00:29:29.156657157Z" level=info msg="shim disconnected" id=92a34b41a9dad0d1861e4fb9d5aabbfd5eee7ff0a779c90f8e7cf895e8dd930e Jul 12 00:29:29.157075 env[1652]: time="2025-07-12T00:29:29.157039304Z" level=warning msg="cleaning up after shim disconnected" id=92a34b41a9dad0d1861e4fb9d5aabbfd5eee7ff0a779c90f8e7cf895e8dd930e namespace=k8s.io Jul 12 00:29:29.157223 env[1652]: time="2025-07-12T00:29:29.157168182Z" level=info msg="cleaning up dead shim" Jul 12 00:29:29.172565 env[1652]: time="2025-07-12T00:29:29.172508115Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:29:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5442 runtime=io.containerd.runc.v2\n" Jul 12 00:29:29.390398 kubelet[2595]: I0712 00:29:29.390249 2595 scope.go:117] "RemoveContainer" containerID="92a34b41a9dad0d1861e4fb9d5aabbfd5eee7ff0a779c90f8e7cf895e8dd930e" Jul 12 00:29:29.395881 env[1652]: time="2025-07-12T00:29:29.395585847Z" level=info msg="CreateContainer within sandbox \"318fe06d96ccda4c6b5a84fcf2ef2a2c9bb998e6a6105ecccb12d87bed04bf67\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 12 00:29:29.417476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount783562070.mount: Deactivated successfully. Jul 12 00:29:29.430373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2083513731.mount: Deactivated successfully. 
Jul 12 00:29:29.440736 env[1652]: time="2025-07-12T00:29:29.440644984Z" level=info msg="CreateContainer within sandbox \"318fe06d96ccda4c6b5a84fcf2ef2a2c9bb998e6a6105ecccb12d87bed04bf67\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9f1929f2012b9bd792140e7f2bc2fbd34648a97eac6824cb5b7ca3bc8bb55094\"" Jul 12 00:29:29.443456 env[1652]: time="2025-07-12T00:29:29.443371757Z" level=info msg="StartContainer for \"9f1929f2012b9bd792140e7f2bc2fbd34648a97eac6824cb5b7ca3bc8bb55094\"" Jul 12 00:29:29.474585 systemd[1]: Started cri-containerd-9f1929f2012b9bd792140e7f2bc2fbd34648a97eac6824cb5b7ca3bc8bb55094.scope. Jul 12 00:29:29.568020 env[1652]: time="2025-07-12T00:29:29.567934777Z" level=info msg="StartContainer for \"9f1929f2012b9bd792140e7f2bc2fbd34648a97eac6824cb5b7ca3bc8bb55094\" returns successfully" Jul 12 00:29:32.700685 kubelet[2595]: E0712 00:29:32.700588 2595 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-35?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 12 00:29:33.408820 systemd[1]: cri-containerd-1c12c931e17ee5083604afa940ab93c3d39bcdf64acfb254751602eba1d6a66d.scope: Deactivated successfully. Jul 12 00:29:33.409414 systemd[1]: cri-containerd-1c12c931e17ee5083604afa940ab93c3d39bcdf64acfb254751602eba1d6a66d.scope: Consumed 4.602s CPU time. Jul 12 00:29:33.449485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c12c931e17ee5083604afa940ab93c3d39bcdf64acfb254751602eba1d6a66d-rootfs.mount: Deactivated successfully. 
Jul 12 00:29:33.467788 env[1652]: time="2025-07-12T00:29:33.467708283Z" level=info msg="shim disconnected" id=1c12c931e17ee5083604afa940ab93c3d39bcdf64acfb254751602eba1d6a66d Jul 12 00:29:33.468487 env[1652]: time="2025-07-12T00:29:33.467786534Z" level=warning msg="cleaning up after shim disconnected" id=1c12c931e17ee5083604afa940ab93c3d39bcdf64acfb254751602eba1d6a66d namespace=k8s.io Jul 12 00:29:33.468487 env[1652]: time="2025-07-12T00:29:33.467810078Z" level=info msg="cleaning up dead shim" Jul 12 00:29:33.481423 env[1652]: time="2025-07-12T00:29:33.481365829Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:29:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5503 runtime=io.containerd.runc.v2\n" Jul 12 00:29:34.407045 kubelet[2595]: I0712 00:29:34.406977 2595 scope.go:117] "RemoveContainer" containerID="1c12c931e17ee5083604afa940ab93c3d39bcdf64acfb254751602eba1d6a66d" Jul 12 00:29:34.410439 env[1652]: time="2025-07-12T00:29:34.410373870Z" level=info msg="CreateContainer within sandbox \"4f15314abf57145d0a8403000d06a0f972afe204efeaba7088f197e16b1b6786\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 12 00:29:34.435982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount481984479.mount: Deactivated successfully. Jul 12 00:29:34.451289 env[1652]: time="2025-07-12T00:29:34.451190315Z" level=info msg="CreateContainer within sandbox \"4f15314abf57145d0a8403000d06a0f972afe204efeaba7088f197e16b1b6786\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"22c8bdeba7d2a432aad963fcf89ae0266dc6689c4ac5c34842f41dbe0f74665e\"" Jul 12 00:29:34.451957 env[1652]: time="2025-07-12T00:29:34.451907740Z" level=info msg="StartContainer for \"22c8bdeba7d2a432aad963fcf89ae0266dc6689c4ac5c34842f41dbe0f74665e\"" Jul 12 00:29:34.503183 systemd[1]: Started cri-containerd-22c8bdeba7d2a432aad963fcf89ae0266dc6689c4ac5c34842f41dbe0f74665e.scope. 
Jul 12 00:29:34.580490 env[1652]: time="2025-07-12T00:29:34.580403848Z" level=info msg="StartContainer for \"22c8bdeba7d2a432aad963fcf89ae0266dc6689c4ac5c34842f41dbe0f74665e\" returns successfully" Jul 12 00:29:37.617275 amazon-ssm-agent[1626]: 2025-07-12 00:29:37 INFO [HealthCheck] HealthCheck reporting agent health.