Jul 12 00:25:51.994797 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 12 00:25:51.994851 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Jul 11 23:15:18 -00 2025 Jul 12 00:25:51.994874 kernel: efi: EFI v2.70 by EDK II Jul 12 00:25:51.994890 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x716fcf98 Jul 12 00:25:51.994903 kernel: ACPI: Early table checksum verification disabled Jul 12 00:25:51.994917 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 12 00:25:51.994933 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 12 00:25:51.994947 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 12 00:25:51.994960 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 12 00:25:51.994974 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 12 00:25:51.994992 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 12 00:25:51.995006 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 12 00:25:51.995019 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 12 00:25:51.995033 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 12 00:25:51.995049 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 12 00:25:51.995068 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 12 00:25:51.995082 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 12 00:25:51.995097 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 12 00:25:51.995111 kernel: printk: bootconsole [uart0] enabled Jul 12 00:25:51.995125 kernel: NUMA: Failed to initialise from firmware Jul 12 00:25:51.995139 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 12 00:25:51.995154 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff] Jul 12 00:25:51.995168 kernel: Zone ranges: Jul 12 00:25:51.995183 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 12 00:25:51.995197 kernel: DMA32 empty Jul 12 00:25:51.995211 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 12 00:25:51.995229 kernel: Movable zone start for each node Jul 12 00:25:51.995243 kernel: Early memory node ranges Jul 12 00:25:51.995257 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 12 00:25:51.995271 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 12 00:25:51.995285 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 12 00:25:51.995300 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 12 00:25:51.995314 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 12 00:25:51.995328 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 12 00:25:51.995342 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 12 00:25:51.995356 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jul 12 00:25:51.995370 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 12 00:25:51.995384 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jul 12 
00:25:51.995403 kernel: psci: probing for conduit method from ACPI. Jul 12 00:25:51.995417 kernel: psci: PSCIv1.0 detected in firmware. Jul 12 00:25:51.995438 kernel: psci: Using standard PSCI v0.2 function IDs Jul 12 00:25:51.995454 kernel: psci: Trusted OS migration not required Jul 12 00:25:51.995468 kernel: psci: SMC Calling Convention v1.1 Jul 12 00:25:51.995488 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jul 12 00:25:51.995503 kernel: ACPI: SRAT not present Jul 12 00:25:51.995519 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Jul 12 00:25:51.995534 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Jul 12 00:25:51.995549 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 12 00:25:51.995564 kernel: Detected PIPT I-cache on CPU0 Jul 12 00:25:51.995579 kernel: CPU features: detected: GIC system register CPU interface Jul 12 00:25:51.995594 kernel: CPU features: detected: Spectre-v2 Jul 12 00:25:51.995609 kernel: CPU features: detected: Spectre-v3a Jul 12 00:25:51.995623 kernel: CPU features: detected: Spectre-BHB Jul 12 00:25:51.995638 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 12 00:25:51.995657 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 12 00:25:51.995672 kernel: CPU features: detected: ARM erratum 1742098 Jul 12 00:25:51.995687 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 12 00:25:51.995702 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jul 12 00:25:51.995717 kernel: Policy zone: Normal Jul 12 00:25:51.995734 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65 Jul 12 00:25:51.995750 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 12 00:25:51.995765 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 12 00:25:51.995781 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 12 00:25:51.995795 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 12 00:25:51.995830 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jul 12 00:25:51.995850 kernel: Memory: 3824460K/4030464K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 206004K reserved, 0K cma-reserved) Jul 12 00:25:51.995866 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 12 00:25:51.995882 kernel: trace event string verifier disabled Jul 12 00:25:51.995897 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 12 00:25:51.995913 kernel: rcu: RCU event tracing is enabled. Jul 12 00:25:51.995928 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 12 00:25:51.995943 kernel: Trampoline variant of Tasks RCU enabled. Jul 12 00:25:51.995959 kernel: Tracing variant of Tasks RCU enabled. Jul 12 00:25:51.995974 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 12 00:25:51.995989 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 12 00:25:51.996004 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 12 00:25:51.996023 kernel: GICv3: 96 SPIs implemented Jul 12 00:25:51.996038 kernel: GICv3: 0 Extended SPIs implemented Jul 12 00:25:51.996053 kernel: GICv3: Distributor has no Range Selector support Jul 12 00:25:51.996068 kernel: Root IRQ handler: gic_handle_irq Jul 12 00:25:51.996082 kernel: GICv3: 16 PPIs implemented Jul 12 00:25:51.996097 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 12 00:25:51.996112 kernel: ACPI: SRAT not present Jul 12 00:25:51.996126 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 12 00:25:51.996142 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1) Jul 12 00:25:51.996157 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1) Jul 12 00:25:51.996172 kernel: GICv3: using LPI property table @0x00000004000b0000 Jul 12 00:25:51.996191 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 12 00:25:51.996207 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Jul 12 00:25:51.996221 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 12 00:25:51.996236 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 12 00:25:51.996252 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 12 00:25:51.996267 kernel: Console: colour dummy device 80x25 Jul 12 00:25:51.996282 kernel: printk: console [tty1] enabled Jul 12 00:25:51.996297 kernel: ACPI: Core revision 20210730 Jul 12 00:25:51.996313 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 12 00:25:51.996329 kernel: pid_max: default: 32768 minimum: 301 Jul 12 00:25:51.996348 kernel: LSM: Security Framework initializing Jul 12 00:25:51.996364 kernel: SELinux: Initializing. Jul 12 00:25:51.996379 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:25:51.996394 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:25:51.996410 kernel: rcu: Hierarchical SRCU implementation. Jul 12 00:25:51.996426 kernel: Platform MSI: ITS@0x10080000 domain created Jul 12 00:25:51.996441 kernel: PCI/MSI: ITS@0x10080000 domain created Jul 12 00:25:51.996457 kernel: Remapping and enabling EFI services. Jul 12 00:25:51.996472 kernel: smp: Bringing up secondary CPUs ... Jul 12 00:25:51.996487 kernel: Detected PIPT I-cache on CPU1 Jul 12 00:25:51.996507 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 12 00:25:51.996522 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Jul 12 00:25:51.996538 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 12 00:25:51.996553 kernel: smp: Brought up 1 node, 2 CPUs Jul 12 00:25:51.996568 kernel: SMP: Total of 2 processors activated. 
Jul 12 00:25:51.996584 kernel: CPU features: detected: 32-bit EL0 Support Jul 12 00:25:51.996599 kernel: CPU features: detected: 32-bit EL1 Support Jul 12 00:25:51.996615 kernel: CPU features: detected: CRC32 instructions Jul 12 00:25:51.996630 kernel: CPU: All CPU(s) started at EL1 Jul 12 00:25:51.996649 kernel: alternatives: patching kernel code Jul 12 00:25:51.996665 kernel: devtmpfs: initialized Jul 12 00:25:51.996690 kernel: KASLR disabled due to lack of seed Jul 12 00:25:51.996710 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 12 00:25:51.996726 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 12 00:25:51.996742 kernel: pinctrl core: initialized pinctrl subsystem Jul 12 00:25:51.996758 kernel: SMBIOS 3.0.0 present. Jul 12 00:25:51.996774 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 12 00:25:51.996790 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 12 00:25:51.996806 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 12 00:25:52.002063 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 12 00:25:52.002093 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 12 00:25:52.002110 kernel: audit: initializing netlink subsys (disabled) Jul 12 00:25:52.002127 kernel: audit: type=2000 audit(0.293:1): state=initialized audit_enabled=0 res=1 Jul 12 00:25:52.002144 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 12 00:25:52.002160 kernel: cpuidle: using governor menu Jul 12 00:25:52.002181 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 12 00:25:52.002197 kernel: ASID allocator initialised with 32768 entries Jul 12 00:25:52.002214 kernel: ACPI: bus type PCI registered Jul 12 00:25:52.002230 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 12 00:25:52.002245 kernel: Serial: AMBA PL011 UART driver Jul 12 00:25:52.002262 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 12 00:25:52.002278 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Jul 12 00:25:52.002294 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 12 00:25:52.002310 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Jul 12 00:25:52.002331 kernel: cryptd: max_cpu_qlen set to 1000 Jul 12 00:25:52.002347 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 12 00:25:52.002363 kernel: ACPI: Added _OSI(Module Device) Jul 12 00:25:52.002379 kernel: ACPI: Added _OSI(Processor Device) Jul 12 00:25:52.002395 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 12 00:25:52.002411 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 12 00:25:52.002427 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 12 00:25:52.002443 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 12 00:25:52.002459 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 12 00:25:52.002475 kernel: ACPI: Interpreter enabled Jul 12 00:25:52.002495 kernel: ACPI: Using GIC for interrupt routing Jul 12 00:25:52.002511 kernel: ACPI: MCFG table detected, 1 entries Jul 12 00:25:52.002527 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 12 00:25:52.002849 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 12 00:25:52.003046 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 12 00:25:52.003232 kernel: acpi PNP0A08:00: _OSC: 
OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 12 00:25:52.003415 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 12 00:25:52.003602 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 12 00:25:52.003625 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 12 00:25:52.003641 kernel: acpiphp: Slot [1] registered Jul 12 00:25:52.003657 kernel: acpiphp: Slot [2] registered Jul 12 00:25:52.003673 kernel: acpiphp: Slot [3] registered Jul 12 00:25:52.003689 kernel: acpiphp: Slot [4] registered Jul 12 00:25:52.003705 kernel: acpiphp: Slot [5] registered Jul 12 00:25:52.003721 kernel: acpiphp: Slot [6] registered Jul 12 00:25:52.003736 kernel: acpiphp: Slot [7] registered Jul 12 00:25:52.003757 kernel: acpiphp: Slot [8] registered Jul 12 00:25:52.003773 kernel: acpiphp: Slot [9] registered Jul 12 00:25:52.003789 kernel: acpiphp: Slot [10] registered Jul 12 00:25:52.003804 kernel: acpiphp: Slot [11] registered Jul 12 00:25:52.003841 kernel: acpiphp: Slot [12] registered Jul 12 00:25:52.003859 kernel: acpiphp: Slot [13] registered Jul 12 00:25:52.003875 kernel: acpiphp: Slot [14] registered Jul 12 00:25:52.003891 kernel: acpiphp: Slot [15] registered Jul 12 00:25:52.003907 kernel: acpiphp: Slot [16] registered Jul 12 00:25:52.003928 kernel: acpiphp: Slot [17] registered Jul 12 00:25:52.003944 kernel: acpiphp: Slot [18] registered Jul 12 00:25:52.003960 kernel: acpiphp: Slot [19] registered Jul 12 00:25:52.003975 kernel: acpiphp: Slot [20] registered Jul 12 00:25:52.003991 kernel: acpiphp: Slot [21] registered Jul 12 00:25:52.004007 kernel: acpiphp: Slot [22] registered Jul 12 00:25:52.004023 kernel: acpiphp: Slot [23] registered Jul 12 00:25:52.004039 kernel: acpiphp: Slot [24] registered Jul 12 00:25:52.004055 kernel: acpiphp: Slot [25] registered Jul 12 00:25:52.004070 kernel: acpiphp: Slot [26] registered Jul 12 00:25:52.004091 kernel: acpiphp: Slot [27] registered Jul 12 00:25:52.004107 kernel: acpiphp: Slot [28] registered Jul 12 00:25:52.004122 kernel: acpiphp: Slot [29] registered Jul 12 00:25:52.004138 kernel: acpiphp: Slot [30] registered Jul 12 00:25:52.004154 kernel: acpiphp: Slot [31] registered Jul 12 00:25:52.004170 kernel: PCI host bridge to bus 0000:00 Jul 12 00:25:52.004366 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 12 00:25:52.004539 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 12 00:25:52.004710 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 12 00:25:52.004907 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 12 00:25:52.005117 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jul 12 00:25:52.005368 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jul 12 00:25:52.010514 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jul 12 00:25:52.010836 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 12 00:25:52.011075 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jul 12 00:25:52.011295 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 12 00:25:52.011531 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 12 00:25:52.013734 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jul 12 00:25:52.014053 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jul 12 00:25:52.014338 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jul 
12 00:25:52.014609 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 12 00:25:52.016716 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jul 12 00:25:52.023382 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jul 12 00:25:52.023600 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jul 12 00:25:52.023789 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jul 12 00:25:52.024022 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jul 12 00:25:52.024200 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 12 00:25:52.024369 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 12 00:25:52.024545 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 12 00:25:52.024568 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 12 00:25:52.024586 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 12 00:25:52.024602 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 12 00:25:52.024619 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 12 00:25:52.024635 kernel: iommu: Default domain type: Translated Jul 12 00:25:52.024651 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 12 00:25:52.024667 kernel: vgaarb: loaded Jul 12 00:25:52.024683 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 12 00:25:52.024704 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 12 00:25:52.024721 kernel: PTP clock support registered Jul 12 00:25:52.024736 kernel: Registered efivars operations Jul 12 00:25:52.024752 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 12 00:25:52.024768 kernel: VFS: Disk quotas dquot_6.6.0 Jul 12 00:25:52.024784 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 12 00:25:52.024800 kernel: pnp: PnP ACPI init Jul 12 00:25:52.025013 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 12 00:25:52.025044 kernel: pnp: PnP ACPI: found 1 devices Jul 12 00:25:52.025061 kernel: NET: Registered PF_INET protocol family Jul 12 00:25:52.025078 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 12 00:25:52.025095 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 12 00:25:52.025111 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 12 00:25:52.025127 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 12 00:25:52.025143 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 12 00:25:52.025159 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 12 00:25:52.025176 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:25:52.025196 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:25:52.025212 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 00:25:52.025228 kernel: PCI: CLS 0 bytes, default 64 Jul 12 00:25:52.025244 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jul 12 00:25:52.025260 kernel: kvm [1]: HYP mode not available Jul 12 00:25:52.025276 kernel: Initialise system trusted keyrings Jul 12 00:25:52.025293 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 12 00:25:52.025309 kernel: Key type asymmetric registered Jul 12 00:25:52.025325 kernel: Asymmetric 
key parser 'x509' registered Jul 12 00:25:52.025345 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 12 00:25:52.025361 kernel: io scheduler mq-deadline registered Jul 12 00:25:52.025377 kernel: io scheduler kyber registered Jul 12 00:25:52.025393 kernel: io scheduler bfq registered Jul 12 00:25:52.025599 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jul 12 00:25:52.025623 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 12 00:25:52.025640 kernel: ACPI: button: Power Button [PWRB] Jul 12 00:25:52.025656 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 12 00:25:52.025672 kernel: ACPI: button: Sleep Button [SLPB] Jul 12 00:25:52.025693 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 00:25:52.025710 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 12 00:25:52.029193 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 12 00:25:52.029236 kernel: printk: console [ttyS0] disabled Jul 12 00:25:52.029254 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 12 00:25:52.029271 kernel: printk: console [ttyS0] enabled Jul 12 00:25:52.029288 kernel: printk: bootconsole [uart0] disabled Jul 12 00:25:52.029304 kernel: thunder_xcv, ver 1.0 Jul 12 00:25:52.029320 kernel: thunder_bgx, ver 1.0 Jul 12 00:25:52.029345 kernel: nicpf, ver 1.0 Jul 12 00:25:52.029361 kernel: nicvf, ver 1.0 Jul 12 00:25:52.029571 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 12 00:25:52.029757 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:25:51 UTC (1752279951) Jul 12 00:25:52.029781 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 12 00:25:52.029798 kernel: NET: Registered PF_INET6 protocol family Jul 12 00:25:52.037883 kernel: Segment Routing with IPv6 Jul 12 00:25:52.037921 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 00:25:52.037948 kernel: NET: Registered PF_PACKET protocol family Jul 12 00:25:52.037965 kernel: Key type dns_resolver registered Jul 12 00:25:52.037982 kernel: registered taskstats version 1 Jul 12 00:25:52.037999 kernel: Loading compiled-in X.509 certificates Jul 12 00:25:52.038016 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: de2ee1d04443f96c763927c453375bbe23b5752a' Jul 12 00:25:52.038032 kernel: Key type .fscrypt registered Jul 12 00:25:52.038048 kernel: Key type fscrypt-provisioning registered Jul 12 00:25:52.038063 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 12 00:25:52.038080 kernel: ima: Allocated hash algorithm: sha1 Jul 12 00:25:52.038100 kernel: ima: No architecture policies found Jul 12 00:25:52.038117 kernel: clk: Disabling unused clocks Jul 12 00:25:52.038133 kernel: Freeing unused kernel memory: 36416K Jul 12 00:25:52.038149 kernel: Run /init as init process Jul 12 00:25:52.038165 kernel: with arguments: Jul 12 00:25:52.038181 kernel: /init Jul 12 00:25:52.038196 kernel: with environment: Jul 12 00:25:52.038212 kernel: HOME=/ Jul 12 00:25:52.038228 kernel: TERM=linux Jul 12 00:25:52.038248 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 00:25:52.038271 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 12 00:25:52.038292 systemd[1]: Detected virtualization amazon. Jul 12 00:25:52.038311 systemd[1]: Detected architecture arm64. Jul 12 00:25:52.038328 systemd[1]: Running in initrd. Jul 12 00:25:52.038345 systemd[1]: No hostname configured, using default hostname. Jul 12 00:25:52.038362 systemd[1]: Hostname set to . Jul 12 00:25:52.038384 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:25:52.038402 systemd[1]: Queued start job for default target initrd.target. Jul 12 00:25:52.038420 systemd[1]: Started systemd-ask-password-console.path. Jul 12 00:25:52.038437 systemd[1]: Reached target cryptsetup.target. Jul 12 00:25:52.038454 systemd[1]: Reached target paths.target. Jul 12 00:25:52.038471 systemd[1]: Reached target slices.target. Jul 12 00:25:52.038488 systemd[1]: Reached target swap.target. Jul 12 00:25:52.038505 systemd[1]: Reached target timers.target. Jul 12 00:25:52.038527 systemd[1]: Listening on iscsid.socket. Jul 12 00:25:52.038545 systemd[1]: Listening on iscsiuio.socket. Jul 12 00:25:52.038562 systemd[1]: Listening on systemd-journald-audit.socket. Jul 12 00:25:52.038580 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 12 00:25:52.038597 systemd[1]: Listening on systemd-journald.socket. Jul 12 00:25:52.038615 systemd[1]: Listening on systemd-networkd.socket. Jul 12 00:25:52.038632 systemd[1]: Listening on systemd-udevd-control.socket. Jul 12 00:25:52.038650 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 12 00:25:52.038671 systemd[1]: Reached target sockets.target. Jul 12 00:25:52.038688 systemd[1]: Starting kmod-static-nodes.service... Jul 12 00:25:52.038706 systemd[1]: Finished network-cleanup.service. Jul 12 00:25:52.038724 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 00:25:52.038741 systemd[1]: Starting systemd-journald.service... Jul 12 00:25:52.038758 systemd[1]: Starting systemd-modules-load.service... Jul 12 00:25:52.038795 systemd[1]: Starting systemd-resolved.service... Jul 12 00:25:52.038834 systemd[1]: Starting systemd-vconsole-setup.service... Jul 12 00:25:52.038855 systemd[1]: Finished kmod-static-nodes.service. Jul 12 00:25:52.038878 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 00:25:52.038896 systemd[1]: Finished systemd-vconsole-setup.service. Jul 12 00:25:52.038915 kernel: audit: type=1130 audit(1752279951.990:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.038933 systemd[1]: Starting dracut-cmdline-ask.service... 
Jul 12 00:25:52.038951 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 12 00:25:52.038972 systemd-journald[309]: Journal started Jul 12 00:25:52.039080 systemd-journald[309]: Runtime Journal (/run/log/journal/ec2148a99b2e40a8839e2944398c41b3) is 8.0M, max 75.4M, 67.4M free. Jul 12 00:25:52.043875 systemd[1]: Started systemd-journald.service. Jul 12 00:25:52.043926 kernel: audit: type=1130 audit(1752279952.039:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:51.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:51.968407 systemd-modules-load[310]: Inserted module 'overlay' Jul 12 00:25:52.055845 kernel: audit: type=1130 audit(1752279952.053:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.047329 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 12 00:25:52.078093 kernel: audit: type=1130 audit(1752279952.068:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.068273 systemd[1]: Finished dracut-cmdline-ask.service. Jul 12 00:25:52.086701 systemd-resolved[311]: Positive Trust Anchors: Jul 12 00:25:52.102958 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 00:25:52.086730 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:25:52.086806 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 12 00:25:52.093190 systemd[1]: Starting dracut-cmdline.service... 
Jul 12 00:25:52.122456 systemd-modules-load[310]: Inserted module 'br_netfilter' Jul 12 00:25:52.126937 kernel: Bridge firewalling registered Jul 12 00:25:52.138431 dracut-cmdline[327]: dracut-dracut-053 Jul 12 00:25:52.149844 kernel: SCSI subsystem initialized Jul 12 00:25:52.154037 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65 Jul 12 00:25:52.182951 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 12 00:25:52.183019 kernel: device-mapper: uevent: version 1.0.3 Jul 12 00:25:52.188847 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 12 00:25:52.193513 systemd-modules-load[310]: Inserted module 'dm_multipath' Jul 12 00:25:52.196528 systemd[1]: Finished systemd-modules-load.service. Jul 12 00:25:52.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.210158 kernel: audit: type=1130 audit(1752279952.196:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.208446 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:25:52.226340 systemd[1]: Finished systemd-sysctl.service. Jul 12 00:25:52.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.239862 kernel: audit: type=1130 audit(1752279952.229:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.324857 kernel: Loading iSCSI transport class v2.0-870. Jul 12 00:25:52.345853 kernel: iscsi: registered transport (tcp) Jul 12 00:25:52.373862 kernel: iscsi: registered transport (qla4xxx) Jul 12 00:25:52.373933 kernel: QLogic iSCSI HBA Driver Jul 12 00:25:52.528614 systemd-resolved[311]: Defaulting to hostname 'linux'. Jul 12 00:25:52.531201 kernel: random: crng init done Jul 12 00:25:52.532099 systemd[1]: Started systemd-resolved.service. Jul 12 00:25:52.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.544460 kernel: audit: type=1130 audit(1752279952.532:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.535242 systemd[1]: Reached target nss-lookup.target. Jul 12 00:25:52.563352 systemd[1]: Finished dracut-cmdline.service. 
Jul 12 00:25:52.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.568032 systemd[1]: Starting dracut-pre-udev.service... Jul 12 00:25:52.575359 kernel: audit: type=1130 audit(1752279952.564:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:52.634858 kernel: raid6: neonx8 gen() 6370 MB/s Jul 12 00:25:52.652846 kernel: raid6: neonx8 xor() 4748 MB/s Jul 12 00:25:52.670844 kernel: raid6: neonx4 gen() 6492 MB/s Jul 12 00:25:52.688845 kernel: raid6: neonx4 xor() 4929 MB/s Jul 12 00:25:52.706844 kernel: raid6: neonx2 gen() 5759 MB/s Jul 12 00:25:52.724843 kernel: raid6: neonx2 xor() 4524 MB/s Jul 12 00:25:52.742844 kernel: raid6: neonx1 gen() 4450 MB/s Jul 12 00:25:52.760845 kernel: raid6: neonx1 xor() 3680 MB/s Jul 12 00:25:52.778845 kernel: raid6: int64x8 gen() 3414 MB/s Jul 12 00:25:52.796845 kernel: raid6: int64x8 xor() 2085 MB/s Jul 12 00:25:52.814845 kernel: raid6: int64x4 gen() 3788 MB/s Jul 12 00:25:52.832845 kernel: raid6: int64x4 xor() 2190 MB/s Jul 12 00:25:52.850844 kernel: raid6: int64x2 gen() 3591 MB/s Jul 12 00:25:52.868844 kernel: raid6: int64x2 xor() 1948 MB/s Jul 12 00:25:52.886844 kernel: raid6: int64x1 gen() 2761 MB/s Jul 12 00:25:52.906339 kernel: raid6: int64x1 xor() 1450 MB/s Jul 12 00:25:52.906376 kernel: raid6: using algorithm neonx4 gen() 6492 MB/s Jul 12 00:25:52.906401 kernel: raid6: .... xor() 4929 MB/s, rmw enabled Jul 12 00:25:52.908149 kernel: raid6: using neon recovery algorithm Jul 12 00:25:52.928450 kernel: xor: measuring software checksum speed Jul 12 00:25:52.928511 kernel: 8regs : 9006 MB/sec Jul 12 00:25:52.930364 kernel: 32regs : 11100 MB/sec Jul 12 00:25:52.932341 kernel: arm64_neon : 9436 MB/sec Jul 12 00:25:52.932371 kernel: xor: using function: 32regs (11100 MB/sec) Jul 12 00:25:53.030857 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 12 00:25:53.048194 systemd[1]: Finished dracut-pre-udev.service. Jul 12 00:25:53.063633 kernel: audit: type=1130 audit(1752279953.046:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:53.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:53.053000 audit: BPF prog-id=7 op=LOAD Jul 12 00:25:53.060000 audit: BPF prog-id=8 op=LOAD Jul 12 00:25:53.064297 systemd[1]: Starting systemd-udevd.service... Jul 12 00:25:53.093354 systemd-udevd[510]: Using default interface naming scheme 'v252'. Jul 12 00:25:53.104103 systemd[1]: Started systemd-udevd.service. Jul 12 00:25:53.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:53.109700 systemd[1]: Starting dracut-pre-trigger.service... Jul 12 00:25:53.141607 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation Jul 12 00:25:53.200795 systemd[1]: Finished dracut-pre-trigger.service. 
Jul 12 00:25:53.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:53.202468 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:25:53.300666 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:25:53.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:53.427785 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 12 00:25:53.427870 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 12 00:25:53.437673 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 12 00:25:53.437754 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 12 00:25:53.454841 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 12 00:25:53.455105 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 12 00:25:53.455307 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 12 00:25:53.455502 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:92:65:49:fa:ab Jul 12 00:25:53.460239 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:25:53.460288 kernel: GPT:9289727 != 16777215 Jul 12 00:25:53.460312 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 00:25:53.462436 kernel: GPT:9289727 != 16777215 Jul 12 00:25:53.463742 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:25:53.467181 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:25:53.470529 (udev-worker)[574]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:25:53.543858 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (556) Jul 12 00:25:53.592597 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 12 00:25:53.640246 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 12 00:25:53.644889 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 12 00:25:53.661698 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 12 00:25:53.685563 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 12 00:25:53.699335 systemd[1]: Starting disk-uuid.service... Jul 12 00:25:53.717848 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:25:53.717999 disk-uuid[670]: Primary Header is updated. Jul 12 00:25:53.717999 disk-uuid[670]: Secondary Entries is updated. Jul 12 00:25:53.717999 disk-uuid[670]: Secondary Header is updated. Jul 12 00:25:54.752572 disk-uuid[671]: The operation has completed successfully. Jul 12 00:25:54.754925 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:25:54.923777 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:25:54.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:54.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:54.924014 systemd[1]: Finished disk-uuid.service. Jul 12 00:25:54.940362 systemd[1]: Starting verity-setup.service... 
Jul 12 00:25:54.977850 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:25:55.075857 systemd[1]: Found device dev-mapper-usr.device. Jul 12 00:25:55.081152 systemd[1]: Mounting sysusr-usr.mount... Jul 12 00:25:55.085501 systemd[1]: Finished verity-setup.service. Jul 12 00:25:55.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:55.179846 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 12 00:25:55.180395 systemd[1]: Mounted sysusr-usr.mount. Jul 12 00:25:55.183831 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 12 00:25:55.191133 systemd[1]: Starting ignition-setup.service... Jul 12 00:25:55.196167 systemd[1]: Starting parse-ip-for-networkd.service... Jul 12 00:25:55.230295 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:25:55.230364 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:25:55.230388 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:25:55.278857 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:25:55.297467 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 12 00:25:55.318260 systemd[1]: Finished ignition-setup.service. Jul 12 00:25:55.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:55.321756 systemd[1]: Starting ignition-fetch-offline.service... Jul 12 00:25:55.337801 systemd[1]: Finished parse-ip-for-networkd.service. Jul 12 00:25:55.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:55.341000 audit: BPF prog-id=9 op=LOAD Jul 12 00:25:55.344147 systemd[1]: Starting systemd-networkd.service... Jul 12 00:25:55.391137 systemd-networkd[1184]: lo: Link UP Jul 12 00:25:55.391161 systemd-networkd[1184]: lo: Gained carrier Jul 12 00:25:55.392192 systemd-networkd[1184]: Enumeration completed Jul 12 00:25:55.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:55.392335 systemd[1]: Started systemd-networkd.service. Jul 12 00:25:55.392783 systemd-networkd[1184]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:25:55.400787 systemd[1]: Reached target network.target. Jul 12 00:25:55.403547 systemd-networkd[1184]: eth0: Link UP Jul 12 00:25:55.403555 systemd-networkd[1184]: eth0: Gained carrier Jul 12 00:25:55.420581 systemd[1]: Starting iscsiuio.service... Jul 12 00:25:55.428049 systemd-networkd[1184]: eth0: DHCPv4 address 172.31.16.163/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 12 00:25:55.441129 systemd[1]: Started iscsiuio.service. Jul 12 00:25:55.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:55.447291 systemd[1]: Starting iscsid.service... 
Jul 12 00:25:55.457589 iscsid[1189]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:25:55.457589 iscsid[1189]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 12 00:25:55.457589 iscsid[1189]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 12 00:25:55.457589 iscsid[1189]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 12 00:25:55.457589 iscsid[1189]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:25:55.479174 iscsid[1189]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 12 00:25:55.488108 systemd[1]: Started iscsid.service. Jul 12 00:25:55.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:55.495140 systemd[1]: Starting dracut-initqueue.service... Jul 12 00:25:55.521291 systemd[1]: Finished dracut-initqueue.service. Jul 12 00:25:55.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:55.528690 systemd[1]: Reached target remote-fs-pre.target. Jul 12 00:25:55.532400 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:25:55.535884 systemd[1]: Reached target remote-fs.target. Jul 12 00:25:55.540543 systemd[1]: Starting dracut-pre-mount.service... Jul 12 00:25:55.556622 systemd[1]: Finished dracut-pre-mount.service. Jul 12 00:25:55.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.147316 ignition[1180]: Ignition 2.14.0 Jul 12 00:25:56.147344 ignition[1180]: Stage: fetch-offline Jul 12 00:25:56.147863 ignition[1180]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:25:56.150677 ignition[1180]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:25:56.178549 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:25:56.181336 ignition[1180]: Ignition finished successfully Jul 12 00:25:56.183664 systemd[1]: Finished ignition-fetch-offline.service. Jul 12 00:25:56.206975 kernel: kauditd_printk_skb: 16 callbacks suppressed Jul 12 00:25:56.207016 kernel: audit: type=1130 audit(1752279956.188:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.191781 systemd[1]: Starting ignition-fetch.service... 
Jul 12 00:25:56.207702 ignition[1208]: Ignition 2.14.0 Jul 12 00:25:56.208061 ignition[1208]: Stage: fetch Jul 12 00:25:56.209057 ignition[1208]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:25:56.209121 ignition[1208]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:25:56.229508 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:25:56.232284 ignition[1208]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:25:56.258340 ignition[1208]: INFO : PUT result: OK Jul 12 00:25:56.262313 ignition[1208]: DEBUG : parsed url from cmdline: "" Jul 12 00:25:56.262313 ignition[1208]: INFO : no config URL provided Jul 12 00:25:56.262313 ignition[1208]: INFO : reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:25:56.268671 ignition[1208]: INFO : no config at "/usr/lib/ignition/user.ign" Jul 12 00:25:56.268671 ignition[1208]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:25:56.268671 ignition[1208]: INFO : PUT result: OK Jul 12 00:25:56.275226 ignition[1208]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 12 00:25:56.275226 ignition[1208]: INFO : GET result: OK Jul 12 00:25:56.282669 ignition[1208]: DEBUG : parsing config with SHA512: b9353595db66e8a3d98cab68c7e2e05d43e7dd62da671d50e0ec14dd6afdd3e8c8d2ad71c6fbc8f7dc7f4999e7be2da70037d7dfc6859862871612139155788c Jul 12 00:25:56.280367 ignition[1208]: fetch: fetch complete Jul 12 00:25:56.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.279517 unknown[1208]: fetched base config from "system" Jul 12 00:25:56.280379 ignition[1208]: fetch: fetch passed Jul 12 00:25:56.279535 unknown[1208]: fetched base config from "system" Jul 12 00:25:56.280461 ignition[1208]: Ignition finished successfully Jul 12 00:25:56.279549 unknown[1208]: fetched user config from "aws" Jul 12 00:25:56.307403 kernel: audit: type=1130 audit(1752279956.286:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.283238 systemd[1]: Finished ignition-fetch.service. Jul 12 00:25:56.307833 systemd[1]: Starting ignition-kargs.service... Jul 12 00:25:56.320835 ignition[1214]: Ignition 2.14.0 Jul 12 00:25:56.320855 ignition[1214]: Stage: kargs Jul 12 00:25:56.321144 ignition[1214]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:25:56.321200 ignition[1214]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:25:56.339268 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:25:56.342510 ignition[1214]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:25:56.345144 ignition[1214]: INFO : PUT result: OK Jul 12 00:25:56.350022 ignition[1214]: kargs: kargs passed Jul 12 00:25:56.350201 ignition[1214]: Ignition finished successfully Jul 12 00:25:56.352838 systemd[1]: Finished ignition-kargs.service. 
Jul 12 00:25:56.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.357729 systemd[1]: Starting ignition-disks.service... Jul 12 00:25:56.368865 kernel: audit: type=1130 audit(1752279956.355:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.380865 ignition[1220]: Ignition 2.14.0 Jul 12 00:25:56.382586 ignition[1220]: Stage: disks Jul 12 00:25:56.384216 ignition[1220]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:25:56.386651 ignition[1220]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:25:56.397371 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:25:56.399915 ignition[1220]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:25:56.402633 ignition[1220]: INFO : PUT result: OK Jul 12 00:25:56.407180 ignition[1220]: disks: disks passed Jul 12 00:25:56.407284 ignition[1220]: Ignition finished successfully Jul 12 00:25:56.410782 systemd[1]: Finished ignition-disks.service. Jul 12 00:25:56.416691 systemd[1]: Reached target initrd-root-device.target. Jul 12 00:25:56.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.427518 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:25:56.429431 kernel: audit: type=1130 audit(1752279956.415:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.429349 systemd[1]: Reached target local-fs.target. Jul 12 00:25:56.432583 systemd[1]: Reached target sysinit.target. Jul 12 00:25:56.435667 systemd[1]: Reached target basic.target. Jul 12 00:25:56.441544 systemd[1]: Starting systemd-fsck-root.service... Jul 12 00:25:56.479273 systemd-fsck[1228]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 12 00:25:56.490203 systemd[1]: Finished systemd-fsck-root.service. Jul 12 00:25:56.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.495188 systemd[1]: Mounting sysroot.mount... Jul 12 00:25:56.504393 kernel: audit: type=1130 audit(1752279956.492:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.516841 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 12 00:25:56.518508 systemd[1]: Mounted sysroot.mount. Jul 12 00:25:56.518905 systemd[1]: Reached target initrd-root-fs.target. Jul 12 00:25:56.534138 systemd[1]: Mounting sysroot-usr.mount... Jul 12 00:25:56.536473 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. 
Jul 12 00:25:56.536555 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:25:56.536605 systemd[1]: Reached target ignition-diskful.target. Jul 12 00:25:56.553314 systemd[1]: Mounted sysroot-usr.mount. Jul 12 00:25:56.578780 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:25:56.584340 systemd[1]: Starting initrd-setup-root.service... Jul 12 00:25:56.599939 initrd-setup-root[1250]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:25:56.611869 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1245) Jul 12 00:25:56.618361 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:25:56.618411 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:25:56.621100 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:25:56.623239 initrd-setup-root[1259]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:25:56.631627 initrd-setup-root[1282]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:25:56.639693 initrd-setup-root[1290]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:25:56.652860 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:25:56.663665 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:25:56.847466 systemd[1]: Finished initrd-setup-root.service. Jul 12 00:25:56.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.853211 systemd[1]: Starting ignition-mount.service... Jul 12 00:25:56.866500 kernel: audit: type=1130 audit(1752279956.849:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.867028 systemd[1]: Starting sysroot-boot.service... Jul 12 00:25:56.879038 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 12 00:25:56.879214 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 12 00:25:56.904206 ignition[1311]: INFO : Ignition 2.14.0 Jul 12 00:25:56.904206 ignition[1311]: INFO : Stage: mount Jul 12 00:25:56.908429 ignition[1311]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:25:56.908429 ignition[1311]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:25:56.924916 systemd[1]: Finished sysroot-boot.service. Jul 12 00:25:56.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.935848 kernel: audit: type=1130 audit(1752279956.927:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:56.942760 ignition[1311]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:25:56.945663 ignition[1311]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:25:56.949153 ignition[1311]: INFO : PUT result: OK Jul 12 00:25:56.953392 ignition[1311]: INFO : mount: mount passed Jul 12 00:25:56.955189 ignition[1311]: INFO : Ignition finished successfully Jul 12 00:25:56.957343 systemd[1]: Finished ignition-mount.service. Jul 12 00:25:56.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.961793 systemd[1]: Starting ignition-files.service... Jul 12 00:25:56.971001 kernel: audit: type=1130 audit(1752279956.959:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:56.978502 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:25:57.005860 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1320) Jul 12 00:25:57.012023 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:25:57.012066 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:25:57.012090 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:25:57.027859 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:25:57.033256 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:25:57.051953 ignition[1339]: INFO : Ignition 2.14.0 Jul 12 00:25:57.051953 ignition[1339]: INFO : Stage: files Jul 12 00:25:57.055390 ignition[1339]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:25:57.055390 ignition[1339]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:25:57.072307 ignition[1339]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:25:57.075014 ignition[1339]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:25:57.078496 ignition[1339]: INFO : PUT result: OK Jul 12 00:25:57.082888 ignition[1339]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:25:57.088723 ignition[1339]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:25:57.088723 ignition[1339]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:25:57.139639 ignition[1339]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:25:57.145161 ignition[1339]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:25:57.149976 unknown[1339]: wrote ssh authorized keys file for user: core Jul 12 00:25:57.152423 ignition[1339]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:25:57.156032 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Jul 12 00:25:57.160951 ignition[1339]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:25:57.169591 ignition[1339]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3432173662" Jul 12 00:25:57.172705 ignition[1339]: CRITICAL : op(1): 
[failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3432173662": device or resource busy Jul 12 00:25:57.172705 ignition[1339]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3432173662", trying btrfs: device or resource busy Jul 12 00:25:57.172705 ignition[1339]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3432173662" Jul 12 00:25:57.182854 ignition[1339]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3432173662" Jul 12 00:25:57.186311 ignition[1339]: INFO : op(3): [started] unmounting "/mnt/oem3432173662" Jul 12 00:25:57.186311 ignition[1339]: INFO : op(3): [finished] unmounting "/mnt/oem3432173662" Jul 12 00:25:57.186311 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Jul 12 00:25:57.186311 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 12 00:25:57.186311 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 00:25:57.210153 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:25:57.210153 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:25:57.210153 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:25:57.210153 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:25:57.210153 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Jul 12 00:25:57.210153 ignition[1339]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:25:57.210153 ignition[1339]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1860245916" Jul 12 00:25:57.210153 ignition[1339]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1860245916": device or resource busy Jul 12 00:25:57.210153 ignition[1339]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1860245916", trying btrfs: device or resource busy Jul 12 00:25:57.210153 ignition[1339]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1860245916" Jul 12 00:25:57.210153 ignition[1339]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1860245916" Jul 12 00:25:57.210153 ignition[1339]: INFO : op(6): [started] unmounting "/mnt/oem1860245916" Jul 12 00:25:57.262789 ignition[1339]: INFO : op(6): [finished] unmounting "/mnt/oem1860245916" Jul 12 00:25:57.262789 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Jul 12 00:25:57.262789 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Jul 12 00:25:57.262789 ignition[1339]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:25:57.262789 ignition[1339]: INFO : op(7): [started] 
mounting "/dev/disk/by-label/OEM" at "/mnt/oem1246598538" Jul 12 00:25:57.262789 ignition[1339]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1246598538": device or resource busy Jul 12 00:25:57.262789 ignition[1339]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1246598538", trying btrfs: device or resource busy Jul 12 00:25:57.262789 ignition[1339]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1246598538" Jul 12 00:25:57.262789 ignition[1339]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1246598538" Jul 12 00:25:57.262789 ignition[1339]: INFO : op(9): [started] unmounting "/mnt/oem1246598538" Jul 12 00:25:57.262789 ignition[1339]: INFO : op(9): [finished] unmounting "/mnt/oem1246598538" Jul 12 00:25:57.262789 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Jul 12 00:25:57.262789 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Jul 12 00:25:57.262789 ignition[1339]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:25:57.313110 ignition[1339]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem74976486" Jul 12 00:25:57.313110 ignition[1339]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem74976486": device or resource busy Jul 12 00:25:57.313110 ignition[1339]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem74976486", trying btrfs: device or resource busy Jul 12 00:25:57.313110 ignition[1339]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem74976486" Jul 12 00:25:57.313110 ignition[1339]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem74976486" Jul 12 00:25:57.313110 ignition[1339]: INFO : op(c): [started] unmounting "/mnt/oem74976486" Jul 12 00:25:57.313110 ignition[1339]: INFO : op(c): [finished] unmounting "/mnt/oem74976486" Jul 12 00:25:57.313110 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Jul 12 00:25:57.313110 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:25:57.313110 ignition[1339]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 12 00:25:57.480983 systemd-networkd[1184]: eth0: Gained IPv6LL Jul 12 00:25:57.805196 ignition[1339]: INFO : GET result: OK Jul 12 00:25:58.406844 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(b): [started] processing unit "nvidia.service" Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(b): [finished] processing unit "nvidia.service" Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(d): [started] processing unit "amazon-ssm-agent.service" Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(d): op(e): [started] writing unit 
"amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(d): op(e): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(d): [finished] processing unit "amazon-ssm-agent.service" Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(f): [started] setting preset to enabled for "nvidia.service" Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(f): [finished] setting preset to enabled for "nvidia.service" Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(10): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(10): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(11): [started] setting preset to enabled for "amazon-ssm-agent.service" Jul 12 00:25:58.411806 ignition[1339]: INFO : files: op(11): [finished] setting preset to enabled for "amazon-ssm-agent.service" Jul 12 00:25:58.461886 ignition[1339]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:25:58.461886 ignition[1339]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:25:58.461886 ignition[1339]: INFO : files: files passed Jul 12 00:25:58.461886 ignition[1339]: INFO : Ignition finished successfully Jul 12 00:25:58.482300 kernel: audit: type=1130 audit(1752279958.468:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.467260 systemd[1]: Finished ignition-files.service. Jul 12 00:25:58.492164 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 12 00:25:58.495900 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 12 00:25:58.503386 systemd[1]: Starting ignition-quench.service... Jul 12 00:25:58.517433 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 12 00:25:58.517648 systemd[1]: Finished ignition-quench.service. Jul 12 00:25:58.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.528903 initrd-setup-root-after-ignition[1364]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:25:58.533529 kernel: audit: type=1130 audit(1752279958.521:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.533860 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Jul 12 00:25:58.538097 systemd[1]: Reached target ignition-complete.target. Jul 12 00:25:58.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.545774 systemd[1]: Starting initrd-parse-etc.service... Jul 12 00:25:58.574911 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 00:25:58.576212 systemd[1]: Finished initrd-parse-etc.service. Jul 12 00:25:58.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.579931 systemd[1]: Reached target initrd-fs.target. Jul 12 00:25:58.582202 systemd[1]: Reached target initrd.target. Jul 12 00:25:58.585430 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 12 00:25:58.586947 systemd[1]: Starting dracut-pre-pivot.service... Jul 12 00:25:58.613740 systemd[1]: Finished dracut-pre-pivot.service. Jul 12 00:25:58.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.618642 systemd[1]: Starting initrd-cleanup.service... Jul 12 00:25:58.637799 systemd[1]: Stopped target nss-lookup.target. Jul 12 00:25:58.641585 systemd[1]: Stopped target remote-cryptsetup.target. Jul 12 00:25:58.645538 systemd[1]: Stopped target timers.target. Jul 12 00:25:58.648899 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:25:58.651156 systemd[1]: Stopped dracut-pre-pivot.service. Jul 12 00:25:58.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.655020 systemd[1]: Stopped target initrd.target. Jul 12 00:25:58.658442 systemd[1]: Stopped target basic.target. Jul 12 00:25:58.661678 systemd[1]: Stopped target ignition-complete.target. Jul 12 00:25:58.665278 systemd[1]: Stopped target ignition-diskful.target. Jul 12 00:25:58.668886 systemd[1]: Stopped target initrd-root-device.target. Jul 12 00:25:58.675993 systemd[1]: Stopped target remote-fs.target. Jul 12 00:25:58.679357 systemd[1]: Stopped target remote-fs-pre.target. Jul 12 00:25:58.682958 systemd[1]: Stopped target sysinit.target. Jul 12 00:25:58.688598 systemd[1]: Stopped target local-fs.target. Jul 12 00:25:58.691916 systemd[1]: Stopped target local-fs-pre.target. Jul 12 00:25:58.698788 systemd[1]: Stopped target swap.target. Jul 12 00:25:58.701857 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:25:58.704054 systemd[1]: Stopped dracut-pre-mount.service. Jul 12 00:25:58.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.708722 systemd[1]: Stopped target cryptsetup.target. Jul 12 00:25:58.712078 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jul 12 00:25:58.714173 systemd[1]: Stopped dracut-initqueue.service. Jul 12 00:25:58.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.717850 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:25:58.720324 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 12 00:25:58.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.724453 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:25:58.726532 systemd[1]: Stopped ignition-files.service. Jul 12 00:25:58.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.740008 iscsid[1189]: iscsid shutting down. Jul 12 00:25:58.731339 systemd[1]: Stopping ignition-mount.service... Jul 12 00:25:58.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.757183 ignition[1377]: INFO : Ignition 2.14.0 Jul 12 00:25:58.757183 ignition[1377]: INFO : Stage: umount Jul 12 00:25:58.757183 ignition[1377]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:25:58.757183 ignition[1377]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:25:58.740154 systemd[1]: Stopping iscsid.service... Jul 12 00:25:58.741624 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:25:58.743176 systemd[1]: Stopped kmod-static-nodes.service. Jul 12 00:25:58.754472 systemd[1]: Stopping sysroot-boot.service... Jul 12 00:25:58.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.779182 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:25:58.779498 systemd[1]: Stopped systemd-udev-trigger.service. Jul 12 00:25:58.785129 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:25:58.785376 systemd[1]: Stopped dracut-pre-trigger.service. Jul 12 00:25:58.791185 ignition[1377]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:25:58.791185 ignition[1377]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:25:58.800734 ignition[1377]: INFO : PUT result: OK Jul 12 00:25:58.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.804648 systemd[1]: iscsid.service: Deactivated successfully. 
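The "PUT http://169.254.169.254/latest/api/token: attempt #1 ... PUT result: OK" entries in each Ignition stage are IMDSv2 session-token requests against the EC2 instance metadata service. A minimal standalone sketch of the same exchange (only meaningful from inside an EC2 instance; not Ignition's own code):

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT a token request with a TTL header -- the call the log records
    # as "PUT .../latest/api/token".
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # Step 2: present the token on subsequent metadata reads.
    req = urllib.request.Request(
        f"{IMDS}/latest/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(req, timeout=2).read().decode())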
Jul 12 00:25:58.817424 ignition[1377]: INFO : umount: umount passed Jul 12 00:25:58.817424 ignition[1377]: INFO : Ignition finished successfully Jul 12 00:25:58.804917 systemd[1]: Stopped iscsid.service. Jul 12 00:25:58.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.811444 systemd[1]: Stopping iscsiuio.service... Jul 12 00:25:58.813476 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 12 00:25:58.813976 systemd[1]: Stopped iscsiuio.service. Jul 12 00:25:58.832173 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:25:58.832384 systemd[1]: Finished initrd-cleanup.service. Jul 12 00:25:58.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.839049 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:25:58.839387 systemd[1]: Stopped ignition-mount.service. Jul 12 00:25:58.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.846316 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:25:58.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.846417 systemd[1]: Stopped ignition-disks.service. Jul 12 00:25:58.851157 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:25:58.851234 systemd[1]: Stopped ignition-kargs.service. Jul 12 00:25:58.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.858667 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 12 00:25:58.858776 systemd[1]: Stopped ignition-fetch.service. Jul 12 00:25:58.860620 systemd[1]: Stopped target network.target. Jul 12 00:25:58.862290 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:25:58.862371 systemd[1]: Stopped ignition-fetch-offline.service. Jul 12 00:25:58.862635 systemd[1]: Stopped target paths.target. Jul 12 00:25:58.867490 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:25:58.874832 systemd[1]: Stopped systemd-ask-password-console.path. Jul 12 00:25:58.876785 systemd[1]: Stopped target slices.target. Jul 12 00:25:58.896378 systemd[1]: Stopped target sockets.target. 
Jul 12 00:25:58.899503 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:25:58.899594 systemd[1]: Closed iscsid.socket. Jul 12 00:25:58.904005 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:25:58.904083 systemd[1]: Closed iscsiuio.socket. Jul 12 00:25:58.908382 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:25:58.908480 systemd[1]: Stopped ignition-setup.service. Jul 12 00:25:58.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.913634 systemd[1]: Stopping systemd-networkd.service... Jul 12 00:25:58.917152 systemd[1]: Stopping systemd-resolved.service... Jul 12 00:25:58.920878 systemd-networkd[1184]: eth0: DHCPv6 lease lost Jul 12 00:25:58.922267 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:25:58.926611 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:25:58.928905 systemd[1]: Stopped systemd-networkd.service. Jul 12 00:25:58.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.934543 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:25:58.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.934741 systemd[1]: Stopped systemd-resolved.service. Jul 12 00:25:58.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.940000 audit: BPF prog-id=9 op=UNLOAD Jul 12 00:25:58.940000 audit: BPF prog-id=6 op=UNLOAD Jul 12 00:25:58.938256 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:25:58.938576 systemd[1]: Stopped sysroot-boot.service. Jul 12 00:25:58.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.941918 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:25:58.941986 systemd[1]: Closed systemd-networkd.socket. Jul 12 00:25:58.944700 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:25:58.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.945212 systemd[1]: Stopped initrd-setup-root.service. Jul 12 00:25:58.947937 systemd[1]: Stopping network-cleanup.service... Jul 12 00:25:58.956423 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:25:58.956540 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 12 00:25:58.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.958614 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:25:58.958695 systemd[1]: Stopped systemd-sysctl.service. 
Jul 12 00:25:58.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.970796 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:25:58.970947 systemd[1]: Stopped systemd-modules-load.service. Jul 12 00:25:58.975193 systemd[1]: Stopping systemd-udevd.service... Jul 12 00:25:58.992260 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 12 00:25:58.997070 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:25:58.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:59.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:58.997411 systemd[1]: Stopped systemd-udevd.service. Jul 12 00:25:59.001630 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:25:59.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:59.001852 systemd[1]: Stopped network-cleanup.service. Jul 12 00:25:59.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:59.005484 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:25:59.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:59.005565 systemd[1]: Closed systemd-udevd-control.socket. Jul 12 00:25:59.008288 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:25:59.008356 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 12 00:25:59.010145 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:25:59.010224 systemd[1]: Stopped dracut-pre-udev.service. Jul 12 00:25:59.012378 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:25:59.012457 systemd[1]: Stopped dracut-cmdline.service. Jul 12 00:25:59.016063 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:25:59.016144 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 12 00:25:59.027013 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 12 00:25:59.032407 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:25:59.032911 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 12 00:25:59.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:59.065914 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:25:59.068439 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
Jul 12 00:25:59.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:59.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:59.072529 systemd[1]: Reached target initrd-switch-root.target. Jul 12 00:25:59.077394 systemd[1]: Starting initrd-switch-root.service... Jul 12 00:25:59.093118 systemd[1]: Switching root. Jul 12 00:25:59.122943 systemd-journald[309]: Journal stopped Jul 12 00:26:04.978340 systemd-journald[309]: Received SIGTERM from PID 1 (systemd). Jul 12 00:26:04.978473 kernel: SELinux: Class mctp_socket not defined in policy. Jul 12 00:26:04.978515 kernel: SELinux: Class anon_inode not defined in policy. Jul 12 00:26:04.978562 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 12 00:26:04.978595 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:26:04.978631 kernel: SELinux: policy capability open_perms=1 Jul 12 00:26:04.978661 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:26:04.978709 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:26:04.978744 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:26:04.978774 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:26:04.978805 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:26:04.979096 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:26:04.979132 systemd[1]: Successfully loaded SELinux policy in 121.377ms. Jul 12 00:26:04.979179 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.666ms. Jul 12 00:26:04.979218 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 12 00:26:04.979250 systemd[1]: Detected virtualization amazon. Jul 12 00:26:04.979283 systemd[1]: Detected architecture arm64. Jul 12 00:26:04.979313 systemd[1]: Detected first boot. Jul 12 00:26:04.979344 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:26:04.979375 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 12 00:26:04.979403 systemd[1]: Populated /etc with preset unit settings. Jul 12 00:26:04.979441 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:26:04.979475 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:26:04.979508 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 12 00:26:04.979539 kernel: kauditd_printk_skb: 56 callbacks suppressed Jul 12 00:26:04.979568 kernel: audit: type=1334 audit(1752279964.520:86): prog-id=12 op=LOAD Jul 12 00:26:04.979599 kernel: audit: type=1334 audit(1752279964.521:87): prog-id=3 op=UNLOAD Jul 12 00:26:04.979628 kernel: audit: type=1334 audit(1752279964.523:88): prog-id=13 op=LOAD Jul 12 00:26:04.979661 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 00:26:04.979698 kernel: audit: type=1334 audit(1752279964.525:89): prog-id=14 op=LOAD Jul 12 00:26:04.979727 systemd[1]: Stopped initrd-switch-root.service. Jul 12 00:26:04.979757 kernel: audit: type=1334 audit(1752279964.525:90): prog-id=4 op=UNLOAD Jul 12 00:26:04.979786 kernel: audit: type=1334 audit(1752279964.525:91): prog-id=5 op=UNLOAD Jul 12 00:26:04.979883 kernel: audit: type=1131 audit(1752279964.525:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.979918 kernel: audit: type=1334 audit(1752279964.538:93): prog-id=12 op=UNLOAD Jul 12 00:26:04.979949 kernel: audit: type=1130 audit(1752279964.554:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.979982 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 00:26:04.980013 kernel: audit: type=1131 audit(1752279964.554:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.980042 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 12 00:26:04.980071 systemd[1]: Created slice system-addon\x2drun.slice. Jul 12 00:26:04.980104 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 12 00:26:04.980135 systemd[1]: Created slice system-getty.slice. Jul 12 00:26:04.980164 systemd[1]: Created slice system-modprobe.slice. Jul 12 00:26:04.980201 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 12 00:26:04.980233 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 12 00:26:04.980264 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 12 00:26:04.980296 systemd[1]: Created slice user.slice. Jul 12 00:26:04.980324 systemd[1]: Started systemd-ask-password-console.path. Jul 12 00:26:04.980356 systemd[1]: Started systemd-ask-password-wall.path. Jul 12 00:26:04.980390 systemd[1]: Set up automount boot.automount. Jul 12 00:26:04.980419 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 12 00:26:04.980451 systemd[1]: Stopped target initrd-switch-root.target. Jul 12 00:26:04.980484 systemd[1]: Stopped target initrd-fs.target. Jul 12 00:26:04.980517 systemd[1]: Stopped target initrd-root-fs.target. Jul 12 00:26:04.980547 systemd[1]: Reached target integritysetup.target. Jul 12 00:26:04.980578 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:26:04.980609 systemd[1]: Reached target remote-fs.target. Jul 12 00:26:04.980637 systemd[1]: Reached target slices.target. Jul 12 00:26:04.980666 systemd[1]: Reached target swap.target. Jul 12 00:26:04.980694 systemd[1]: Reached target torcx.target. Jul 12 00:26:04.980725 systemd[1]: Reached target veritysetup.target. 
Jul 12 00:26:04.980758 systemd[1]: Listening on systemd-coredump.socket. Jul 12 00:26:04.980789 systemd[1]: Listening on systemd-initctl.socket. Jul 12 00:26:04.980839 systemd[1]: Listening on systemd-networkd.socket. Jul 12 00:26:04.980882 systemd[1]: Listening on systemd-udevd-control.socket. Jul 12 00:26:04.986897 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 12 00:26:04.986938 systemd[1]: Listening on systemd-userdbd.socket. Jul 12 00:26:04.986970 systemd[1]: Mounting dev-hugepages.mount... Jul 12 00:26:04.987000 systemd[1]: Mounting dev-mqueue.mount... Jul 12 00:26:04.987034 systemd[1]: Mounting media.mount... Jul 12 00:26:04.987066 systemd[1]: Mounting sys-kernel-debug.mount... Jul 12 00:26:04.987103 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 12 00:26:04.987134 systemd[1]: Mounting tmp.mount... Jul 12 00:26:04.987164 systemd[1]: Starting flatcar-tmpfiles.service... Jul 12 00:26:04.987193 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:26:04.987223 systemd[1]: Starting kmod-static-nodes.service... Jul 12 00:26:04.987254 systemd[1]: Starting modprobe@configfs.service... Jul 12 00:26:04.987286 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:26:04.987315 systemd[1]: Starting modprobe@drm.service... Jul 12 00:26:04.987383 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:26:04.987423 systemd[1]: Starting modprobe@fuse.service... Jul 12 00:26:04.987453 systemd[1]: Starting modprobe@loop.service... Jul 12 00:26:04.987483 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:26:04.987512 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:26:04.987542 systemd[1]: Stopped systemd-fsck-root.service. Jul 12 00:26:04.987574 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 00:26:04.987603 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 00:26:04.987633 systemd[1]: Stopped systemd-journald.service. Jul 12 00:26:04.987662 systemd[1]: Starting systemd-journald.service... Jul 12 00:26:04.987694 kernel: loop: module loaded Jul 12 00:26:04.987723 systemd[1]: Starting systemd-modules-load.service... Jul 12 00:26:04.987753 systemd[1]: Starting systemd-network-generator.service... Jul 12 00:26:04.987782 systemd[1]: Starting systemd-remount-fs.service... Jul 12 00:26:04.987826 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:26:04.987864 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:26:04.987893 systemd[1]: Stopped verity-setup.service. Jul 12 00:26:04.987924 systemd[1]: Mounted dev-hugepages.mount. Jul 12 00:26:04.987955 systemd[1]: Mounted dev-mqueue.mount. Jul 12 00:26:04.987988 systemd[1]: Mounted media.mount. Jul 12 00:26:04.988017 systemd[1]: Mounted sys-kernel-debug.mount. Jul 12 00:26:04.988046 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 12 00:26:04.988075 systemd[1]: Mounted tmp.mount. Jul 12 00:26:04.988103 systemd[1]: Finished kmod-static-nodes.service. Jul 12 00:26:04.988131 kernel: fuse: init (API version 7.34) Jul 12 00:26:04.988159 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:26:04.988190 systemd[1]: Finished modprobe@configfs.service. Jul 12 00:26:04.988220 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:26:04.988255 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:26:04.988285 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 12 00:26:04.988315 systemd[1]: Finished modprobe@drm.service. Jul 12 00:26:04.988344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:26:04.988373 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:26:04.988405 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:26:04.988434 systemd[1]: Finished modprobe@fuse.service. Jul 12 00:26:04.988463 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:26:04.988491 systemd[1]: Finished modprobe@loop.service. Jul 12 00:26:04.988520 systemd[1]: Finished systemd-modules-load.service. Jul 12 00:26:04.988550 systemd[1]: Finished systemd-network-generator.service. Jul 12 00:26:04.988584 systemd-journald[1490]: Journal started Jul 12 00:26:04.988681 systemd-journald[1490]: Runtime Journal (/run/log/journal/ec2148a99b2e40a8839e2944398c41b3) is 8.0M, max 75.4M, 67.4M free. Jul 12 00:25:59.916000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:26:00.085000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:26:00.085000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:26:00.085000 audit: BPF prog-id=10 op=LOAD Jul 12 00:26:00.085000 audit: BPF prog-id=10 op=UNLOAD Jul 12 00:26:00.085000 audit: BPF prog-id=11 op=LOAD Jul 12 00:26:00.085000 audit: BPF prog-id=11 op=UNLOAD Jul 12 00:26:00.325000 audit[1411]: AVC avc: denied { associate } for pid=1411 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 12 00:26:00.325000 audit[1411]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1394 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:00.325000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 12 00:26:00.329000 audit[1411]: AVC avc: denied { associate } for pid=1411 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 12 00:26:00.329000 audit[1411]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145975 a2=1ed a3=0 items=2 ppid=1394 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:00.329000 audit: CWD cwd="/" Jul 12 00:26:00.329000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:26:00.329000 audit: PATH item=1 name=(null) inode=3 
dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:26:00.329000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 12 00:26:04.520000 audit: BPF prog-id=12 op=LOAD Jul 12 00:26:04.521000 audit: BPF prog-id=3 op=UNLOAD Jul 12 00:26:04.523000 audit: BPF prog-id=13 op=LOAD Jul 12 00:26:04.525000 audit: BPF prog-id=14 op=LOAD Jul 12 00:26:04.525000 audit: BPF prog-id=4 op=UNLOAD Jul 12 00:26:04.525000 audit: BPF prog-id=5 op=UNLOAD Jul 12 00:26:04.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.538000 audit: BPF prog-id=12 op=UNLOAD Jul 12 00:26:04.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.833000 audit: BPF prog-id=15 op=LOAD Jul 12 00:26:04.833000 audit: BPF prog-id=16 op=LOAD Jul 12 00:26:04.833000 audit: BPF prog-id=17 op=LOAD Jul 12 00:26:04.833000 audit: BPF prog-id=13 op=UNLOAD Jul 12 00:26:04.833000 audit: BPF prog-id=14 op=UNLOAD Jul 12 00:26:04.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:26:04.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.972000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 12 00:26:04.972000 audit[1490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc4af8090 a2=4000 a3=1 items=0 ppid=1 pid=1490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:04.972000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 12 00:26:04.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.998897 systemd[1]: Started systemd-journald.service. 
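The torcx-generator audit records above end with a PROCTITLE field: the process command line, hex-encoded because it contains NUL bytes separating the argv entries (and truncated by auditd). A small decoding sketch; the long hex value in the record is not repeated here, and the toy input below is hypothetical:

    def decode_proctitle(hexvalue):
        """Undo audit's PROCTITLE encoding: hex string, argv entries separated by NUL."""
        return [part.decode() for part in bytes.fromhex(hexvalue).split(b"\x00")]

    # Toy input for illustration: "/bin/sh -c" hex-encoded with a NUL separator.
    print(decode_proctitle("2F62696E2F7368002D63"))   # ['/bin/sh', '-c']

    # Feeding it the proctitle= value from the torcx-generator records above
    # recovers the generator's argv:
    #   ['/usr/lib/systemd/system-generators/torcx-generator',
    #    '/run/systemd/generator',
    #    '/run/systemd/generator.early',
    #    '/run/systemd/generator.la']      (last entry truncated by auditd)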
Jul 12 00:26:04.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.518197 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:26:00.306931 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:26:05.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:04.518218 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Jul 12 00:26:00.308216 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 12 00:26:04.528575 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:26:00.308267 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 12 00:26:04.999467 systemd[1]: Finished systemd-remount-fs.service. Jul 12 00:26:00.308335 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 12 00:26:05.002138 systemd[1]: Reached target network-pre.target. Jul 12 00:26:00.308361 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 12 00:26:00.308429 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 12 00:26:00.308461 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 12 00:26:00.308924 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 12 00:26:00.309012 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 12 00:26:05.006439 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 12 00:26:00.309048 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 12 00:26:05.010731 systemd[1]: Mounting sys-kernel-config.mount... 
Jul 12 00:26:00.318476 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 12 00:26:05.019010 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:26:00.318565 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 12 00:26:00.318613 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Jul 12 00:26:00.318652 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 12 00:26:00.318702 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Jul 12 00:26:00.318762 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 12 00:26:03.663980 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:03Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:26:03.664511 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:03Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:26:03.664734 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:03Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:26:03.665191 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:03Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:26:05.022267 systemd[1]: Starting systemd-hwdb-update.service... 
Jul 12 00:26:03.665299 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:03Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 12 00:26:03.665434 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2025-07-12T00:26:03Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 12 00:26:05.026282 systemd[1]: Starting systemd-journal-flush.service... Jul 12 00:26:05.031023 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:26:05.033757 systemd[1]: Starting systemd-random-seed.service... Jul 12 00:26:05.035635 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:26:05.037925 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:26:05.043468 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 12 00:26:05.045637 systemd[1]: Mounted sys-kernel-config.mount. Jul 12 00:26:05.076957 systemd-journald[1490]: Time spent on flushing to /var/log/journal/ec2148a99b2e40a8839e2944398c41b3 is 75.318ms for 1107 entries. Jul 12 00:26:05.076957 systemd-journald[1490]: System Journal (/var/log/journal/ec2148a99b2e40a8839e2944398c41b3) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:26:05.177753 systemd-journald[1490]: Received client request to flush runtime journal. Jul 12 00:26:05.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:05.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:05.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:05.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:05.088668 systemd[1]: Finished systemd-random-seed.service. Jul 12 00:26:05.184111 udevadm[1527]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 12 00:26:05.090850 systemd[1]: Reached target first-boot-complete.target. Jul 12 00:26:05.124877 systemd[1]: Finished systemd-sysctl.service. Jul 12 00:26:05.156716 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:26:05.161096 systemd[1]: Starting systemd-udev-settle.service... Jul 12 00:26:05.179403 systemd[1]: Finished systemd-journal-flush.service. Jul 12 00:26:05.187421 systemd[1]: Finished flatcar-tmpfiles.service. Jul 12 00:26:05.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:05.191619 systemd[1]: Starting systemd-sysusers.service... 
Jul 12 00:26:05.395799 systemd[1]: Finished systemd-sysusers.service. Jul 12 00:26:05.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:05.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:05.918000 audit: BPF prog-id=18 op=LOAD Jul 12 00:26:05.918000 audit: BPF prog-id=19 op=LOAD Jul 12 00:26:05.918000 audit: BPF prog-id=7 op=UNLOAD Jul 12 00:26:05.918000 audit: BPF prog-id=8 op=UNLOAD Jul 12 00:26:05.912521 systemd[1]: Finished systemd-hwdb-update.service. Jul 12 00:26:05.921204 systemd[1]: Starting systemd-udevd.service... Jul 12 00:26:05.959254 systemd-udevd[1531]: Using default interface naming scheme 'v252'. Jul 12 00:26:06.031616 systemd[1]: Started systemd-udevd.service. Jul 12 00:26:06.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:06.033000 audit: BPF prog-id=20 op=LOAD Jul 12 00:26:06.036445 systemd[1]: Starting systemd-networkd.service... Jul 12 00:26:06.052000 audit: BPF prog-id=21 op=LOAD Jul 12 00:26:06.053000 audit: BPF prog-id=22 op=LOAD Jul 12 00:26:06.053000 audit: BPF prog-id=23 op=LOAD Jul 12 00:26:06.056282 systemd[1]: Starting systemd-userdbd.service... Jul 12 00:26:06.115286 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 12 00:26:06.136345 systemd[1]: Started systemd-userdbd.service. Jul 12 00:26:06.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:06.168140 (udev-worker)[1533]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:26:06.318958 systemd-networkd[1534]: lo: Link UP Jul 12 00:26:06.318981 systemd-networkd[1534]: lo: Gained carrier Jul 12 00:26:06.319913 systemd-networkd[1534]: Enumeration completed Jul 12 00:26:06.320077 systemd[1]: Started systemd-networkd.service. Jul 12 00:26:06.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:06.325098 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 12 00:26:06.329132 systemd-networkd[1534]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:26:06.342860 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:26:06.342244 systemd-networkd[1534]: eth0: Link UP Jul 12 00:26:06.342589 systemd-networkd[1534]: eth0: Gained carrier Jul 12 00:26:06.361095 systemd-networkd[1534]: eth0: DHCPv4 address 172.31.16.163/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 12 00:26:06.496540 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 12 00:26:06.503442 systemd[1]: Finished systemd-udev-settle.service. 
Jul 12 00:26:06.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:06.508016 systemd[1]: Starting lvm2-activation-early.service... Jul 12 00:26:06.540706 lvm[1650]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:26:06.578357 systemd[1]: Finished lvm2-activation-early.service. Jul 12 00:26:06.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:06.580623 systemd[1]: Reached target cryptsetup.target. Jul 12 00:26:06.584627 systemd[1]: Starting lvm2-activation.service... Jul 12 00:26:06.591798 lvm[1651]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:26:06.629465 systemd[1]: Finished lvm2-activation.service. Jul 12 00:26:06.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:06.631701 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:26:06.633656 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:26:06.633711 systemd[1]: Reached target local-fs.target. Jul 12 00:26:06.635799 systemd[1]: Reached target machines.target. Jul 12 00:26:06.639765 systemd[1]: Starting ldconfig.service... Jul 12 00:26:06.643258 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:26:06.643362 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:26:06.645527 systemd[1]: Starting systemd-boot-update.service... Jul 12 00:26:06.650263 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 12 00:26:06.661426 systemd[1]: Starting systemd-machine-id-commit.service... Jul 12 00:26:06.670767 systemd[1]: Starting systemd-sysext.service... Jul 12 00:26:06.677041 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1653 (bootctl) Jul 12 00:26:06.679389 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 12 00:26:06.703945 systemd[1]: Unmounting usr-share-oem.mount... Jul 12 00:26:06.728598 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 12 00:26:06.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:06.739782 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 12 00:26:06.740169 systemd[1]: Unmounted usr-share-oem.mount. Jul 12 00:26:06.780862 kernel: loop0: detected capacity change from 0 to 203944 Jul 12 00:26:06.875965 systemd-fsck[1663]: fsck.fat 4.2 (2021-01-31) Jul 12 00:26:06.875965 systemd-fsck[1663]: /dev/nvme0n1p1: 236 files, 117310/258078 clusters Jul 12 00:26:06.879210 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Jul 12 00:26:06.886895 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:26:06.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:06.887391 systemd[1]: Mounting boot.mount... Jul 12 00:26:06.927671 systemd[1]: Mounted boot.mount. Jul 12 00:26:06.931668 kernel: loop1: detected capacity change from 0 to 203944 Jul 12 00:26:06.963345 systemd[1]: Finished systemd-boot-update.service. Jul 12 00:26:06.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:06.966995 (sd-sysext)[1680]: Using extensions 'kubernetes'. Jul 12 00:26:06.967754 (sd-sysext)[1680]: Merged extensions into '/usr'. Jul 12 00:26:06.999287 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:26:07.000367 systemd[1]: Finished systemd-machine-id-commit.service. Jul 12 00:26:07.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.011905 systemd[1]: Mounting usr-share-oem.mount... Jul 12 00:26:07.014022 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:26:07.021979 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:26:07.025936 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:26:07.030038 systemd[1]: Starting modprobe@loop.service... Jul 12 00:26:07.032083 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:26:07.032381 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:26:07.035464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:26:07.035783 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:26:07.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.043991 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:26:07.044294 systemd[1]: Finished modprobe@loop.service. Jul 12 00:26:07.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.046798 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Jul 12 00:26:07.048983 systemd[1]: Mounted usr-share-oem.mount. Jul 12 00:26:07.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.051742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:26:07.052024 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:26:07.055612 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:26:07.056370 systemd[1]: Finished systemd-sysext.service. Jul 12 00:26:07.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.061029 systemd[1]: Starting ensure-sysext.service... Jul 12 00:26:07.068030 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 12 00:26:07.080051 systemd[1]: Reloading. Jul 12 00:26:07.146953 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 12 00:26:07.179924 /usr/lib/systemd/system-generators/torcx-generator[1709]: time="2025-07-12T00:26:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:26:07.181744 /usr/lib/systemd/system-generators/torcx-generator[1709]: time="2025-07-12T00:26:07Z" level=info msg="torcx already run" Jul 12 00:26:07.182149 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:26:07.205234 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:26:07.386300 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:26:07.386535 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:26:07.425555 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 12 00:26:07.528956 systemd-networkd[1534]: eth0: Gained IPv6LL Jul 12 00:26:07.571000 audit: BPF prog-id=24 op=LOAD Jul 12 00:26:07.572000 audit: BPF prog-id=21 op=UNLOAD Jul 12 00:26:07.572000 audit: BPF prog-id=25 op=LOAD Jul 12 00:26:07.572000 audit: BPF prog-id=26 op=LOAD Jul 12 00:26:07.572000 audit: BPF prog-id=22 op=UNLOAD Jul 12 00:26:07.572000 audit: BPF prog-id=23 op=UNLOAD Jul 12 00:26:07.574000 audit: BPF prog-id=27 op=LOAD Jul 12 00:26:07.574000 audit: BPF prog-id=15 op=UNLOAD Jul 12 00:26:07.574000 audit: BPF prog-id=28 op=LOAD Jul 12 00:26:07.574000 audit: BPF prog-id=29 op=LOAD Jul 12 00:26:07.574000 audit: BPF prog-id=16 op=UNLOAD Jul 12 00:26:07.575000 audit: BPF prog-id=17 op=UNLOAD Jul 12 00:26:07.578000 audit: BPF prog-id=30 op=LOAD Jul 12 00:26:07.579000 audit: BPF prog-id=31 op=LOAD Jul 12 00:26:07.579000 audit: BPF prog-id=18 op=UNLOAD Jul 12 00:26:07.579000 audit: BPF prog-id=19 op=UNLOAD Jul 12 00:26:07.583000 audit: BPF prog-id=32 op=LOAD Jul 12 00:26:07.583000 audit: BPF prog-id=20 op=UNLOAD Jul 12 00:26:07.592150 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 12 00:26:07.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.597099 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 12 00:26:07.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.612908 systemd[1]: Starting audit-rules.service... Jul 12 00:26:07.622033 systemd[1]: Starting clean-ca-certificates.service... Jul 12 00:26:07.630000 audit: BPF prog-id=33 op=LOAD Jul 12 00:26:07.636000 audit: BPF prog-id=34 op=LOAD Jul 12 00:26:07.627797 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 12 00:26:07.634344 systemd[1]: Starting systemd-resolved.service... Jul 12 00:26:07.640540 systemd[1]: Starting systemd-timesyncd.service... Jul 12 00:26:07.647365 systemd[1]: Starting systemd-update-utmp.service... Jul 12 00:26:07.656034 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:26:07.658756 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:26:07.664142 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:26:07.670052 systemd[1]: Starting modprobe@loop.service... Jul 12 00:26:07.671863 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:26:07.672166 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:26:07.678012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:26:07.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:26:07.678295 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:26:07.690836 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:26:07.693405 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:26:07.695525 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:26:07.695845 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:26:07.702710 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:26:07.702000 audit[1769]: SYSTEM_BOOT pid=1769 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.706417 systemd[1]: Starting modprobe@drm.service... Jul 12 00:26:07.709059 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:26:07.709358 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:26:07.714202 systemd[1]: Finished clean-ca-certificates.service. Jul 12 00:26:07.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.717625 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:26:07.723342 systemd[1]: Finished systemd-update-utmp.service. Jul 12 00:26:07.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.726203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:26:07.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.726460 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:26:07.734275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:26:07.734568 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:26:07.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.739706 systemd[1]: Finished ensure-sysext.service. 
Jul 12 00:26:07.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.742067 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:26:07.751316 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:26:07.751609 systemd[1]: Finished modprobe@loop.service. Jul 12 00:26:07.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.757616 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:26:07.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.757978 systemd[1]: Finished modprobe@drm.service. Jul 12 00:26:07.760586 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:26:07.789487 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 12 00:26:07.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.871209 systemd[1]: Started systemd-timesyncd.service. Jul 12 00:26:07.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:07.873308 systemd[1]: Reached target time-set.target. Jul 12 00:26:07.875000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 12 00:26:07.875000 audit[1788]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc171d670 a2=420 a3=0 items=0 ppid=1763 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:07.875000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 12 00:26:07.878114 augenrules[1788]: No rules Jul 12 00:26:07.881612 systemd[1]: Finished audit-rules.service. Jul 12 00:26:07.890952 systemd-resolved[1767]: Positive Trust Anchors: Jul 12 00:26:07.891460 systemd-resolved[1767]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:26:07.891612 systemd-resolved[1767]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 12 00:26:07.928254 systemd-resolved[1767]: Defaulting to hostname 'linux'. Jul 12 00:26:07.931653 systemd[1]: Started systemd-resolved.service. Jul 12 00:26:07.933736 systemd[1]: Reached target network.target. Jul 12 00:26:07.935505 systemd[1]: Reached target network-online.target. Jul 12 00:26:07.937411 systemd[1]: Reached target nss-lookup.target. Jul 12 00:26:08.079972 ldconfig[1652]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:26:08.089802 systemd[1]: Finished ldconfig.service. Jul 12 00:26:08.090185 systemd-timesyncd[1768]: Contacted time server 104.234.61.117:123 (0.flatcar.pool.ntp.org). Jul 12 00:26:08.090298 systemd-timesyncd[1768]: Initial clock synchronization to Sat 2025-07-12 00:26:08.321236 UTC. Jul 12 00:26:08.094398 systemd[1]: Starting systemd-update-done.service... Jul 12 00:26:08.110075 systemd[1]: Finished systemd-update-done.service. Jul 12 00:26:08.112430 systemd[1]: Reached target sysinit.target. Jul 12 00:26:08.114377 systemd[1]: Started motdgen.path. Jul 12 00:26:08.115955 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 12 00:26:08.118588 systemd[1]: Started logrotate.timer. Jul 12 00:26:08.120530 systemd[1]: Started mdadm.timer. Jul 12 00:26:08.122193 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 12 00:26:08.124170 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:26:08.124349 systemd[1]: Reached target paths.target. Jul 12 00:26:08.126066 systemd[1]: Reached target timers.target. Jul 12 00:26:08.129273 systemd[1]: Listening on dbus.socket. Jul 12 00:26:08.133334 systemd[1]: Starting docker.socket... Jul 12 00:26:08.140136 systemd[1]: Listening on sshd.socket. Jul 12 00:26:08.142166 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:26:08.143238 systemd[1]: Listening on docker.socket. Jul 12 00:26:08.145279 systemd[1]: Reached target sockets.target. Jul 12 00:26:08.147148 systemd[1]: Reached target basic.target. Jul 12 00:26:08.148999 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 12 00:26:08.149195 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 12 00:26:08.155019 systemd[1]: Started amazon-ssm-agent.service. Jul 12 00:26:08.159861 systemd[1]: Starting containerd.service... Jul 12 00:26:08.164046 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 12 00:26:08.168308 systemd[1]: Starting dbus.service... Jul 12 00:26:08.172027 systemd[1]: Starting enable-oem-cloudinit.service... Jul 12 00:26:08.176200 systemd[1]: Starting extend-filesystems.service...
Jul 12 00:26:08.178972 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 12 00:26:08.182422 systemd[1]: Starting kubelet.service... Jul 12 00:26:08.186946 systemd[1]: Starting motdgen.service... Jul 12 00:26:08.194487 systemd[1]: Started nvidia.service. Jul 12 00:26:08.199702 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 12 00:26:08.203999 systemd[1]: Starting sshd-keygen.service... Jul 12 00:26:08.211578 systemd[1]: Starting systemd-logind.service... Jul 12 00:26:08.214960 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:26:08.268887 jq[1801]: false Jul 12 00:26:08.215107 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:26:08.215985 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:26:08.219177 systemd[1]: Starting update-engine.service... Jul 12 00:26:08.223751 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 12 00:26:08.308314 jq[1810]: true Jul 12 00:26:08.266147 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:26:08.266497 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 12 00:26:08.271807 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:26:08.272178 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 12 00:26:08.341348 jq[1829]: true Jul 12 00:26:08.444689 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:26:08.445088 systemd[1]: Finished motdgen.service. Jul 12 00:26:08.455427 extend-filesystems[1802]: Found loop1 Jul 12 00:26:08.458969 extend-filesystems[1802]: Found nvme0n1 Jul 12 00:26:08.458969 extend-filesystems[1802]: Found nvme0n1p1 Jul 12 00:26:08.458969 extend-filesystems[1802]: Found nvme0n1p2 Jul 12 00:26:08.458969 extend-filesystems[1802]: Found nvme0n1p3 Jul 12 00:26:08.458969 extend-filesystems[1802]: Found usr Jul 12 00:26:08.458969 extend-filesystems[1802]: Found nvme0n1p4 Jul 12 00:26:08.458969 extend-filesystems[1802]: Found nvme0n1p6 Jul 12 00:26:08.479284 extend-filesystems[1802]: Found nvme0n1p7 Jul 12 00:26:08.479284 extend-filesystems[1802]: Found nvme0n1p9 Jul 12 00:26:08.479284 extend-filesystems[1802]: Checking size of /dev/nvme0n1p9 Jul 12 00:26:08.496406 dbus-daemon[1800]: [system] SELinux support is enabled Jul 12 00:26:08.496708 systemd[1]: Started dbus.service. Jul 12 00:26:08.502328 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:26:08.502387 systemd[1]: Reached target system-config.target. Jul 12 00:26:08.519139 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:26:08.519182 systemd[1]: Reached target user-config.target. 
Jul 12 00:26:08.579998 dbus-daemon[1800]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1534 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 12 00:26:08.586210 systemd[1]: Starting systemd-hostnamed.service... Jul 12 00:26:08.592992 extend-filesystems[1802]: Resized partition /dev/nvme0n1p9 Jul 12 00:26:08.598152 bash[1853]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:26:08.599841 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 12 00:26:08.603471 extend-filesystems[1863]: resize2fs 1.46.5 (30-Dec-2021) Jul 12 00:26:08.630865 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 12 00:26:08.700865 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 12 00:26:08.717166 update_engine[1809]: I0712 00:26:08.711436 1809 main.cc:92] Flatcar Update Engine starting Jul 12 00:26:08.720945 extend-filesystems[1863]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 12 00:26:08.720945 extend-filesystems[1863]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 00:26:08.720945 extend-filesystems[1863]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 12 00:26:08.733552 extend-filesystems[1802]: Resized filesystem in /dev/nvme0n1p9 Jul 12 00:26:08.724556 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:26:08.737726 update_engine[1809]: I0712 00:26:08.737684 1809 update_check_scheduler.cc:74] Next update check in 4m56s Jul 12 00:26:08.749293 amazon-ssm-agent[1797]: 2025/07/12 00:26:08 Failed to load instance info from vault. RegistrationKey does not exist. Jul 12 00:26:08.781053 env[1817]: time="2025-07-12T00:26:08.751312753Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 12 00:26:08.777444 systemd[1]: Finished extend-filesystems.service. Jul 12 00:26:08.783213 systemd[1]: Started update-engine.service. Jul 12 00:26:08.788101 systemd[1]: Started locksmithd.service. Jul 12 00:26:08.805225 amazon-ssm-agent[1797]: Initializing new seelog logger Jul 12 00:26:08.807143 amazon-ssm-agent[1797]: New Seelog Logger Creation Complete Jul 12 00:26:08.811390 amazon-ssm-agent[1797]: 2025/07/12 00:26:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:26:08.811536 amazon-ssm-agent[1797]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:26:08.812202 amazon-ssm-agent[1797]: 2025/07/12 00:26:08 processing appconfig overrides Jul 12 00:26:08.842896 systemd[1]: nvidia.service: Deactivated successfully. Jul 12 00:26:08.864251 env[1817]: time="2025-07-12T00:26:08.864167738Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:26:08.864484 env[1817]: time="2025-07-12T00:26:08.864436994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:26:08.874998 env[1817]: time="2025-07-12T00:26:08.874915994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:26:08.874998 env[1817]: time="2025-07-12T00:26:08.874988294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 12 00:26:08.875480 env[1817]: time="2025-07-12T00:26:08.875418278Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:26:08.875574 env[1817]: time="2025-07-12T00:26:08.875474450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:26:08.875574 env[1817]: time="2025-07-12T00:26:08.875509094Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 12 00:26:08.875574 env[1817]: time="2025-07-12T00:26:08.875534042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:26:08.875890 env[1817]: time="2025-07-12T00:26:08.875844182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:26:08.876546 env[1817]: time="2025-07-12T00:26:08.876477038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:26:08.876836 env[1817]: time="2025-07-12T00:26:08.876769190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:26:08.884999 env[1817]: time="2025-07-12T00:26:08.884906270Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 12 00:26:08.895000 env[1817]: time="2025-07-12T00:26:08.894896450Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 12 00:26:08.895000 env[1817]: time="2025-07-12T00:26:08.894993878Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:26:08.907713 env[1817]: time="2025-07-12T00:26:08.907638086Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:26:08.907865 env[1817]: time="2025-07-12T00:26:08.907719254Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:26:08.907865 env[1817]: time="2025-07-12T00:26:08.907752350Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:26:08.907865 env[1817]: time="2025-07-12T00:26:08.907836290Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:26:08.908020 env[1817]: time="2025-07-12T00:26:08.907878374Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:26:08.908020 env[1817]: time="2025-07-12T00:26:08.907911674Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:26:08.908020 env[1817]: time="2025-07-12T00:26:08.907943810Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:26:08.908481 env[1817]: time="2025-07-12T00:26:08.908425706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 12 00:26:08.908559 env[1817]: time="2025-07-12T00:26:08.908486402Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 12 00:26:08.908559 env[1817]: time="2025-07-12T00:26:08.908521190Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:26:08.908665 env[1817]: time="2025-07-12T00:26:08.908552234Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 12 00:26:08.908665 env[1817]: time="2025-07-12T00:26:08.908582210Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:26:08.908899 env[1817]: time="2025-07-12T00:26:08.908853002Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:26:08.909089 env[1817]: time="2025-07-12T00:26:08.909043814Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:26:08.909472 env[1817]: time="2025-07-12T00:26:08.909426230Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:26:08.909545 env[1817]: time="2025-07-12T00:26:08.909486782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:26:08.909545 env[1817]: time="2025-07-12T00:26:08.909525338Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:26:08.909777 env[1817]: time="2025-07-12T00:26:08.909735218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:26:08.909882 env[1817]: time="2025-07-12T00:26:08.909779726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:26:08.909882 env[1817]: time="2025-07-12T00:26:08.909830798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:26:08.909882 env[1817]: time="2025-07-12T00:26:08.909864086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:26:08.910033 env[1817]: time="2025-07-12T00:26:08.909894098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:26:08.910033 env[1817]: time="2025-07-12T00:26:08.909923222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:26:08.910033 env[1817]: time="2025-07-12T00:26:08.909954662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:26:08.910033 env[1817]: time="2025-07-12T00:26:08.909983462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:26:08.910033 env[1817]: time="2025-07-12T00:26:08.910015538Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:26:08.910355 env[1817]: time="2025-07-12T00:26:08.910274762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:26:08.910355 env[1817]: time="2025-07-12T00:26:08.910318478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 12 00:26:08.910465 env[1817]: time="2025-07-12T00:26:08.910348886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:26:08.910465 env[1817]: time="2025-07-12T00:26:08.910396226Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:26:08.910465 env[1817]: time="2025-07-12T00:26:08.910433030Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 12 00:26:08.910610 env[1817]: time="2025-07-12T00:26:08.910461722Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:26:08.910610 env[1817]: time="2025-07-12T00:26:08.910496858Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 12 00:26:08.910610 env[1817]: time="2025-07-12T00:26:08.910560350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 12 00:26:08.911130 env[1817]: time="2025-07-12T00:26:08.911017838Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:26:08.913045 env[1817]: time="2025-07-12T00:26:08.911133206Z" level=info msg="Connect containerd service" Jul 12 00:26:08.913045 env[1817]: time="2025-07-12T00:26:08.911198954Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 
12 00:26:08.913045 env[1817]: time="2025-07-12T00:26:08.912419222Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:26:08.913045 env[1817]: time="2025-07-12T00:26:08.912998150Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:26:08.915355 env[1817]: time="2025-07-12T00:26:08.913087886Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:26:08.915355 env[1817]: time="2025-07-12T00:26:08.913183010Z" level=info msg="containerd successfully booted in 0.163103s" Jul 12 00:26:08.913305 systemd[1]: Started containerd.service. Jul 12 00:26:08.916109 systemd-logind[1808]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:26:08.922146 env[1817]: time="2025-07-12T00:26:08.922062158Z" level=info msg="Start subscribing containerd event" Jul 12 00:26:08.922322 env[1817]: time="2025-07-12T00:26:08.922219742Z" level=info msg="Start recovering state" Jul 12 00:26:08.922478 env[1817]: time="2025-07-12T00:26:08.922360790Z" level=info msg="Start event monitor" Jul 12 00:26:08.922546 env[1817]: time="2025-07-12T00:26:08.922488218Z" level=info msg="Start snapshots syncer" Jul 12 00:26:08.922546 env[1817]: time="2025-07-12T00:26:08.922517066Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:26:08.922649 env[1817]: time="2025-07-12T00:26:08.922546466Z" level=info msg="Start streaming server" Jul 12 00:26:08.924889 systemd-logind[1808]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 12 00:26:08.927209 systemd-logind[1808]: New seat seat0. Jul 12 00:26:08.937532 systemd[1]: Started systemd-logind.service. Jul 12 00:26:09.096481 dbus-daemon[1800]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 12 00:26:09.096973 systemd[1]: Started systemd-hostnamed.service. Jul 12 00:26:09.103121 dbus-daemon[1800]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1860 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 12 00:26:09.108308 systemd[1]: Starting polkit.service... Jul 12 00:26:09.137473 polkitd[1920]: Started polkitd version 121 Jul 12 00:26:09.168594 polkitd[1920]: Loading rules from directory /etc/polkit-1/rules.d Jul 12 00:26:09.169394 polkitd[1920]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 12 00:26:09.172044 polkitd[1920]: Finished loading, compiling and executing 2 rules Jul 12 00:26:09.172987 dbus-daemon[1800]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 12 00:26:09.173275 systemd[1]: Started polkit.service. Jul 12 00:26:09.177041 polkitd[1920]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 12 00:26:09.202980 systemd-hostnamed[1860]: Hostname set to (transient) Jul 12 00:26:09.203143 systemd-resolved[1767]: System hostname changed to 'ip-172-31-16-163'. 
Jul 12 00:26:09.251475 coreos-metadata[1799]: Jul 12 00:26:09.251 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 12 00:26:09.256061 coreos-metadata[1799]: Jul 12 00:26:09.255 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Jul 12 00:26:09.260035 coreos-metadata[1799]: Jul 12 00:26:09.259 INFO Fetch successful Jul 12 00:26:09.260035 coreos-metadata[1799]: Jul 12 00:26:09.259 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 12 00:26:09.262766 coreos-metadata[1799]: Jul 12 00:26:09.262 INFO Fetch successful Jul 12 00:26:09.269023 unknown[1799]: wrote ssh authorized keys file for user: core Jul 12 00:26:09.301237 update-ssh-keys[1940]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:26:09.302587 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 12 00:26:09.557192 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO Create new startup processor Jul 12 00:26:09.560149 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [LongRunningPluginsManager] registered plugins: {} Jul 12 00:26:09.560280 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO Initializing bookkeeping folders Jul 12 00:26:09.560280 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO removing the completed state files Jul 12 00:26:09.560280 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO Initializing bookkeeping folders for long running plugins Jul 12 00:26:09.560280 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Jul 12 00:26:09.560507 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO Initializing healthcheck folders for long running plugins Jul 12 00:26:09.560507 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO Initializing locations for inventory plugin Jul 12 00:26:09.560507 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO Initializing default location for custom inventory Jul 12 00:26:09.560507 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO Initializing default location for file inventory Jul 12 00:26:09.560507 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO Initializing default location for role inventory Jul 12 00:26:09.560507 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO Init the cloudwatchlogs publisher Jul 12 00:26:09.560507 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [instanceID=i-0a7a5f9f1bc2ad0ab] Successfully loaded platform independent plugin aws:softwareInventory Jul 12 00:26:09.560896 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [instanceID=i-0a7a5f9f1bc2ad0ab] Successfully loaded platform independent plugin aws:runPowerShellScript Jul 12 00:26:09.560896 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [instanceID=i-0a7a5f9f1bc2ad0ab] Successfully loaded platform independent plugin aws:configureDocker Jul 12 00:26:09.560896 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [instanceID=i-0a7a5f9f1bc2ad0ab] Successfully loaded platform independent plugin aws:runDockerAction Jul 12 00:26:09.560896 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [instanceID=i-0a7a5f9f1bc2ad0ab] Successfully loaded platform independent plugin aws:refreshAssociation Jul 12 00:26:09.560896 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [instanceID=i-0a7a5f9f1bc2ad0ab] Successfully loaded platform independent plugin aws:downloadContent Jul 12 00:26:09.560896 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [instanceID=i-0a7a5f9f1bc2ad0ab] Successfully loaded platform independent plugin aws:runDocument Jul 12 00:26:09.560896 
amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [instanceID=i-0a7a5f9f1bc2ad0ab] Successfully loaded platform independent plugin aws:updateSsmAgent Jul 12 00:26:09.560896 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [instanceID=i-0a7a5f9f1bc2ad0ab] Successfully loaded platform independent plugin aws:configurePackage Jul 12 00:26:09.560896 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [instanceID=i-0a7a5f9f1bc2ad0ab] Successfully loaded platform dependent plugin aws:runShellScript Jul 12 00:26:09.560896 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Jul 12 00:26:09.561393 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO OS: linux, Arch: arm64 Jul 12 00:26:09.573876 amazon-ssm-agent[1797]: datastore file /var/lib/amazon/ssm/i-0a7a5f9f1bc2ad0ab/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Jul 12 00:26:09.658001 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessagingDeliveryService] Starting document processing engine... Jul 12 00:26:09.752704 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessagingDeliveryService] [EngineProcessor] Starting Jul 12 00:26:09.847894 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Jul 12 00:26:09.917431 locksmithd[1878]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:26:09.942386 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessageGatewayService] Starting session document processing engine... Jul 12 00:26:10.037164 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessageGatewayService] [EngineProcessor] Starting Jul 12 00:26:10.132167 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Jul 12 00:26:10.227271 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0a7a5f9f1bc2ad0ab, requestId: 60d85a30-4ffb-42de-bce6-076066d1f1eb Jul 12 00:26:10.322615 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessagingDeliveryService] Starting message polling Jul 12 00:26:10.418150 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessagingDeliveryService] Starting send replies to MDS Jul 12 00:26:10.513835 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [instanceID=i-0a7a5f9f1bc2ad0ab] Starting association polling Jul 12 00:26:10.609845 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Jul 12 00:26:10.706137 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessagingDeliveryService] [Association] Launching response handler Jul 12 00:26:10.802371 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Jul 12 00:26:10.898934 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Jul 12 00:26:10.937399 systemd[1]: Started kubelet.service. Jul 12 00:26:10.995665 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Jul 12 00:26:11.092597 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [OfflineService] Starting document processing engine... Jul 12 00:26:11.127620 sshd_keygen[1832]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:26:11.177462 systemd[1]: Finished sshd-keygen.service. 
Jul 12 00:26:11.182394 systemd[1]: Starting issuegen.service... Jul 12 00:26:11.189658 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [OfflineService] [EngineProcessor] Starting Jul 12 00:26:11.195171 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:26:11.195529 systemd[1]: Finished issuegen.service. Jul 12 00:26:11.200547 systemd[1]: Starting systemd-user-sessions.service... Jul 12 00:26:11.217113 systemd[1]: Finished systemd-user-sessions.service. Jul 12 00:26:11.221856 systemd[1]: Started getty@tty1.service. Jul 12 00:26:11.226374 systemd[1]: Started serial-getty@ttyS0.service. Jul 12 00:26:11.228741 systemd[1]: Reached target getty.target. Jul 12 00:26:11.232652 systemd[1]: Reached target multi-user.target. Jul 12 00:26:11.239234 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 12 00:26:11.259656 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 12 00:26:11.260050 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 12 00:26:11.262478 systemd[1]: Startup finished in 1.190s (kernel) + 8.186s (initrd) + 11.475s (userspace) = 20.852s. Jul 12 00:26:11.287036 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [OfflineService] [EngineProcessor] Initial processing Jul 12 00:26:11.384492 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [OfflineService] Starting message polling Jul 12 00:26:11.482207 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [OfflineService] Starting send replies to MDS Jul 12 00:26:11.580139 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [LongRunningPluginsManager] starting long running plugin manager Jul 12 00:26:11.678289 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Jul 12 00:26:11.776717 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [HealthCheck] HealthCheck reporting agent health. Jul 12 00:26:11.875496 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Jul 12 00:26:11.974063 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [StartupProcessor] Executing startup processor tasks Jul 12 00:26:12.071984 kubelet[1994]: E0712 00:26:12.071923 1994 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:26:12.073002 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Jul 12 00:26:12.075992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:26:12.076309 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:26:12.076726 systemd[1]: kubelet.service: Consumed 1.592s CPU time. Jul 12 00:26:12.172357 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Jul 12 00:26:12.271616 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7 Jul 12 00:26:12.371151 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessageGatewayService] listening reply. 
Jul 12 00:26:12.470935 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0a7a5f9f1bc2ad0ab?role=subscribe&stream=input Jul 12 00:26:12.570712 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0a7a5f9f1bc2ad0ab?role=subscribe&stream=input Jul 12 00:26:12.670760 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessageGatewayService] Starting receiving message from control channel Jul 12 00:26:12.771005 amazon-ssm-agent[1797]: 2025-07-12 00:26:09 INFO [MessageGatewayService] [EngineProcessor] Initial processing Jul 12 00:26:17.379983 systemd[1]: Created slice system-sshd.slice. Jul 12 00:26:17.383757 systemd[1]: Started sshd@0-172.31.16.163:22-147.75.109.163:39474.service. Jul 12 00:26:17.582221 sshd[2015]: Accepted publickey for core from 147.75.109.163 port 39474 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:17.588142 sshd[2015]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:17.612433 systemd[1]: Created slice user-500.slice. Jul 12 00:26:17.615358 systemd[1]: Starting user-runtime-dir@500.service... Jul 12 00:26:17.624963 systemd-logind[1808]: New session 1 of user core. Jul 12 00:26:17.637410 systemd[1]: Finished user-runtime-dir@500.service. Jul 12 00:26:17.640864 systemd[1]: Starting user@500.service... Jul 12 00:26:17.648531 (systemd)[2018]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:17.836873 systemd[2018]: Queued start job for default target default.target. Jul 12 00:26:17.837922 systemd[2018]: Reached target paths.target. Jul 12 00:26:17.837977 systemd[2018]: Reached target sockets.target. Jul 12 00:26:17.838010 systemd[2018]: Reached target timers.target. Jul 12 00:26:17.838040 systemd[2018]: Reached target basic.target. Jul 12 00:26:17.838134 systemd[2018]: Reached target default.target. Jul 12 00:26:17.838200 systemd[2018]: Startup finished in 177ms. Jul 12 00:26:17.839105 systemd[1]: Started user@500.service. Jul 12 00:26:17.841083 systemd[1]: Started session-1.scope. Jul 12 00:26:17.995085 systemd[1]: Started sshd@1-172.31.16.163:22-147.75.109.163:39478.service. Jul 12 00:26:18.161809 sshd[2027]: Accepted publickey for core from 147.75.109.163 port 39478 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:18.164294 sshd[2027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:18.173582 systemd[1]: Started session-2.scope. Jul 12 00:26:18.174532 systemd-logind[1808]: New session 2 of user core. Jul 12 00:26:18.304053 sshd[2027]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:18.310142 systemd[1]: sshd@1-172.31.16.163:22-147.75.109.163:39478.service: Deactivated successfully. Jul 12 00:26:18.310794 systemd-logind[1808]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:26:18.311440 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:26:18.313424 systemd-logind[1808]: Removed session 2. Jul 12 00:26:18.331470 systemd[1]: Started sshd@2-172.31.16.163:22-147.75.109.163:39482.service. 
Jul 12 00:26:18.498993 sshd[2033]: Accepted publickey for core from 147.75.109.163 port 39482 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:18.501986 sshd[2033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:18.509411 systemd-logind[1808]: New session 3 of user core. Jul 12 00:26:18.510353 systemd[1]: Started session-3.scope. Jul 12 00:26:18.630139 sshd[2033]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:18.635730 systemd[1]: sshd@2-172.31.16.163:22-147.75.109.163:39482.service: Deactivated successfully. Jul 12 00:26:18.636962 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:26:18.638540 systemd-logind[1808]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:26:18.640078 systemd-logind[1808]: Removed session 3. Jul 12 00:26:18.658185 systemd[1]: Started sshd@3-172.31.16.163:22-147.75.109.163:39494.service. Jul 12 00:26:18.781214 amazon-ssm-agent[1797]: 2025-07-12 00:26:18 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Jul 12 00:26:18.826646 sshd[2039]: Accepted publickey for core from 147.75.109.163 port 39494 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:18.829605 sshd[2039]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:18.837929 systemd[1]: Started session-4.scope. Jul 12 00:26:18.839936 systemd-logind[1808]: New session 4 of user core. Jul 12 00:26:18.968998 sshd[2039]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:18.974721 systemd-logind[1808]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:26:18.975147 systemd[1]: sshd@3-172.31.16.163:22-147.75.109.163:39494.service: Deactivated successfully. Jul 12 00:26:18.976385 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:26:18.977685 systemd-logind[1808]: Removed session 4. Jul 12 00:26:18.997553 systemd[1]: Started sshd@4-172.31.16.163:22-147.75.109.163:39510.service. Jul 12 00:26:19.168097 sshd[2045]: Accepted publickey for core from 147.75.109.163 port 39510 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:19.171046 sshd[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:19.179638 systemd[1]: Started session-5.scope. Jul 12 00:26:19.180417 systemd-logind[1808]: New session 5 of user core. Jul 12 00:26:19.305724 sudo[2048]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:26:19.306267 sudo[2048]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 12 00:26:19.331443 systemd[1]: Starting coreos-metadata.service... 
Jul 12 00:26:19.490735 coreos-metadata[2052]: Jul 12 00:26:19.490 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 12 00:26:19.491888 coreos-metadata[2052]: Jul 12 00:26:19.491 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Jul 12 00:26:19.492755 coreos-metadata[2052]: Jul 12 00:26:19.492 INFO Fetch successful Jul 12 00:26:19.493071 coreos-metadata[2052]: Jul 12 00:26:19.492 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Jul 12 00:26:19.493635 coreos-metadata[2052]: Jul 12 00:26:19.493 INFO Fetch successful Jul 12 00:26:19.493950 coreos-metadata[2052]: Jul 12 00:26:19.493 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Jul 12 00:26:19.494505 coreos-metadata[2052]: Jul 12 00:26:19.494 INFO Fetch successful Jul 12 00:26:19.494765 coreos-metadata[2052]: Jul 12 00:26:19.494 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Jul 12 00:26:19.495360 coreos-metadata[2052]: Jul 12 00:26:19.495 INFO Fetch successful Jul 12 00:26:19.495642 coreos-metadata[2052]: Jul 12 00:26:19.495 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Jul 12 00:26:19.496235 coreos-metadata[2052]: Jul 12 00:26:19.496 INFO Fetch successful Jul 12 00:26:19.496497 coreos-metadata[2052]: Jul 12 00:26:19.496 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Jul 12 00:26:19.497111 coreos-metadata[2052]: Jul 12 00:26:19.496 INFO Fetch successful Jul 12 00:26:19.497371 coreos-metadata[2052]: Jul 12 00:26:19.497 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Jul 12 00:26:19.497984 coreos-metadata[2052]: Jul 12 00:26:19.497 INFO Fetch successful Jul 12 00:26:19.498243 coreos-metadata[2052]: Jul 12 00:26:19.498 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Jul 12 00:26:19.498834 coreos-metadata[2052]: Jul 12 00:26:19.498 INFO Fetch successful Jul 12 00:26:19.512449 systemd[1]: Finished coreos-metadata.service. Jul 12 00:26:20.523726 systemd[1]: Stopped kubelet.service. Jul 12 00:26:20.524210 systemd[1]: kubelet.service: Consumed 1.592s CPU time. Jul 12 00:26:20.528311 systemd[1]: Starting kubelet.service... Jul 12 00:26:20.586265 systemd[1]: Reloading. Jul 12 00:26:20.762885 /usr/lib/systemd/system-generators/torcx-generator[2107]: time="2025-07-12T00:26:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:26:20.762993 /usr/lib/systemd/system-generators/torcx-generator[2107]: time="2025-07-12T00:26:20Z" level=info msg="torcx already run" Jul 12 00:26:20.953356 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:26:20.953396 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:26:20.992197 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:26:21.209041 systemd[1]: Started kubelet.service. 
Jul 12 00:26:21.212643 systemd[1]: Stopping kubelet.service... Jul 12 00:26:21.213358 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:26:21.214603 systemd[1]: Stopped kubelet.service. Jul 12 00:26:21.219302 systemd[1]: Starting kubelet.service... Jul 12 00:26:21.526917 systemd[1]: Started kubelet.service. Jul 12 00:26:21.609522 kubelet[2169]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:26:21.609522 kubelet[2169]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:26:21.609522 kubelet[2169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:26:21.610116 kubelet[2169]: I0712 00:26:21.609632 2169 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:26:22.700365 kubelet[2169]: I0712 00:26:22.700296 2169 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:26:22.700365 kubelet[2169]: I0712 00:26:22.700347 2169 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:26:22.701126 kubelet[2169]: I0712 00:26:22.700774 2169 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:26:22.754034 kubelet[2169]: I0712 00:26:22.753973 2169 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:26:22.772791 kubelet[2169]: E0712 00:26:22.772722 2169 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:26:22.772791 kubelet[2169]: I0712 00:26:22.772778 2169 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:26:22.792179 kubelet[2169]: I0712 00:26:22.792143 2169 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:26:22.793238 kubelet[2169]: I0712 00:26:22.793211 2169 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:26:22.793885 kubelet[2169]: I0712 00:26:22.793840 2169 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:26:22.794680 kubelet[2169]: I0712 00:26:22.794033 2169 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.16.163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:26:22.795162 kubelet[2169]: I0712 00:26:22.795136 2169 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:26:22.795316 kubelet[2169]: I0712 00:26:22.795296 2169 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:26:22.795787 kubelet[2169]: I0712 00:26:22.795767 2169 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:26:22.800673 kubelet[2169]: I0712 00:26:22.800638 2169 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:26:22.800977 kubelet[2169]: I0712 00:26:22.800953 2169 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:26:22.801145 kubelet[2169]: I0712 00:26:22.801124 2169 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:26:22.801271 kubelet[2169]: I0712 00:26:22.801249 2169 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:26:22.801408 kubelet[2169]: E0712 00:26:22.801369 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:22.801408 kubelet[2169]: E0712 00:26:22.801306 2169 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:22.824170 kubelet[2169]: I0712 00:26:22.824135 2169 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:26:22.825691 kubelet[2169]: I0712 00:26:22.825665 2169 
kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:26:22.826964 kubelet[2169]: W0712 00:26:22.826939 2169 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:26:22.829002 kubelet[2169]: I0712 00:26:22.828974 2169 server.go:1274] "Started kubelet" Jul 12 00:26:22.844928 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 12 00:26:22.845094 kubelet[2169]: I0712 00:26:22.842063 2169 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:26:22.845094 kubelet[2169]: I0712 00:26:22.842543 2169 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:26:22.845094 kubelet[2169]: I0712 00:26:22.842628 2169 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:26:22.845413 kubelet[2169]: I0712 00:26:22.845372 2169 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:26:22.845743 kubelet[2169]: I0712 00:26:22.845718 2169 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:26:22.856374 kubelet[2169]: I0712 00:26:22.856308 2169 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:26:22.866189 kubelet[2169]: I0712 00:26:22.866154 2169 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:26:22.866605 kubelet[2169]: I0712 00:26:22.866581 2169 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:26:22.866920 kubelet[2169]: I0712 00:26:22.866899 2169 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:26:22.868101 kubelet[2169]: I0712 00:26:22.868065 2169 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:26:22.873898 kubelet[2169]: I0712 00:26:22.870470 2169 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:26:22.873898 kubelet[2169]: E0712 00:26:22.873196 2169 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.163\" not found" Jul 12 00:26:22.877905 kubelet[2169]: E0712 00:26:22.877810 2169 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:26:22.879632 kubelet[2169]: I0712 00:26:22.879578 2169 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:26:22.897125 kubelet[2169]: E0712 00:26:22.897077 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.16.163\" not found" node="172.31.16.163" Jul 12 00:26:22.909998 kubelet[2169]: I0712 00:26:22.909956 2169 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:26:22.910207 kubelet[2169]: I0712 00:26:22.910182 2169 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:26:22.910373 kubelet[2169]: I0712 00:26:22.910353 2169 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:26:22.913020 kubelet[2169]: I0712 00:26:22.912983 2169 policy_none.go:49] "None policy: Start" Jul 12 00:26:22.915452 kubelet[2169]: I0712 00:26:22.915409 2169 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:26:22.915721 kubelet[2169]: I0712 00:26:22.915701 2169 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:26:22.933951 systemd[1]: Created slice kubepods.slice. Jul 12 00:26:22.944566 systemd[1]: Created slice kubepods-burstable.slice. Jul 12 00:26:22.955456 systemd[1]: Created slice kubepods-besteffort.slice. Jul 12 00:26:22.969631 kubelet[2169]: I0712 00:26:22.969594 2169 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:26:22.970368 kubelet[2169]: I0712 00:26:22.970342 2169 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:26:22.971154 kubelet[2169]: I0712 00:26:22.971096 2169 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:26:22.971949 kubelet[2169]: I0712 00:26:22.971917 2169 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:26:22.982022 kubelet[2169]: E0712 00:26:22.981983 2169 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.16.163\" not found" Jul 12 00:26:23.035132 kubelet[2169]: I0712 00:26:23.035037 2169 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:26:23.037044 kubelet[2169]: I0712 00:26:23.036983 2169 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:26:23.037044 kubelet[2169]: I0712 00:26:23.037041 2169 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:26:23.037267 kubelet[2169]: I0712 00:26:23.037080 2169 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:26:23.037267 kubelet[2169]: E0712 00:26:23.037170 2169 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 12 00:26:23.072733 kubelet[2169]: I0712 00:26:23.072676 2169 kubelet_node_status.go:72] "Attempting to register node" node="172.31.16.163" Jul 12 00:26:23.086307 kubelet[2169]: I0712 00:26:23.086256 2169 kubelet_node_status.go:75] "Successfully registered node" node="172.31.16.163" Jul 12 00:26:23.102185 kubelet[2169]: I0712 00:26:23.102131 2169 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 12 00:26:23.103341 env[1817]: time="2025-07-12T00:26:23.103233097Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 12 00:26:23.104314 kubelet[2169]: I0712 00:26:23.104264 2169 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 12 00:26:23.577914 sudo[2048]: pam_unix(sudo:session): session closed for user root Jul 12 00:26:23.602438 sshd[2045]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:23.607344 systemd[1]: sshd@4-172.31.16.163:22-147.75.109.163:39510.service: Deactivated successfully. Jul 12 00:26:23.608584 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:26:23.609696 systemd-logind[1808]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:26:23.611304 systemd-logind[1808]: Removed session 5. Jul 12 00:26:23.712465 kubelet[2169]: I0712 00:26:23.712428 2169 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 12 00:26:23.713416 kubelet[2169]: W0712 00:26:23.713364 2169 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 12 00:26:23.713644 kubelet[2169]: W0712 00:26:23.713457 2169 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 12 00:26:23.713644 kubelet[2169]: W0712 00:26:23.713538 2169 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 12 00:26:23.801690 kubelet[2169]: I0712 00:26:23.801658 2169 apiserver.go:52] "Watching apiserver" Jul 12 00:26:23.801962 kubelet[2169]: E0712 00:26:23.801726 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:23.819121 systemd[1]: Created slice kubepods-besteffort-pod72a28b7a_b7d6_45dc_b901_74240606c6cd.slice. Jul 12 00:26:23.841980 systemd[1]: Created slice kubepods-burstable-podad9b5cda_133b_4315_8134_d9b365f631c5.slice. 
Jul 12 00:26:23.867706 kubelet[2169]: I0712 00:26:23.867645 2169 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:26:23.874259 kubelet[2169]: I0712 00:26:23.874198 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/72a28b7a-b7d6-45dc-b901-74240606c6cd-kube-proxy\") pod \"kube-proxy-kpth2\" (UID: \"72a28b7a-b7d6-45dc-b901-74240606c6cd\") " pod="kube-system/kube-proxy-kpth2" Jul 12 00:26:23.874370 kubelet[2169]: I0712 00:26:23.874257 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-bpf-maps\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.874370 kubelet[2169]: I0712 00:26:23.874301 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-cilium-cgroup\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.874370 kubelet[2169]: I0712 00:26:23.874339 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-cni-path\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.874556 kubelet[2169]: I0712 00:26:23.874374 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-xtables-lock\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.874556 kubelet[2169]: I0712 00:26:23.874412 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlhw5\" (UniqueName: \"kubernetes.io/projected/ad9b5cda-133b-4315-8134-d9b365f631c5-kube-api-access-vlhw5\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.874556 kubelet[2169]: I0712 00:26:23.874451 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-etc-cni-netd\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.874556 kubelet[2169]: I0712 00:26:23.874486 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-lib-modules\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.874556 kubelet[2169]: I0712 00:26:23.874523 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ad9b5cda-133b-4315-8134-d9b365f631c5-hubble-tls\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.874886 kubelet[2169]: I0712 
00:26:23.874558 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-cilium-run\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.874886 kubelet[2169]: I0712 00:26:23.874592 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad9b5cda-133b-4315-8134-d9b365f631c5-cilium-config-path\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.874886 kubelet[2169]: I0712 00:26:23.874624 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-host-proc-sys-net\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.874886 kubelet[2169]: I0712 00:26:23.874659 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-host-proc-sys-kernel\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.874886 kubelet[2169]: I0712 00:26:23.874693 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-hostproc\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.874886 kubelet[2169]: I0712 00:26:23.874731 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ad9b5cda-133b-4315-8134-d9b365f631c5-clustermesh-secrets\") pod \"cilium-fxnjd\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " pod="kube-system/cilium-fxnjd" Jul 12 00:26:23.875223 kubelet[2169]: I0712 00:26:23.874767 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72a28b7a-b7d6-45dc-b901-74240606c6cd-xtables-lock\") pod \"kube-proxy-kpth2\" (UID: \"72a28b7a-b7d6-45dc-b901-74240606c6cd\") " pod="kube-system/kube-proxy-kpth2" Jul 12 00:26:23.875223 kubelet[2169]: I0712 00:26:23.874800 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72a28b7a-b7d6-45dc-b901-74240606c6cd-lib-modules\") pod \"kube-proxy-kpth2\" (UID: \"72a28b7a-b7d6-45dc-b901-74240606c6cd\") " pod="kube-system/kube-proxy-kpth2" Jul 12 00:26:23.875223 kubelet[2169]: I0712 00:26:23.874874 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrkcn\" (UniqueName: \"kubernetes.io/projected/72a28b7a-b7d6-45dc-b901-74240606c6cd-kube-api-access-rrkcn\") pod \"kube-proxy-kpth2\" (UID: \"72a28b7a-b7d6-45dc-b901-74240606c6cd\") " pod="kube-system/kube-proxy-kpth2" Jul 12 00:26:23.977204 kubelet[2169]: I0712 00:26:23.977159 2169 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 12 00:26:24.143138 env[1817]: time="2025-07-12T00:26:24.142982609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kpth2,Uid:72a28b7a-b7d6-45dc-b901-74240606c6cd,Namespace:kube-system,Attempt:0,}" Jul 12 00:26:24.160455 env[1817]: time="2025-07-12T00:26:24.159883158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fxnjd,Uid:ad9b5cda-133b-4315-8134-d9b365f631c5,Namespace:kube-system,Attempt:0,}" Jul 12 00:26:24.670682 env[1817]: time="2025-07-12T00:26:24.670602387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:24.674126 env[1817]: time="2025-07-12T00:26:24.674070552Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:24.675716 env[1817]: time="2025-07-12T00:26:24.675675041Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:24.677317 env[1817]: time="2025-07-12T00:26:24.677260158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:24.683287 env[1817]: time="2025-07-12T00:26:24.683215710Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:24.686910 env[1817]: time="2025-07-12T00:26:24.686861010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:24.692133 env[1817]: time="2025-07-12T00:26:24.692037579Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:24.693717 env[1817]: time="2025-07-12T00:26:24.693673089Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:24.723698 env[1817]: time="2025-07-12T00:26:24.723569686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:24.723937 env[1817]: time="2025-07-12T00:26:24.723708152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:24.723937 env[1817]: time="2025-07-12T00:26:24.723797491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:24.723937 env[1817]: time="2025-07-12T00:26:24.723885866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:24.724231 env[1817]: time="2025-07-12T00:26:24.723671939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:24.724231 env[1817]: time="2025-07-12T00:26:24.724182471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:24.724413 env[1817]: time="2025-07-12T00:26:24.724351258Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe32013b353c701542e8fdaff63dac6092af18b005b19a9e97d7e2db93b687cd pid=2228 runtime=io.containerd.runc.v2 Jul 12 00:26:24.725243 env[1817]: time="2025-07-12T00:26:24.725136155Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda pid=2227 runtime=io.containerd.runc.v2 Jul 12 00:26:24.756162 systemd[1]: Started cri-containerd-67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda.scope. Jul 12 00:26:24.764063 systemd[1]: Started cri-containerd-fe32013b353c701542e8fdaff63dac6092af18b005b19a9e97d7e2db93b687cd.scope. Jul 12 00:26:24.802632 kubelet[2169]: E0712 00:26:24.802560 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:24.830083 env[1817]: time="2025-07-12T00:26:24.830006767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fxnjd,Uid:ad9b5cda-133b-4315-8134-d9b365f631c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\"" Jul 12 00:26:24.835960 env[1817]: time="2025-07-12T00:26:24.835902398Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 12 00:26:24.869100 env[1817]: time="2025-07-12T00:26:24.869036006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kpth2,Uid:72a28b7a-b7d6-45dc-b901-74240606c6cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe32013b353c701542e8fdaff63dac6092af18b005b19a9e97d7e2db93b687cd\"" Jul 12 00:26:24.991625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884965362.mount: Deactivated successfully. 
Jul 12 00:26:25.802974 kubelet[2169]: E0712 00:26:25.802900 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:26.803584 kubelet[2169]: E0712 00:26:26.803478 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:27.804117 kubelet[2169]: E0712 00:26:27.803968 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:28.804910 kubelet[2169]: E0712 00:26:28.804803 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:29.805557 kubelet[2169]: E0712 00:26:29.805501 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:30.806409 kubelet[2169]: E0712 00:26:30.806147 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:30.964174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2251224836.mount: Deactivated successfully. Jul 12 00:26:31.807310 kubelet[2169]: E0712 00:26:31.807251 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:32.808432 kubelet[2169]: E0712 00:26:32.808364 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:33.809026 kubelet[2169]: E0712 00:26:33.808932 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:34.810591 kubelet[2169]: E0712 00:26:34.810532 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:35.128164 env[1817]: time="2025-07-12T00:26:35.127972151Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:35.164983 env[1817]: time="2025-07-12T00:26:35.164912506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:35.181710 env[1817]: time="2025-07-12T00:26:35.181638216Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:35.182274 env[1817]: time="2025-07-12T00:26:35.182211370Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 12 00:26:35.188856 env[1817]: time="2025-07-12T00:26:35.188767690Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 12 00:26:35.189372 env[1817]: time="2025-07-12T00:26:35.189310145Z" level=info msg="CreateContainer within sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:26:35.526080 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount786334820.mount: Deactivated successfully. Jul 12 00:26:35.751717 env[1817]: time="2025-07-12T00:26:35.751526882Z" level=info msg="CreateContainer within sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631\"" Jul 12 00:26:35.753853 env[1817]: time="2025-07-12T00:26:35.753773304Z" level=info msg="StartContainer for \"f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631\"" Jul 12 00:26:35.798129 systemd[1]: Started cri-containerd-f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631.scope. Jul 12 00:26:35.811641 kubelet[2169]: E0712 00:26:35.811517 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:35.863859 env[1817]: time="2025-07-12T00:26:35.861718007Z" level=info msg="StartContainer for \"f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631\" returns successfully" Jul 12 00:26:35.884156 systemd[1]: cri-containerd-f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631.scope: Deactivated successfully. Jul 12 00:26:36.225783 env[1817]: time="2025-07-12T00:26:36.225684368Z" level=info msg="shim disconnected" id=f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631 Jul 12 00:26:36.226437 env[1817]: time="2025-07-12T00:26:36.225786063Z" level=warning msg="cleaning up after shim disconnected" id=f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631 namespace=k8s.io Jul 12 00:26:36.226437 env[1817]: time="2025-07-12T00:26:36.225808581Z" level=info msg="cleaning up dead shim" Jul 12 00:26:36.249396 env[1817]: time="2025-07-12T00:26:36.249002758Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:26:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2352 runtime=io.containerd.runc.v2\n" Jul 12 00:26:36.517472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631-rootfs.mount: Deactivated successfully. Jul 12 00:26:36.812194 kubelet[2169]: E0712 00:26:36.812128 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:37.088396 env[1817]: time="2025-07-12T00:26:37.087868297Z" level=info msg="CreateContainer within sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:26:37.118222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3483213554.mount: Deactivated successfully. Jul 12 00:26:37.130088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1319887388.mount: Deactivated successfully. Jul 12 00:26:37.131437 env[1817]: time="2025-07-12T00:26:37.131363094Z" level=info msg="CreateContainer within sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d\"" Jul 12 00:26:37.132733 env[1817]: time="2025-07-12T00:26:37.132662435Z" level=info msg="StartContainer for \"05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d\"" Jul 12 00:26:37.168798 systemd[1]: Started cri-containerd-05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d.scope. 
Jul 12 00:26:37.262330 env[1817]: time="2025-07-12T00:26:37.262205369Z" level=info msg="StartContainer for \"05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d\" returns successfully" Jul 12 00:26:37.278774 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:26:37.280103 systemd[1]: Stopped systemd-sysctl.service. Jul 12 00:26:37.280573 systemd[1]: Stopping systemd-sysctl.service... Jul 12 00:26:37.286539 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:26:37.296117 systemd[1]: cri-containerd-05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d.scope: Deactivated successfully. Jul 12 00:26:37.304015 systemd[1]: Finished systemd-sysctl.service. Jul 12 00:26:37.424066 env[1817]: time="2025-07-12T00:26:37.423917858Z" level=info msg="shim disconnected" id=05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d Jul 12 00:26:37.424358 env[1817]: time="2025-07-12T00:26:37.424321551Z" level=warning msg="cleaning up after shim disconnected" id=05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d namespace=k8s.io Jul 12 00:26:37.425186 env[1817]: time="2025-07-12T00:26:37.424447962Z" level=info msg="cleaning up dead shim" Jul 12 00:26:37.439208 env[1817]: time="2025-07-12T00:26:37.439152181Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:26:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2418 runtime=io.containerd.runc.v2\n" Jul 12 00:26:37.515502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289152372.mount: Deactivated successfully. Jul 12 00:26:37.812984 kubelet[2169]: E0712 00:26:37.812898 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:38.095108 env[1817]: time="2025-07-12T00:26:38.094768511Z" level=info msg="CreateContainer within sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:26:38.130484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3877012564.mount: Deactivated successfully. Jul 12 00:26:38.141311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount290196809.mount: Deactivated successfully. Jul 12 00:26:38.147661 env[1817]: time="2025-07-12T00:26:38.147576634Z" level=info msg="CreateContainer within sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9\"" Jul 12 00:26:38.149130 env[1817]: time="2025-07-12T00:26:38.149079541Z" level=info msg="StartContainer for \"b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9\"" Jul 12 00:26:38.200701 systemd[1]: Started cri-containerd-b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9.scope. 
Jul 12 00:26:38.261959 env[1817]: time="2025-07-12T00:26:38.261760690Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:38.263541 env[1817]: time="2025-07-12T00:26:38.263494436Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:38.265972 env[1817]: time="2025-07-12T00:26:38.265900670Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:38.268306 env[1817]: time="2025-07-12T00:26:38.268238523Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:38.269513 env[1817]: time="2025-07-12T00:26:38.269468170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 12 00:26:38.274181 env[1817]: time="2025-07-12T00:26:38.274126394Z" level=info msg="CreateContainer within sandbox \"fe32013b353c701542e8fdaff63dac6092af18b005b19a9e97d7e2db93b687cd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:26:38.298499 systemd[1]: cri-containerd-b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9.scope: Deactivated successfully. Jul 12 00:26:38.306945 env[1817]: time="2025-07-12T00:26:38.306887088Z" level=info msg="StartContainer for \"b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9\" returns successfully" Jul 12 00:26:38.308526 env[1817]: time="2025-07-12T00:26:38.307449242Z" level=info msg="CreateContainer within sandbox \"fe32013b353c701542e8fdaff63dac6092af18b005b19a9e97d7e2db93b687cd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"328e279b1b63368e0dffbecb78c8c29d954d51af57f1bf1eb83626f5fa734126\"" Jul 12 00:26:38.309748 env[1817]: time="2025-07-12T00:26:38.309699587Z" level=info msg="StartContainer for \"328e279b1b63368e0dffbecb78c8c29d954d51af57f1bf1eb83626f5fa734126\"" Jul 12 00:26:38.346612 systemd[1]: Started cri-containerd-328e279b1b63368e0dffbecb78c8c29d954d51af57f1bf1eb83626f5fa734126.scope. 
Jul 12 00:26:38.436155 env[1817]: time="2025-07-12T00:26:38.436093001Z" level=info msg="shim disconnected" id=b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9 Jul 12 00:26:38.436636 env[1817]: time="2025-07-12T00:26:38.436585597Z" level=warning msg="cleaning up after shim disconnected" id=b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9 namespace=k8s.io Jul 12 00:26:38.436797 env[1817]: time="2025-07-12T00:26:38.436766233Z" level=info msg="cleaning up dead shim" Jul 12 00:26:38.458036 env[1817]: time="2025-07-12T00:26:38.457960388Z" level=info msg="StartContainer for \"328e279b1b63368e0dffbecb78c8c29d954d51af57f1bf1eb83626f5fa734126\" returns successfully" Jul 12 00:26:38.460301 env[1817]: time="2025-07-12T00:26:38.460245481Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:26:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2508 runtime=io.containerd.runc.v2\n" Jul 12 00:26:38.813967 kubelet[2169]: E0712 00:26:38.813785 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:39.104970 env[1817]: time="2025-07-12T00:26:39.104784546Z" level=info msg="CreateContainer within sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:26:39.134117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount791883110.mount: Deactivated successfully. Jul 12 00:26:39.148181 kubelet[2169]: I0712 00:26:39.148016 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kpth2" podStartSLOduration=2.747159292 podStartE2EDuration="16.147679662s" podCreationTimestamp="2025-07-12 00:26:23 +0000 UTC" firstStartedPulling="2025-07-12 00:26:24.871100586 +0000 UTC m=+3.333546011" lastFinishedPulling="2025-07-12 00:26:38.271620956 +0000 UTC m=+16.734066381" observedRunningTime="2025-07-12 00:26:39.114888796 +0000 UTC m=+17.577334245" watchObservedRunningTime="2025-07-12 00:26:39.147679662 +0000 UTC m=+17.610125135" Jul 12 00:26:39.151231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755096033.mount: Deactivated successfully. Jul 12 00:26:39.155996 env[1817]: time="2025-07-12T00:26:39.155936488Z" level=info msg="CreateContainer within sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363\"" Jul 12 00:26:39.157132 env[1817]: time="2025-07-12T00:26:39.157082671Z" level=info msg="StartContainer for \"cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363\"" Jul 12 00:26:39.185230 systemd[1]: Started cri-containerd-cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363.scope. Jul 12 00:26:39.237060 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 12 00:26:39.253792 systemd[1]: cri-containerd-cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363.scope: Deactivated successfully. 
Jul 12 00:26:39.256197 env[1817]: time="2025-07-12T00:26:39.256049558Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad9b5cda_133b_4315_8134_d9b365f631c5.slice/cri-containerd-cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363.scope/memory.events\": no such file or directory" Jul 12 00:26:39.261249 env[1817]: time="2025-07-12T00:26:39.261191875Z" level=info msg="StartContainer for \"cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363\" returns successfully" Jul 12 00:26:39.300201 env[1817]: time="2025-07-12T00:26:39.300138042Z" level=info msg="shim disconnected" id=cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363 Jul 12 00:26:39.300947 env[1817]: time="2025-07-12T00:26:39.300898738Z" level=warning msg="cleaning up after shim disconnected" id=cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363 namespace=k8s.io Jul 12 00:26:39.301080 env[1817]: time="2025-07-12T00:26:39.301052863Z" level=info msg="cleaning up dead shim" Jul 12 00:26:39.315385 env[1817]: time="2025-07-12T00:26:39.315329428Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:26:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2701 runtime=io.containerd.runc.v2\n" Jul 12 00:26:39.815104 kubelet[2169]: E0712 00:26:39.815052 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:40.110504 env[1817]: time="2025-07-12T00:26:40.110370461Z" level=info msg="CreateContainer within sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:26:40.143784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3892624487.mount: Deactivated successfully. Jul 12 00:26:40.153831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1818678094.mount: Deactivated successfully. Jul 12 00:26:40.163630 env[1817]: time="2025-07-12T00:26:40.163570449Z" level=info msg="CreateContainer within sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\"" Jul 12 00:26:40.164652 env[1817]: time="2025-07-12T00:26:40.164603525Z" level=info msg="StartContainer for \"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\"" Jul 12 00:26:40.193742 systemd[1]: Started cri-containerd-79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4.scope. Jul 12 00:26:40.281601 env[1817]: time="2025-07-12T00:26:40.281450145Z" level=info msg="StartContainer for \"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\" returns successfully" Jul 12 00:26:40.536266 kubelet[2169]: I0712 00:26:40.536134 2169 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:26:40.579878 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 12 00:26:40.815265 kubelet[2169]: E0712 00:26:40.815188 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:41.393860 kernel: Initializing XFRM netlink socket Jul 12 00:26:41.399855 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 12 00:26:41.815344 kubelet[2169]: E0712 00:26:41.815308 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:42.802241 kubelet[2169]: E0712 00:26:42.802178 2169 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:42.816510 kubelet[2169]: E0712 00:26:42.816465 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:43.197741 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 12 00:26:43.196756 systemd-networkd[1534]: cilium_host: Link UP Jul 12 00:26:43.197129 systemd-networkd[1534]: cilium_net: Link UP Jul 12 00:26:43.197137 systemd-networkd[1534]: cilium_net: Gained carrier Jul 12 00:26:43.197516 systemd-networkd[1534]: cilium_host: Gained carrier Jul 12 00:26:43.197701 (udev-worker)[2848]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:26:43.198756 (udev-worker)[2847]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:26:43.370838 systemd-networkd[1534]: cilium_vxlan: Link UP Jul 12 00:26:43.370855 systemd-networkd[1534]: cilium_vxlan: Gained carrier Jul 12 00:26:43.817180 kubelet[2169]: E0712 00:26:43.817111 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:43.881469 systemd-networkd[1534]: cilium_net: Gained IPv6LL Jul 12 00:26:43.910885 kernel: NET: Registered PF_ALG protocol family Jul 12 00:26:44.009516 systemd-networkd[1534]: cilium_host: Gained IPv6LL Jul 12 00:26:44.818090 kubelet[2169]: E0712 00:26:44.818025 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:45.033432 systemd-networkd[1534]: cilium_vxlan: Gained IPv6LL Jul 12 00:26:45.167303 (udev-worker)[2858]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:26:45.185700 systemd-networkd[1534]: lxc_health: Link UP Jul 12 00:26:45.211646 systemd-networkd[1534]: lxc_health: Gained carrier Jul 12 00:26:45.211964 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 12 00:26:45.819237 kubelet[2169]: E0712 00:26:45.819173 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:46.463475 kubelet[2169]: I0712 00:26:46.463365 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fxnjd" podStartSLOduration=13.112830606 podStartE2EDuration="23.463342419s" podCreationTimestamp="2025-07-12 00:26:23 +0000 UTC" firstStartedPulling="2025-07-12 00:26:24.834494019 +0000 UTC m=+3.296939444" lastFinishedPulling="2025-07-12 00:26:35.18500576 +0000 UTC m=+13.647451257" observedRunningTime="2025-07-12 00:26:41.142327202 +0000 UTC m=+19.604772651" watchObservedRunningTime="2025-07-12 00:26:46.463342419 +0000 UTC m=+24.925787856" Jul 12 00:26:46.613842 systemd[1]: Created slice kubepods-besteffort-pod85f63800_a442_46e6_8226_a3cfc72e79f9.slice. 
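
Note on the systemd-networkd entries above: they trace the Cilium datapath coming up (cilium_net, cilium_host and cilium_vxlan gaining carrier, then lxc_health once the agent is up); lxc_health later loses carrier when the agent is stopped at 00:27:33. A throwaway Python sketch that reduces such journal lines to a last-event-per-interface map (a toy reducer over lines copied from this log, not part of any of the tools logging here):

import re

# A subset of the systemd-networkd lines from this log, timestamps trimmed.
lines = [
    "systemd-networkd[1534]: cilium_host: Link UP",
    "systemd-networkd[1534]: cilium_net: Link UP",
    "systemd-networkd[1534]: cilium_net: Gained carrier",
    "systemd-networkd[1534]: cilium_host: Gained carrier",
    "systemd-networkd[1534]: cilium_vxlan: Link UP",
    "systemd-networkd[1534]: cilium_vxlan: Gained carrier",
    "systemd-networkd[1534]: cilium_vxlan: Gained IPv6LL",
    "systemd-networkd[1534]: lxc_health: Link UP",
    "systemd-networkd[1534]: lxc_health: Gained carrier",
    "systemd-networkd[1534]: lxc_health: Link DOWN",
    "systemd-networkd[1534]: lxc_health: Lost carrier",
]

EVENT = re.compile(
    r"systemd-networkd\[\d+\]: (?P<ifname>[\w.-]+): "
    r"(?P<event>Link UP|Link DOWN|Gained carrier|Lost carrier|Gained IPv6LL)"
)

state = {}
for line in lines:
    m = EVENT.search(line)
    if m:
        state[m["ifname"]] = m["event"]   # keep only the most recent event per interface

for ifname, event in state.items():
    print(f"{ifname}: {event}")
# cilium_host: Gained carrier
# cilium_net: Gained carrier
# cilium_vxlan: Gained IPv6LL
# lxc_health: Lost carrier
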
Jul 12 00:26:46.640523 kubelet[2169]: I0712 00:26:46.640456 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqjqx\" (UniqueName: \"kubernetes.io/projected/85f63800-a442-46e6-8226-a3cfc72e79f9-kube-api-access-wqjqx\") pod \"nginx-deployment-8587fbcb89-rn46j\" (UID: \"85f63800-a442-46e6-8226-a3cfc72e79f9\") " pod="default/nginx-deployment-8587fbcb89-rn46j" Jul 12 00:26:46.761554 systemd-networkd[1534]: lxc_health: Gained IPv6LL Jul 12 00:26:46.820231 kubelet[2169]: E0712 00:26:46.820161 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:46.921099 env[1817]: time="2025-07-12T00:26:46.921023841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rn46j,Uid:85f63800-a442-46e6-8226-a3cfc72e79f9,Namespace:default,Attempt:0,}" Jul 12 00:26:47.015433 systemd-networkd[1534]: lxcb39bd3e4413d: Link UP Jul 12 00:26:47.026892 kernel: eth0: renamed from tmp6eedb Jul 12 00:26:47.041224 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:26:47.041355 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb39bd3e4413d: link becomes ready Jul 12 00:26:47.039040 systemd-networkd[1534]: lxcb39bd3e4413d: Gained carrier Jul 12 00:26:47.573752 amazon-ssm-agent[1797]: 2025-07-12 00:26:47 INFO [HealthCheck] HealthCheck reporting agent health. Jul 12 00:26:47.821094 kubelet[2169]: E0712 00:26:47.821019 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:48.106487 systemd-networkd[1534]: lxcb39bd3e4413d: Gained IPv6LL Jul 12 00:26:48.801204 amazon-ssm-agent[1797]: 2025-07-12 00:26:48 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Jul 12 00:26:48.821835 kubelet[2169]: E0712 00:26:48.821751 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:49.822223 kubelet[2169]: E0712 00:26:49.822174 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:50.823163 kubelet[2169]: E0712 00:26:50.823114 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:51.824981 kubelet[2169]: E0712 00:26:51.824906 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:52.825214 kubelet[2169]: E0712 00:26:52.825166 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:53.826230 kubelet[2169]: E0712 00:26:53.826147 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:54.072736 env[1817]: time="2025-07-12T00:26:54.072183009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:54.072736 env[1817]: time="2025-07-12T00:26:54.072476236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:54.072736 env[1817]: time="2025-07-12T00:26:54.072506173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:54.073460 env[1817]: time="2025-07-12T00:26:54.072805534Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6eedbfc047e6882270873e15b84b3bb8ba07d1bca3ca5fafd50f39f0d4927d0a pid=3225 runtime=io.containerd.runc.v2 Jul 12 00:26:54.109418 systemd[1]: run-containerd-runc-k8s.io-6eedbfc047e6882270873e15b84b3bb8ba07d1bca3ca5fafd50f39f0d4927d0a-runc.iMZrrB.mount: Deactivated successfully. Jul 12 00:26:54.120211 systemd[1]: Started cri-containerd-6eedbfc047e6882270873e15b84b3bb8ba07d1bca3ca5fafd50f39f0d4927d0a.scope. Jul 12 00:26:54.195928 env[1817]: time="2025-07-12T00:26:54.195872537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-rn46j,Uid:85f63800-a442-46e6-8226-a3cfc72e79f9,Namespace:default,Attempt:0,} returns sandbox id \"6eedbfc047e6882270873e15b84b3bb8ba07d1bca3ca5fafd50f39f0d4927d0a\"" Jul 12 00:26:54.199209 env[1817]: time="2025-07-12T00:26:54.199160095Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 12 00:26:54.411145 update_engine[1809]: I0712 00:26:54.410260 1809 update_attempter.cc:509] Updating boot flags... Jul 12 00:26:54.826371 kubelet[2169]: E0712 00:26:54.826288 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:55.826989 kubelet[2169]: E0712 00:26:55.826917 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:56.827761 kubelet[2169]: E0712 00:26:56.827691 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:57.707204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3961727695.mount: Deactivated successfully. 
Jul 12 00:26:57.828509 kubelet[2169]: E0712 00:26:57.828446 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:58.829592 kubelet[2169]: E0712 00:26:58.829531 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:59.830184 kubelet[2169]: E0712 00:26:59.830021 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:26:59.871002 env[1817]: time="2025-07-12T00:26:59.870933912Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:59.873205 env[1817]: time="2025-07-12T00:26:59.873143364Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:59.876499 env[1817]: time="2025-07-12T00:26:59.876447363Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:59.879576 env[1817]: time="2025-07-12T00:26:59.879527424Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:59.881239 env[1817]: time="2025-07-12T00:26:59.881185125Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 12 00:26:59.887087 env[1817]: time="2025-07-12T00:26:59.887028572Z" level=info msg="CreateContainer within sandbox \"6eedbfc047e6882270873e15b84b3bb8ba07d1bca3ca5fafd50f39f0d4927d0a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 12 00:26:59.904344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1496289804.mount: Deactivated successfully. Jul 12 00:26:59.916774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1737776776.mount: Deactivated successfully. Jul 12 00:26:59.922529 env[1817]: time="2025-07-12T00:26:59.922444006Z" level=info msg="CreateContainer within sandbox \"6eedbfc047e6882270873e15b84b3bb8ba07d1bca3ca5fafd50f39f0d4927d0a\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b86c0a4e624326bbb6c5b7ba71c4e7b71fd7f1ec6dc5e9e346b12d76a611ef8f\"" Jul 12 00:26:59.923690 env[1817]: time="2025-07-12T00:26:59.923643867Z" level=info msg="StartContainer for \"b86c0a4e624326bbb6c5b7ba71c4e7b71fd7f1ec6dc5e9e346b12d76a611ef8f\"" Jul 12 00:26:59.969520 systemd[1]: Started cri-containerd-b86c0a4e624326bbb6c5b7ba71c4e7b71fd7f1ec6dc5e9e346b12d76a611ef8f.scope. 
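
Note on unit names like var-lib-containerd-tmpmounts-containerd\x2dmount1496289804.mount above: they are systemd's escaping of the mount-point path, where '/' becomes '-' and characters outside [A-Za-z0-9:_.] (including a literal '-', or the '~' seen later as \x7e) become \xNN, plus the ".mount" unit suffix. A rough Python equivalent of `systemd-escape --path`, good enough to reproduce the names in this log (the full rules, e.g. for a leading '.', are in systemd.unit(5)):

import string

ALLOWED = set(string.ascii_letters + string.digits + ":_.")

def systemd_escape_path(path: str) -> str:
    """Approximate `systemd-escape --path`: trim slashes, map '/' to '-', and
    hex-escape anything outside [A-Za-z0-9:_.] as \\xNN (leading-dot rule omitted)."""
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch in ALLOWED:
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")
    return "".join(out)

print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount1496289804") + ".mount")
# -> var-lib-containerd-tmpmounts-containerd\x2dmount1496289804.mount
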
Jul 12 00:27:00.048221 env[1817]: time="2025-07-12T00:27:00.048157994Z" level=info msg="StartContainer for \"b86c0a4e624326bbb6c5b7ba71c4e7b71fd7f1ec6dc5e9e346b12d76a611ef8f\" returns successfully" Jul 12 00:27:00.173869 kubelet[2169]: I0712 00:27:00.172914 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-rn46j" podStartSLOduration=8.48730345 podStartE2EDuration="14.172893223s" podCreationTimestamp="2025-07-12 00:26:46 +0000 UTC" firstStartedPulling="2025-07-12 00:26:54.198368893 +0000 UTC m=+32.660814306" lastFinishedPulling="2025-07-12 00:26:59.883958654 +0000 UTC m=+38.346404079" observedRunningTime="2025-07-12 00:27:00.171512457 +0000 UTC m=+38.633957894" watchObservedRunningTime="2025-07-12 00:27:00.172893223 +0000 UTC m=+38.635338672" Jul 12 00:27:00.830844 kubelet[2169]: E0712 00:27:00.830784 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:01.831535 kubelet[2169]: E0712 00:27:01.831466 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:02.802246 kubelet[2169]: E0712 00:27:02.802206 2169 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:02.831992 kubelet[2169]: E0712 00:27:02.831943 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:03.832909 kubelet[2169]: E0712 00:27:03.832865 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:04.833613 kubelet[2169]: E0712 00:27:04.833574 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:05.834935 kubelet[2169]: E0712 00:27:05.834875 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:06.837713 kubelet[2169]: E0712 00:27:06.837663 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:06.930652 systemd[1]: Created slice kubepods-besteffort-podf1bcebfb_a8d7_4b08_a6dc_79919492eb8c.slice. 
Jul 12 00:27:06.971931 kubelet[2169]: I0712 00:27:06.971885 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f1bcebfb-a8d7-4b08-a6dc-79919492eb8c-data\") pod \"nfs-server-provisioner-0\" (UID: \"f1bcebfb-a8d7-4b08-a6dc-79919492eb8c\") " pod="default/nfs-server-provisioner-0" Jul 12 00:27:06.972216 kubelet[2169]: I0712 00:27:06.972186 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzgkn\" (UniqueName: \"kubernetes.io/projected/f1bcebfb-a8d7-4b08-a6dc-79919492eb8c-kube-api-access-pzgkn\") pod \"nfs-server-provisioner-0\" (UID: \"f1bcebfb-a8d7-4b08-a6dc-79919492eb8c\") " pod="default/nfs-server-provisioner-0" Jul 12 00:27:07.238115 env[1817]: time="2025-07-12T00:27:07.237467839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f1bcebfb-a8d7-4b08-a6dc-79919492eb8c,Namespace:default,Attempt:0,}" Jul 12 00:27:07.303099 systemd-networkd[1534]: lxcd69ba438f9f7: Link UP Jul 12 00:27:07.305038 (udev-worker)[3600]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:27:07.308881 kernel: eth0: renamed from tmp47b14 Jul 12 00:27:07.317861 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:27:07.318000 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd69ba438f9f7: link becomes ready Jul 12 00:27:07.320734 systemd-networkd[1534]: lxcd69ba438f9f7: Gained carrier Jul 12 00:27:07.615444 env[1817]: time="2025-07-12T00:27:07.615316302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:07.615759 env[1817]: time="2025-07-12T00:27:07.615437823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:07.615759 env[1817]: time="2025-07-12T00:27:07.615500090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:07.617478 env[1817]: time="2025-07-12T00:27:07.616148875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47b142cae2a966196c63bbb8d4a96fbead9a8a7929e634d579162a7f35937969 pid=3615 runtime=io.containerd.runc.v2 Jul 12 00:27:07.651015 systemd[1]: Started cri-containerd-47b142cae2a966196c63bbb8d4a96fbead9a8a7929e634d579162a7f35937969.scope. Jul 12 00:27:07.740257 env[1817]: time="2025-07-12T00:27:07.740202645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f1bcebfb-a8d7-4b08-a6dc-79919492eb8c,Namespace:default,Attempt:0,} returns sandbox id \"47b142cae2a966196c63bbb8d4a96fbead9a8a7929e634d579162a7f35937969\"" Jul 12 00:27:07.743555 env[1817]: time="2025-07-12T00:27:07.743506808Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 12 00:27:07.837942 kubelet[2169]: E0712 00:27:07.837866 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:08.093491 systemd[1]: run-containerd-runc-k8s.io-47b142cae2a966196c63bbb8d4a96fbead9a8a7929e634d579162a7f35937969-runc.ThlzUL.mount: Deactivated successfully. 
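
Note: every sandbox and container in these entries carries a 64-character hex containerd ID (for example the 47b142... sandbox returned just above), so one quick way to trace a single object's lifecycle is to group entries by those IDs. A throwaway Python filter for that, fed from something like `journalctl | python3 count_ids.py` (sketch only, adjust the pipeline to taste):

import re
import sys
from collections import Counter

# containerd sandbox/container IDs are 64 lowercase hex characters.
HEX_ID = re.compile(r"\b[0-9a-f]{64}\b")

counts = Counter(m.group(0) for line in sys.stdin for m in HEX_ID.finditer(line))
for cid, n in counts.most_common(10):
    print(f"{n:4d}  {cid[:12]}")   # the first 12 characters are enough to tell the IDs here apart
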
Jul 12 00:27:08.713537 systemd-networkd[1534]: lxcd69ba438f9f7: Gained IPv6LL Jul 12 00:27:08.838965 kubelet[2169]: E0712 00:27:08.838746 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:09.839197 kubelet[2169]: E0712 00:27:09.839147 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:10.840926 kubelet[2169]: E0712 00:27:10.840828 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:10.874389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1999693122.mount: Deactivated successfully. Jul 12 00:27:11.841258 kubelet[2169]: E0712 00:27:11.841192 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:12.841705 kubelet[2169]: E0712 00:27:12.841632 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:13.842308 kubelet[2169]: E0712 00:27:13.842229 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:14.411312 env[1817]: time="2025-07-12T00:27:14.411244090Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:27:14.416083 env[1817]: time="2025-07-12T00:27:14.416028541Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:27:14.420906 env[1817]: time="2025-07-12T00:27:14.420837103Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:27:14.426230 env[1817]: time="2025-07-12T00:27:14.426153239Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:27:14.428231 env[1817]: time="2025-07-12T00:27:14.428179438Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jul 12 00:27:14.432926 env[1817]: time="2025-07-12T00:27:14.432865704Z" level=info msg="CreateContainer within sandbox \"47b142cae2a966196c63bbb8d4a96fbead9a8a7929e634d579162a7f35937969\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 12 00:27:14.456496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1874281943.mount: Deactivated successfully. 
Jul 12 00:27:14.470269 env[1817]: time="2025-07-12T00:27:14.470209361Z" level=info msg="CreateContainer within sandbox \"47b142cae2a966196c63bbb8d4a96fbead9a8a7929e634d579162a7f35937969\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"efbc3c756d25d644abbe08f4cfdfba792ac315330efd01845661d54cbc01c51e\"" Jul 12 00:27:14.471371 env[1817]: time="2025-07-12T00:27:14.471166768Z" level=info msg="StartContainer for \"efbc3c756d25d644abbe08f4cfdfba792ac315330efd01845661d54cbc01c51e\"" Jul 12 00:27:14.509156 systemd[1]: Started cri-containerd-efbc3c756d25d644abbe08f4cfdfba792ac315330efd01845661d54cbc01c51e.scope. Jul 12 00:27:14.587171 env[1817]: time="2025-07-12T00:27:14.587055329Z" level=info msg="StartContainer for \"efbc3c756d25d644abbe08f4cfdfba792ac315330efd01845661d54cbc01c51e\" returns successfully" Jul 12 00:27:14.842459 kubelet[2169]: E0712 00:27:14.842387 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:15.212529 kubelet[2169]: I0712 00:27:15.212235 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.524732546 podStartE2EDuration="9.21221442s" podCreationTimestamp="2025-07-12 00:27:06 +0000 UTC" firstStartedPulling="2025-07-12 00:27:07.742613865 +0000 UTC m=+46.205059278" lastFinishedPulling="2025-07-12 00:27:14.430095727 +0000 UTC m=+52.892541152" observedRunningTime="2025-07-12 00:27:15.210945529 +0000 UTC m=+53.673390990" watchObservedRunningTime="2025-07-12 00:27:15.21221442 +0000 UTC m=+53.674659845" Jul 12 00:27:15.842931 kubelet[2169]: E0712 00:27:15.842888 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:16.844101 kubelet[2169]: E0712 00:27:16.844031 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:17.845106 kubelet[2169]: E0712 00:27:17.845034 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:18.845887 kubelet[2169]: E0712 00:27:18.845831 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:19.846474 kubelet[2169]: E0712 00:27:19.846430 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:20.847453 kubelet[2169]: E0712 00:27:20.847389 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:21.848565 kubelet[2169]: E0712 00:27:21.848497 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:22.801955 kubelet[2169]: E0712 00:27:22.801886 2169 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:22.848792 kubelet[2169]: E0712 00:27:22.848761 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:23.849957 kubelet[2169]: E0712 00:27:23.849889 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:24.737710 systemd[1]: Created slice 
kubepods-besteffort-pod656a1034_cd98_411a_9d4a_260892e9a837.slice. Jul 12 00:27:24.789890 kubelet[2169]: I0712 00:27:24.789644 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ac369da3-8167-4283-bf6f-42605f426601\" (UniqueName: \"kubernetes.io/nfs/656a1034-cd98-411a-9d4a-260892e9a837-pvc-ac369da3-8167-4283-bf6f-42605f426601\") pod \"test-pod-1\" (UID: \"656a1034-cd98-411a-9d4a-260892e9a837\") " pod="default/test-pod-1" Jul 12 00:27:24.790184 kubelet[2169]: I0712 00:27:24.790155 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8sr\" (UniqueName: \"kubernetes.io/projected/656a1034-cd98-411a-9d4a-260892e9a837-kube-api-access-lr8sr\") pod \"test-pod-1\" (UID: \"656a1034-cd98-411a-9d4a-260892e9a837\") " pod="default/test-pod-1" Jul 12 00:27:24.850615 kubelet[2169]: E0712 00:27:24.850578 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:24.942857 kernel: FS-Cache: Loaded Jul 12 00:27:24.996363 kernel: RPC: Registered named UNIX socket transport module. Jul 12 00:27:24.996518 kernel: RPC: Registered udp transport module. Jul 12 00:27:25.001397 kernel: RPC: Registered tcp transport module. Jul 12 00:27:25.001516 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 12 00:27:25.083008 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 12 00:27:25.342029 kernel: NFS: Registering the id_resolver key type Jul 12 00:27:25.342189 kernel: Key type id_resolver registered Jul 12 00:27:25.342251 kernel: Key type id_legacy registered Jul 12 00:27:25.393746 nfsidmap[3737]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jul 12 00:27:25.399292 nfsidmap[3738]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jul 12 00:27:25.647260 env[1817]: time="2025-07-12T00:27:25.647083071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:656a1034-cd98-411a-9d4a-260892e9a837,Namespace:default,Attempt:0,}" Jul 12 00:27:25.697644 systemd-networkd[1534]: lxc9731b2eb8c89: Link UP Jul 12 00:27:25.705084 (udev-worker)[3723]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:27:25.707263 kernel: eth0: renamed from tmp6cf38 Jul 12 00:27:25.719627 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:27:25.719759 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9731b2eb8c89: link becomes ready Jul 12 00:27:25.719613 systemd-networkd[1534]: lxc9731b2eb8c89: Gained carrier Jul 12 00:27:25.851842 kubelet[2169]: E0712 00:27:25.851732 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:25.998842 env[1817]: time="2025-07-12T00:27:25.998607106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:25.998842 env[1817]: time="2025-07-12T00:27:25.998764226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:25.999581 env[1817]: time="2025-07-12T00:27:25.999509971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:26.000037 env[1817]: time="2025-07-12T00:27:25.999949984Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6cf3827008db99c13a48f1e37132ea6321ea1feb15812d193508bacaf3c8bfb4 pid=3765 runtime=io.containerd.runc.v2 Jul 12 00:27:26.032968 systemd[1]: run-containerd-runc-k8s.io-6cf3827008db99c13a48f1e37132ea6321ea1feb15812d193508bacaf3c8bfb4-runc.i36jlN.mount: Deactivated successfully. Jul 12 00:27:26.041448 systemd[1]: Started cri-containerd-6cf3827008db99c13a48f1e37132ea6321ea1feb15812d193508bacaf3c8bfb4.scope. Jul 12 00:27:26.117571 env[1817]: time="2025-07-12T00:27:26.117500953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:656a1034-cd98-411a-9d4a-260892e9a837,Namespace:default,Attempt:0,} returns sandbox id \"6cf3827008db99c13a48f1e37132ea6321ea1feb15812d193508bacaf3c8bfb4\"" Jul 12 00:27:26.120632 env[1817]: time="2025-07-12T00:27:26.120581950Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 12 00:27:26.428148 env[1817]: time="2025-07-12T00:27:26.428093193Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:27:26.432179 env[1817]: time="2025-07-12T00:27:26.432096134Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:27:26.435960 env[1817]: time="2025-07-12T00:27:26.435898788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:27:26.439736 env[1817]: time="2025-07-12T00:27:26.439672362Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:27:26.441315 env[1817]: time="2025-07-12T00:27:26.441265378Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 12 00:27:26.446339 env[1817]: time="2025-07-12T00:27:26.446285734Z" level=info msg="CreateContainer within sandbox \"6cf3827008db99c13a48f1e37132ea6321ea1feb15812d193508bacaf3c8bfb4\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 12 00:27:26.484413 env[1817]: time="2025-07-12T00:27:26.484351369Z" level=info msg="CreateContainer within sandbox \"6cf3827008db99c13a48f1e37132ea6321ea1feb15812d193508bacaf3c8bfb4\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"1db8a15b987167c8d17eda5a289a3d9ed8a205dc86e44a7ba33b5704e6aaddd5\"" Jul 12 00:27:26.485796 env[1817]: time="2025-07-12T00:27:26.485700916Z" level=info msg="StartContainer for \"1db8a15b987167c8d17eda5a289a3d9ed8a205dc86e44a7ba33b5704e6aaddd5\"" Jul 12 00:27:26.516009 systemd[1]: Started cri-containerd-1db8a15b987167c8d17eda5a289a3d9ed8a205dc86e44a7ba33b5704e6aaddd5.scope. 
Jul 12 00:27:26.580976 env[1817]: time="2025-07-12T00:27:26.580914533Z" level=info msg="StartContainer for \"1db8a15b987167c8d17eda5a289a3d9ed8a205dc86e44a7ba33b5704e6aaddd5\" returns successfully" Jul 12 00:27:26.851975 kubelet[2169]: E0712 00:27:26.851919 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:26.922439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2774601777.mount: Deactivated successfully. Jul 12 00:27:27.242177 kubelet[2169]: I0712 00:27:27.241996 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.917952597 podStartE2EDuration="20.241953579s" podCreationTimestamp="2025-07-12 00:27:07 +0000 UTC" firstStartedPulling="2025-07-12 00:27:26.119878979 +0000 UTC m=+64.582324404" lastFinishedPulling="2025-07-12 00:27:26.443879961 +0000 UTC m=+64.906325386" observedRunningTime="2025-07-12 00:27:27.241899946 +0000 UTC m=+65.704345407" watchObservedRunningTime="2025-07-12 00:27:27.241953579 +0000 UTC m=+65.704399004" Jul 12 00:27:27.337077 systemd-networkd[1534]: lxc9731b2eb8c89: Gained IPv6LL Jul 12 00:27:27.852327 kubelet[2169]: E0712 00:27:27.852284 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:28.853835 kubelet[2169]: E0712 00:27:28.853744 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:29.854840 kubelet[2169]: E0712 00:27:29.854744 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:30.855151 kubelet[2169]: E0712 00:27:30.855082 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:31.855794 kubelet[2169]: E0712 00:27:31.855751 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:32.857149 kubelet[2169]: E0712 00:27:32.857109 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:33.302723 systemd[1]: run-containerd-runc-k8s.io-79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4-runc.jpJ3yj.mount: Deactivated successfully. Jul 12 00:27:33.342522 env[1817]: time="2025-07-12T00:27:33.342436979Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:27:33.362631 env[1817]: time="2025-07-12T00:27:33.362558730Z" level=info msg="StopContainer for \"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\" with timeout 2 (s)" Jul 12 00:27:33.363594 env[1817]: time="2025-07-12T00:27:33.363548398Z" level=info msg="Stop container \"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\" with signal terminated" Jul 12 00:27:33.375468 systemd-networkd[1534]: lxc_health: Link DOWN Jul 12 00:27:33.375481 systemd-networkd[1534]: lxc_health: Lost carrier Jul 12 00:27:33.412706 systemd[1]: cri-containerd-79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4.scope: Deactivated successfully. 
Jul 12 00:27:33.413459 systemd[1]: cri-containerd-79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4.scope: Consumed 13.907s CPU time. Jul 12 00:27:33.446683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4-rootfs.mount: Deactivated successfully. Jul 12 00:27:33.779388 env[1817]: time="2025-07-12T00:27:33.778437856Z" level=info msg="shim disconnected" id=79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4 Jul 12 00:27:33.779388 env[1817]: time="2025-07-12T00:27:33.778510798Z" level=warning msg="cleaning up after shim disconnected" id=79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4 namespace=k8s.io Jul 12 00:27:33.779388 env[1817]: time="2025-07-12T00:27:33.778533420Z" level=info msg="cleaning up dead shim" Jul 12 00:27:33.791767 env[1817]: time="2025-07-12T00:27:33.791677737Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:27:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3890 runtime=io.containerd.runc.v2\n" Jul 12 00:27:33.794993 env[1817]: time="2025-07-12T00:27:33.794926259Z" level=info msg="StopContainer for \"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\" returns successfully" Jul 12 00:27:33.795786 env[1817]: time="2025-07-12T00:27:33.795734039Z" level=info msg="StopPodSandbox for \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\"" Jul 12 00:27:33.795971 env[1817]: time="2025-07-12T00:27:33.795885840Z" level=info msg="Container to stop \"b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:27:33.795971 env[1817]: time="2025-07-12T00:27:33.795921400Z" level=info msg="Container to stop \"f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:27:33.795971 env[1817]: time="2025-07-12T00:27:33.795949974Z" level=info msg="Container to stop \"05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:27:33.796259 env[1817]: time="2025-07-12T00:27:33.795979041Z" level=info msg="Container to stop \"cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:27:33.796259 env[1817]: time="2025-07-12T00:27:33.796006895Z" level=info msg="Container to stop \"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:27:33.799253 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda-shm.mount: Deactivated successfully. Jul 12 00:27:33.812891 systemd[1]: cri-containerd-67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda.scope: Deactivated successfully. 
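
Note on the shutdown above: it follows the usual CRI stop sequence, where StopContainer "with timeout 2" sends the stop signal ("Stop container ... with signal terminated", i.e. SIGTERM) and the container would only be killed forcibly if it outlived the timeout; here the cilium-agent exits in time and its scope is deactivated. The same pattern in miniature as a generic Python sketch over plain processes (not containerd/runc code):

import signal
import subprocess

def stop_with_timeout(proc: subprocess.Popen, timeout: float = 2.0) -> int:
    """Send SIGTERM, wait up to `timeout` seconds, then SIGKILL as a last resort."""
    proc.send_signal(signal.SIGTERM)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()
        return proc.wait()

p = subprocess.Popen(["sleep", "60"])
print("exit status:", stop_with_timeout(p))   # -15: terminated by SIGTERM within the timeout
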
Jul 12 00:27:33.858170 kubelet[2169]: E0712 00:27:33.858117 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:33.858857 env[1817]: time="2025-07-12T00:27:33.858607324Z" level=info msg="shim disconnected" id=67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda Jul 12 00:27:33.858857 env[1817]: time="2025-07-12T00:27:33.858706225Z" level=warning msg="cleaning up after shim disconnected" id=67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda namespace=k8s.io Jul 12 00:27:33.858857 env[1817]: time="2025-07-12T00:27:33.858760398Z" level=info msg="cleaning up dead shim" Jul 12 00:27:33.872394 env[1817]: time="2025-07-12T00:27:33.872317431Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:27:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3921 runtime=io.containerd.runc.v2\n" Jul 12 00:27:33.872931 env[1817]: time="2025-07-12T00:27:33.872869084Z" level=info msg="TearDown network for sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" successfully" Jul 12 00:27:33.872931 env[1817]: time="2025-07-12T00:27:33.872918961Z" level=info msg="StopPodSandbox for \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" returns successfully" Jul 12 00:27:34.044960 kubelet[2169]: I0712 00:27:34.044785 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-cni-path\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.044960 kubelet[2169]: I0712 00:27:34.044867 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-xtables-lock\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.044960 kubelet[2169]: I0712 00:27:34.044917 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlhw5\" (UniqueName: \"kubernetes.io/projected/ad9b5cda-133b-4315-8134-d9b365f631c5-kube-api-access-vlhw5\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.045378 kubelet[2169]: I0712 00:27:34.045328 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-cni-path" (OuterVolumeSpecName: "cni-path") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:34.045603 kubelet[2169]: I0712 00:27:34.045574 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:34.046368 kubelet[2169]: I0712 00:27:34.046336 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-host-proc-sys-net\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.046597 kubelet[2169]: I0712 00:27:34.046555 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ad9b5cda-133b-4315-8134-d9b365f631c5-clustermesh-secrets\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.046794 kubelet[2169]: I0712 00:27:34.046770 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-lib-modules\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.046982 kubelet[2169]: I0712 00:27:34.046956 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad9b5cda-133b-4315-8134-d9b365f631c5-cilium-config-path\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.047129 kubelet[2169]: I0712 00:27:34.047100 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-host-proc-sys-kernel\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.047324 kubelet[2169]: I0712 00:27:34.047299 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-hostproc\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.047475 kubelet[2169]: I0712 00:27:34.047450 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-cilium-cgroup\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.047613 kubelet[2169]: I0712 00:27:34.047588 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-etc-cni-netd\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.047758 kubelet[2169]: I0712 00:27:34.047734 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ad9b5cda-133b-4315-8134-d9b365f631c5-hubble-tls\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.047944 kubelet[2169]: I0712 00:27:34.047919 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-cilium-run\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: 
\"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.048093 kubelet[2169]: I0712 00:27:34.048066 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-bpf-maps\") pod \"ad9b5cda-133b-4315-8134-d9b365f631c5\" (UID: \"ad9b5cda-133b-4315-8134-d9b365f631c5\") " Jul 12 00:27:34.048287 kubelet[2169]: I0712 00:27:34.048261 2169 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-cni-path\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.048446 kubelet[2169]: I0712 00:27:34.048421 2169 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-xtables-lock\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.048636 kubelet[2169]: I0712 00:27:34.048595 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:34.048800 kubelet[2169]: I0712 00:27:34.048773 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:34.051008 kubelet[2169]: I0712 00:27:34.050954 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9b5cda-133b-4315-8134-d9b365f631c5-kube-api-access-vlhw5" (OuterVolumeSpecName: "kube-api-access-vlhw5") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "kube-api-access-vlhw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:27:34.051289 kubelet[2169]: I0712 00:27:34.051262 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-hostproc" (OuterVolumeSpecName: "hostproc") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:34.051435 kubelet[2169]: I0712 00:27:34.051409 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:34.056060 kubelet[2169]: I0712 00:27:34.056008 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad9b5cda-133b-4315-8134-d9b365f631c5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:27:34.056312 kubelet[2169]: I0712 00:27:34.056285 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:34.056446 kubelet[2169]: I0712 00:27:34.056421 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:34.057949 kubelet[2169]: I0712 00:27:34.057763 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad9b5cda-133b-4315-8134-d9b365f631c5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:27:34.058099 kubelet[2169]: I0712 00:27:34.057980 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:34.058099 kubelet[2169]: I0712 00:27:34.058027 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:34.061986 kubelet[2169]: I0712 00:27:34.061932 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9b5cda-133b-4315-8134-d9b365f631c5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ad9b5cda-133b-4315-8134-d9b365f631c5" (UID: "ad9b5cda-133b-4315-8134-d9b365f631c5"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:27:34.149140 kubelet[2169]: I0712 00:27:34.149088 2169 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad9b5cda-133b-4315-8134-d9b365f631c5-cilium-config-path\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.149140 kubelet[2169]: I0712 00:27:34.149133 2169 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-host-proc-sys-kernel\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.149336 kubelet[2169]: I0712 00:27:34.149160 2169 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-hostproc\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.149336 kubelet[2169]: I0712 00:27:34.149181 2169 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-lib-modules\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.149336 kubelet[2169]: I0712 00:27:34.149203 2169 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ad9b5cda-133b-4315-8134-d9b365f631c5-hubble-tls\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.149336 kubelet[2169]: I0712 00:27:34.149223 2169 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-cilium-run\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.149336 kubelet[2169]: I0712 00:27:34.149246 2169 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-cilium-cgroup\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.149336 kubelet[2169]: I0712 00:27:34.149267 2169 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-etc-cni-netd\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.149336 kubelet[2169]: I0712 00:27:34.149287 2169 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-bpf-maps\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.149336 kubelet[2169]: I0712 00:27:34.149308 2169 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlhw5\" (UniqueName: \"kubernetes.io/projected/ad9b5cda-133b-4315-8134-d9b365f631c5-kube-api-access-vlhw5\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.149788 kubelet[2169]: I0712 00:27:34.149328 2169 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ad9b5cda-133b-4315-8134-d9b365f631c5-host-proc-sys-net\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.149788 kubelet[2169]: I0712 00:27:34.149349 2169 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ad9b5cda-133b-4315-8134-d9b365f631c5-clustermesh-secrets\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:34.245273 kubelet[2169]: I0712 00:27:34.245232 2169 scope.go:117] "RemoveContainer" containerID="79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4" Jul 12 00:27:34.248802 
env[1817]: time="2025-07-12T00:27:34.248682400Z" level=info msg="RemoveContainer for \"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\"" Jul 12 00:27:34.254172 systemd[1]: Removed slice kubepods-burstable-podad9b5cda_133b_4315_8134_d9b365f631c5.slice. Jul 12 00:27:34.254399 systemd[1]: kubepods-burstable-podad9b5cda_133b_4315_8134_d9b365f631c5.slice: Consumed 14.162s CPU time. Jul 12 00:27:34.256522 env[1817]: time="2025-07-12T00:27:34.256436642Z" level=info msg="RemoveContainer for \"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\" returns successfully" Jul 12 00:27:34.257278 kubelet[2169]: I0712 00:27:34.257243 2169 scope.go:117] "RemoveContainer" containerID="cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363" Jul 12 00:27:34.260679 env[1817]: time="2025-07-12T00:27:34.260628131Z" level=info msg="RemoveContainer for \"cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363\"" Jul 12 00:27:34.264742 env[1817]: time="2025-07-12T00:27:34.264687236Z" level=info msg="RemoveContainer for \"cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363\" returns successfully" Jul 12 00:27:34.266211 kubelet[2169]: I0712 00:27:34.266178 2169 scope.go:117] "RemoveContainer" containerID="b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9" Jul 12 00:27:34.268267 env[1817]: time="2025-07-12T00:27:34.268208754Z" level=info msg="RemoveContainer for \"b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9\"" Jul 12 00:27:34.271585 env[1817]: time="2025-07-12T00:27:34.271524670Z" level=info msg="RemoveContainer for \"b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9\" returns successfully" Jul 12 00:27:34.272302 kubelet[2169]: I0712 00:27:34.272270 2169 scope.go:117] "RemoveContainer" containerID="05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d" Jul 12 00:27:34.274537 env[1817]: time="2025-07-12T00:27:34.274482306Z" level=info msg="RemoveContainer for \"05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d\"" Jul 12 00:27:34.279472 env[1817]: time="2025-07-12T00:27:34.279417164Z" level=info msg="RemoveContainer for \"05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d\" returns successfully" Jul 12 00:27:34.280023 kubelet[2169]: I0712 00:27:34.279974 2169 scope.go:117] "RemoveContainer" containerID="f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631" Jul 12 00:27:34.283169 env[1817]: time="2025-07-12T00:27:34.283118470Z" level=info msg="RemoveContainer for \"f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631\"" Jul 12 00:27:34.287394 env[1817]: time="2025-07-12T00:27:34.287342193Z" level=info msg="RemoveContainer for \"f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631\" returns successfully" Jul 12 00:27:34.287947 kubelet[2169]: I0712 00:27:34.287910 2169 scope.go:117] "RemoveContainer" containerID="79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4" Jul 12 00:27:34.288673 env[1817]: time="2025-07-12T00:27:34.288524069Z" level=error msg="ContainerStatus for \"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\": not found" Jul 12 00:27:34.289166 kubelet[2169]: E0712 00:27:34.289124 2169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\": not found" containerID="79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4" Jul 12 00:27:34.289339 kubelet[2169]: I0712 00:27:34.289201 2169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4"} err="failed to get container status \"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"79d11cd9027046f3c9a7b77692968c1d85579973507dd9dca263de871c5a98c4\": not found" Jul 12 00:27:34.289414 kubelet[2169]: I0712 00:27:34.289363 2169 scope.go:117] "RemoveContainer" containerID="cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363" Jul 12 00:27:34.289920 env[1817]: time="2025-07-12T00:27:34.289756938Z" level=error msg="ContainerStatus for \"cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363\": not found" Jul 12 00:27:34.290358 kubelet[2169]: E0712 00:27:34.290316 2169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363\": not found" containerID="cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363" Jul 12 00:27:34.290448 kubelet[2169]: I0712 00:27:34.290383 2169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363"} err="failed to get container status \"cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf10147bc0e587ec0a7e4398623e3e9d068c85224e6d27aa56bbfce9cb40b363\": not found" Jul 12 00:27:34.290448 kubelet[2169]: I0712 00:27:34.290416 2169 scope.go:117] "RemoveContainer" containerID="b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9" Jul 12 00:27:34.290911 env[1817]: time="2025-07-12T00:27:34.290761174Z" level=error msg="ContainerStatus for \"b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9\": not found" Jul 12 00:27:34.291384 kubelet[2169]: E0712 00:27:34.291159 2169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9\": not found" containerID="b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9" Jul 12 00:27:34.291384 kubelet[2169]: I0712 00:27:34.291205 2169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9"} err="failed to get container status \"b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"b680041458b4a250b8f6f013a904eafad1daba0d06376b1ac535969daac463f9\": not found" Jul 12 00:27:34.291384 kubelet[2169]: I0712 00:27:34.291242 2169 scope.go:117] "RemoveContainer" 
containerID="05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d" Jul 12 00:27:34.292243 env[1817]: time="2025-07-12T00:27:34.292124914Z" level=error msg="ContainerStatus for \"05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d\": not found" Jul 12 00:27:34.295542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda-rootfs.mount: Deactivated successfully. Jul 12 00:27:34.295702 systemd[1]: var-lib-kubelet-pods-ad9b5cda\x2d133b\x2d4315\x2d8134\x2dd9b365f631c5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvlhw5.mount: Deactivated successfully. Jul 12 00:27:34.295875 systemd[1]: var-lib-kubelet-pods-ad9b5cda\x2d133b\x2d4315\x2d8134\x2dd9b365f631c5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:27:34.296015 systemd[1]: var-lib-kubelet-pods-ad9b5cda\x2d133b\x2d4315\x2d8134\x2dd9b365f631c5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:27:34.300921 kubelet[2169]: E0712 00:27:34.292535 2169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d\": not found" containerID="05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d" Jul 12 00:27:34.301038 kubelet[2169]: I0712 00:27:34.300943 2169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d"} err="failed to get container status \"05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d\": rpc error: code = NotFound desc = an error occurred when try to find container \"05e29b9c82246734bdc1ce3723ac0eb2f32bbebd2d6437b5aed56c9138bc367d\": not found" Jul 12 00:27:34.301038 kubelet[2169]: I0712 00:27:34.301004 2169 scope.go:117] "RemoveContainer" containerID="f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631" Jul 12 00:27:34.301636 env[1817]: time="2025-07-12T00:27:34.301524081Z" level=error msg="ContainerStatus for \"f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631\": not found" Jul 12 00:27:34.302050 kubelet[2169]: E0712 00:27:34.302007 2169 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631\": not found" containerID="f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631" Jul 12 00:27:34.302176 kubelet[2169]: I0712 00:27:34.302056 2169 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631"} err="failed to get container status \"f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1a9bd7c2d445138e0448edb2bacea315bb0eadb596ec9c042e4ba2fd35c3631\": not found" Jul 12 00:27:34.859102 kubelet[2169]: E0712 00:27:34.859036 2169 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:35.043010 kubelet[2169]: I0712 00:27:35.042746 2169 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad9b5cda-133b-4315-8134-d9b365f631c5" path="/var/lib/kubelet/pods/ad9b5cda-133b-4315-8134-d9b365f631c5/volumes" Jul 12 00:27:35.859766 kubelet[2169]: E0712 00:27:35.859699 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:36.704077 kubelet[2169]: E0712 00:27:36.704037 2169 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad9b5cda-133b-4315-8134-d9b365f631c5" containerName="mount-bpf-fs" Jul 12 00:27:36.704317 kubelet[2169]: E0712 00:27:36.704295 2169 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad9b5cda-133b-4315-8134-d9b365f631c5" containerName="clean-cilium-state" Jul 12 00:27:36.704448 kubelet[2169]: E0712 00:27:36.704427 2169 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad9b5cda-133b-4315-8134-d9b365f631c5" containerName="mount-cgroup" Jul 12 00:27:36.704591 kubelet[2169]: E0712 00:27:36.704569 2169 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad9b5cda-133b-4315-8134-d9b365f631c5" containerName="apply-sysctl-overwrites" Jul 12 00:27:36.704716 kubelet[2169]: E0712 00:27:36.704695 2169 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad9b5cda-133b-4315-8134-d9b365f631c5" containerName="cilium-agent" Jul 12 00:27:36.704900 kubelet[2169]: I0712 00:27:36.704877 2169 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9b5cda-133b-4315-8134-d9b365f631c5" containerName="cilium-agent" Jul 12 00:27:36.714987 systemd[1]: Created slice kubepods-besteffort-pod58987d51_ff57_4142_b8b5_26e935d381db.slice. Jul 12 00:27:36.715834 kubelet[2169]: W0712 00:27:36.715742 2169 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.16.163" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.16.163' and this object Jul 12 00:27:36.715949 kubelet[2169]: E0712 00:27:36.715841 2169 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172.31.16.163\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.31.16.163' and this object" logger="UnhandledError" Jul 12 00:27:36.749421 systemd[1]: Created slice kubepods-burstable-pode5dd3e4e_c6fe_437f_b248_2dc990938ed8.slice. 
Jul 12 00:27:36.860957 kubelet[2169]: E0712 00:27:36.860889 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:36.867181 kubelet[2169]: I0712 00:27:36.867129 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-config-path\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.867298 kubelet[2169]: I0712 00:27:36.867191 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-host-proc-sys-kernel\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.867298 kubelet[2169]: I0712 00:27:36.867235 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-hostproc\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.867298 kubelet[2169]: I0712 00:27:36.867273 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cni-path\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.867517 kubelet[2169]: I0712 00:27:36.867310 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-etc-cni-netd\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.867517 kubelet[2169]: I0712 00:27:36.867358 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-lib-modules\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.867517 kubelet[2169]: I0712 00:27:36.867395 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-hubble-tls\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.867517 kubelet[2169]: I0712 00:27:36.867433 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-bpf-maps\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.867517 kubelet[2169]: I0712 00:27:36.867489 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-clustermesh-secrets\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.867806 
kubelet[2169]: I0712 00:27:36.867528 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-host-proc-sys-net\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.867806 kubelet[2169]: I0712 00:27:36.867570 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-run\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.867806 kubelet[2169]: I0712 00:27:36.867620 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-xtables-lock\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.867806 kubelet[2169]: I0712 00:27:36.867658 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58987d51-ff57-4142-b8b5-26e935d381db-cilium-config-path\") pod \"cilium-operator-5d85765b45-vcwbz\" (UID: \"58987d51-ff57-4142-b8b5-26e935d381db\") " pod="kube-system/cilium-operator-5d85765b45-vcwbz" Jul 12 00:27:36.867806 kubelet[2169]: I0712 00:27:36.867694 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p28h2\" (UniqueName: \"kubernetes.io/projected/58987d51-ff57-4142-b8b5-26e935d381db-kube-api-access-p28h2\") pod \"cilium-operator-5d85765b45-vcwbz\" (UID: \"58987d51-ff57-4142-b8b5-26e935d381db\") " pod="kube-system/cilium-operator-5d85765b45-vcwbz" Jul 12 00:27:36.868142 kubelet[2169]: I0712 00:27:36.867729 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-cgroup\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.868142 kubelet[2169]: I0712 00:27:36.867768 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-ipsec-secrets\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:36.868142 kubelet[2169]: I0712 00:27:36.867802 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gxpj\" (UniqueName: \"kubernetes.io/projected/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-kube-api-access-7gxpj\") pod \"cilium-d8lm5\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " pod="kube-system/cilium-d8lm5" Jul 12 00:27:37.255573 kubelet[2169]: E0712 00:27:37.255518 2169 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-d8lm5" podUID="e5dd3e4e-c6fe-437f-b248-2dc990938ed8" Jul 12 00:27:37.861956 kubelet[2169]: E0712 00:27:37.861895 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 12 00:27:37.969073 kubelet[2169]: E0712 00:27:37.969037 2169 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 12 00:27:37.969342 kubelet[2169]: E0712 00:27:37.969318 2169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/58987d51-ff57-4142-b8b5-26e935d381db-cilium-config-path podName:58987d51-ff57-4142-b8b5-26e935d381db nodeName:}" failed. No retries permitted until 2025-07-12 00:27:38.469259754 +0000 UTC m=+76.931705179 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/58987d51-ff57-4142-b8b5-26e935d381db-cilium-config-path") pod "cilium-operator-5d85765b45-vcwbz" (UID: "58987d51-ff57-4142-b8b5-26e935d381db") : failed to sync configmap cache: timed out waiting for the condition Jul 12 00:27:37.973277 kubelet[2169]: E0712 00:27:37.973227 2169 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 12 00:27:37.973420 kubelet[2169]: E0712 00:27:37.973311 2169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-config-path podName:e5dd3e4e-c6fe-437f-b248-2dc990938ed8 nodeName:}" failed. No retries permitted until 2025-07-12 00:27:38.473288171 +0000 UTC m=+76.935733596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-config-path") pod "cilium-d8lm5" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8") : failed to sync configmap cache: timed out waiting for the condition Jul 12 00:27:37.993998 kubelet[2169]: E0712 00:27:37.993943 2169 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:27:38.379726 kubelet[2169]: I0712 00:27:38.379662 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gxpj\" (UniqueName: \"kubernetes.io/projected/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-kube-api-access-7gxpj\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.379726 kubelet[2169]: I0712 00:27:38.379730 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cni-path\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.379964 kubelet[2169]: I0712 00:27:38.379778 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-clustermesh-secrets\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.379964 kubelet[2169]: I0712 00:27:38.379906 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cni-path" (OuterVolumeSpecName: "cni-path") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:38.380449 kubelet[2169]: I0712 00:27:38.380410 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-host-proc-sys-net\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.380544 kubelet[2169]: I0712 00:27:38.380472 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-run\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.380544 kubelet[2169]: I0712 00:27:38.380508 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-xtables-lock\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.380663 kubelet[2169]: I0712 00:27:38.380542 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-cgroup\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.380663 kubelet[2169]: I0712 00:27:38.380579 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-bpf-maps\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.380663 kubelet[2169]: I0712 00:27:38.380617 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-ipsec-secrets\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.380663 kubelet[2169]: I0712 00:27:38.380656 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-host-proc-sys-kernel\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.380918 kubelet[2169]: I0712 00:27:38.380690 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-hostproc\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.380918 kubelet[2169]: I0712 00:27:38.380722 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-etc-cni-netd\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.380918 kubelet[2169]: I0712 00:27:38.380753 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-lib-modules\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 
00:27:38.380918 kubelet[2169]: I0712 00:27:38.380790 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-hubble-tls\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.380918 kubelet[2169]: I0712 00:27:38.380914 2169 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cni-path\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.384433 kubelet[2169]: I0712 00:27:38.384365 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:38.384605 kubelet[2169]: I0712 00:27:38.384449 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:38.384605 kubelet[2169]: I0712 00:27:38.384489 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:38.384605 kubelet[2169]: I0712 00:27:38.384526 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:38.384605 kubelet[2169]: I0712 00:27:38.384564 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:38.384880 kubelet[2169]: I0712 00:27:38.384604 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-hostproc" (OuterVolumeSpecName: "hostproc") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:38.388999 systemd[1]: var-lib-kubelet-pods-e5dd3e4e\x2dc6fe\x2d437f\x2db248\x2d2dc990938ed8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7gxpj.mount: Deactivated successfully. 
Jul 12 00:27:38.393041 systemd[1]: var-lib-kubelet-pods-e5dd3e4e\x2dc6fe\x2d437f\x2db248\x2d2dc990938ed8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:27:38.394917 kubelet[2169]: I0712 00:27:38.394853 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:38.395103 kubelet[2169]: I0712 00:27:38.395052 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:27:38.395197 kubelet[2169]: I0712 00:27:38.395122 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:38.395197 kubelet[2169]: I0712 00:27:38.395163 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:38.397021 kubelet[2169]: I0712 00:27:38.396943 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-kube-api-access-7gxpj" (OuterVolumeSpecName: "kube-api-access-7gxpj") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "kube-api-access-7gxpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:27:38.403610 systemd[1]: var-lib-kubelet-pods-e5dd3e4e\x2dc6fe\x2d437f\x2db248\x2d2dc990938ed8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:27:38.407466 systemd[1]: var-lib-kubelet-pods-e5dd3e4e\x2dc6fe\x2d437f\x2db248\x2d2dc990938ed8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 12 00:27:38.410148 kubelet[2169]: I0712 00:27:38.410069 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:27:38.410413 kubelet[2169]: I0712 00:27:38.410363 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:27:38.481623 kubelet[2169]: I0712 00:27:38.481567 2169 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-bpf-maps\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.481623 kubelet[2169]: I0712 00:27:38.481619 2169 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-ipsec-secrets\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.481849 kubelet[2169]: I0712 00:27:38.481645 2169 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-host-proc-sys-kernel\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.481849 kubelet[2169]: I0712 00:27:38.481666 2169 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-hostproc\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.481849 kubelet[2169]: I0712 00:27:38.481688 2169 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-etc-cni-netd\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.481849 kubelet[2169]: I0712 00:27:38.481708 2169 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-lib-modules\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.481849 kubelet[2169]: I0712 00:27:38.481728 2169 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-hubble-tls\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.481849 kubelet[2169]: I0712 00:27:38.481749 2169 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gxpj\" (UniqueName: \"kubernetes.io/projected/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-kube-api-access-7gxpj\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.481849 kubelet[2169]: I0712 00:27:38.481769 2169 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-clustermesh-secrets\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.481849 kubelet[2169]: I0712 00:27:38.481789 2169 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-host-proc-sys-net\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.482880 kubelet[2169]: I0712 00:27:38.481809 2169 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-run\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.483024 
kubelet[2169]: I0712 00:27:38.483000 2169 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-xtables-lock\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.483166 kubelet[2169]: I0712 00:27:38.483143 2169 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-cgroup\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.521464 env[1817]: time="2025-07-12T00:27:38.521390443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vcwbz,Uid:58987d51-ff57-4142-b8b5-26e935d381db,Namespace:kube-system,Attempt:0,}" Jul 12 00:27:38.548613 env[1817]: time="2025-07-12T00:27:38.548445080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:38.548996 env[1817]: time="2025-07-12T00:27:38.548886717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:38.549186 env[1817]: time="2025-07-12T00:27:38.549110944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:38.550941 env[1817]: time="2025-07-12T00:27:38.549666254Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bdbd50ccbad58b2def1a040cb4f21f2330793899e1643f38d4acbcba20c2eea pid=3955 runtime=io.containerd.runc.v2 Jul 12 00:27:38.572396 systemd[1]: Started cri-containerd-9bdbd50ccbad58b2def1a040cb4f21f2330793899e1643f38d4acbcba20c2eea.scope. Jul 12 00:27:38.653058 env[1817]: time="2025-07-12T00:27:38.652910335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vcwbz,Uid:58987d51-ff57-4142-b8b5-26e935d381db,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bdbd50ccbad58b2def1a040cb4f21f2330793899e1643f38d4acbcba20c2eea\"" Jul 12 00:27:38.656000 env[1817]: time="2025-07-12T00:27:38.655945665Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 12 00:27:38.684366 kubelet[2169]: I0712 00:27:38.684316 2169 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-config-path\") pod \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\" (UID: \"e5dd3e4e-c6fe-437f-b248-2dc990938ed8\") " Jul 12 00:27:38.690189 kubelet[2169]: I0712 00:27:38.690114 2169 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e5dd3e4e-c6fe-437f-b248-2dc990938ed8" (UID: "e5dd3e4e-c6fe-437f-b248-2dc990938ed8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:27:38.784888 kubelet[2169]: I0712 00:27:38.784807 2169 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5dd3e4e-c6fe-437f-b248-2dc990938ed8-cilium-config-path\") on node \"172.31.16.163\" DevicePath \"\"" Jul 12 00:27:38.862561 kubelet[2169]: E0712 00:27:38.862496 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:39.050654 systemd[1]: Removed slice kubepods-burstable-pode5dd3e4e_c6fe_437f_b248_2dc990938ed8.slice. Jul 12 00:27:39.331096 systemd[1]: Created slice kubepods-burstable-pode3e085b5_4fdd_40ad_9985_7b2b8cb55c80.slice. Jul 12 00:27:39.493018 kubelet[2169]: I0712 00:27:39.492976 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-hubble-tls\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.493250 kubelet[2169]: I0712 00:27:39.493220 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-cilium-run\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.493406 kubelet[2169]: I0712 00:27:39.493381 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-hostproc\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.493590 kubelet[2169]: I0712 00:27:39.493565 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-xtables-lock\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.493740 kubelet[2169]: I0712 00:27:39.493715 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-host-proc-sys-kernel\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.493984 kubelet[2169]: I0712 00:27:39.493958 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-etc-cni-netd\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.494155 kubelet[2169]: I0712 00:27:39.494117 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-cilium-ipsec-secrets\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.494323 kubelet[2169]: I0712 00:27:39.494285 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-host-proc-sys-net\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.494487 kubelet[2169]: I0712 00:27:39.494450 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-clustermesh-secrets\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.494650 kubelet[2169]: I0712 00:27:39.494614 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-bpf-maps\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.494834 kubelet[2169]: I0712 00:27:39.494792 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-cilium-cgroup\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.494981 kubelet[2169]: I0712 00:27:39.494957 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-cni-path\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.495142 kubelet[2169]: I0712 00:27:39.495117 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-lib-modules\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.495302 kubelet[2169]: I0712 00:27:39.495277 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-cilium-config-path\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.495488 kubelet[2169]: I0712 00:27:39.495451 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct5sm\" (UniqueName: \"kubernetes.io/projected/e3e085b5-4fdd-40ad-9985-7b2b8cb55c80-kube-api-access-ct5sm\") pod \"cilium-54pjf\" (UID: \"e3e085b5-4fdd-40ad-9985-7b2b8cb55c80\") " pod="kube-system/cilium-54pjf" Jul 12 00:27:39.863038 kubelet[2169]: E0712 00:27:39.862973 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:39.944579 env[1817]: time="2025-07-12T00:27:39.944482667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-54pjf,Uid:e3e085b5-4fdd-40ad-9985-7b2b8cb55c80,Namespace:kube-system,Attempt:0,}" Jul 12 00:27:39.975527 env[1817]: time="2025-07-12T00:27:39.975236923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:39.975527 env[1817]: time="2025-07-12T00:27:39.975306553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:39.975527 env[1817]: time="2025-07-12T00:27:39.975332031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:39.976102 env[1817]: time="2025-07-12T00:27:39.975989601Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/216f69de1c16330416290263c6e3fb1a8e5356e67694fc4dfb68b7339574b59a pid=4003 runtime=io.containerd.runc.v2 Jul 12 00:27:40.002573 systemd[1]: Started cri-containerd-216f69de1c16330416290263c6e3fb1a8e5356e67694fc4dfb68b7339574b59a.scope. Jul 12 00:27:40.069983 env[1817]: time="2025-07-12T00:27:40.069919117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-54pjf,Uid:e3e085b5-4fdd-40ad-9985-7b2b8cb55c80,Namespace:kube-system,Attempt:0,} returns sandbox id \"216f69de1c16330416290263c6e3fb1a8e5356e67694fc4dfb68b7339574b59a\"" Jul 12 00:27:40.074158 env[1817]: time="2025-07-12T00:27:40.074096000Z" level=info msg="CreateContainer within sandbox \"216f69de1c16330416290263c6e3fb1a8e5356e67694fc4dfb68b7339574b59a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:27:40.107110 env[1817]: time="2025-07-12T00:27:40.107044875Z" level=info msg="CreateContainer within sandbox \"216f69de1c16330416290263c6e3fb1a8e5356e67694fc4dfb68b7339574b59a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"baa99000c3a63010ff467e7cfa8b8f922d5d07cbcc1e2357a7f1663a12255f2c\"" Jul 12 00:27:40.108191 env[1817]: time="2025-07-12T00:27:40.108139869Z" level=info msg="StartContainer for \"baa99000c3a63010ff467e7cfa8b8f922d5d07cbcc1e2357a7f1663a12255f2c\"" Jul 12 00:27:40.136391 systemd[1]: Started cri-containerd-baa99000c3a63010ff467e7cfa8b8f922d5d07cbcc1e2357a7f1663a12255f2c.scope. Jul 12 00:27:40.221488 env[1817]: time="2025-07-12T00:27:40.221408401Z" level=info msg="StartContainer for \"baa99000c3a63010ff467e7cfa8b8f922d5d07cbcc1e2357a7f1663a12255f2c\" returns successfully" Jul 12 00:27:40.234875 systemd[1]: cri-containerd-baa99000c3a63010ff467e7cfa8b8f922d5d07cbcc1e2357a7f1663a12255f2c.scope: Deactivated successfully. 
Jul 12 00:27:40.302250 env[1817]: time="2025-07-12T00:27:40.302163502Z" level=info msg="shim disconnected" id=baa99000c3a63010ff467e7cfa8b8f922d5d07cbcc1e2357a7f1663a12255f2c Jul 12 00:27:40.302250 env[1817]: time="2025-07-12T00:27:40.302234140Z" level=warning msg="cleaning up after shim disconnected" id=baa99000c3a63010ff467e7cfa8b8f922d5d07cbcc1e2357a7f1663a12255f2c namespace=k8s.io Jul 12 00:27:40.302594 env[1817]: time="2025-07-12T00:27:40.302257289Z" level=info msg="cleaning up dead shim" Jul 12 00:27:40.319093 env[1817]: time="2025-07-12T00:27:40.319024479Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:27:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4092 runtime=io.containerd.runc.v2\n" Jul 12 00:27:40.863495 kubelet[2169]: E0712 00:27:40.863432 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:40.944118 env[1817]: time="2025-07-12T00:27:40.944033297Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:27:40.946224 env[1817]: time="2025-07-12T00:27:40.946160351Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:27:40.949145 env[1817]: time="2025-07-12T00:27:40.949093596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:27:40.950252 env[1817]: time="2025-07-12T00:27:40.950206052Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 12 00:27:40.954682 env[1817]: time="2025-07-12T00:27:40.954628871Z" level=info msg="CreateContainer within sandbox \"9bdbd50ccbad58b2def1a040cb4f21f2330793899e1643f38d4acbcba20c2eea\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 00:27:40.977326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664701696.mount: Deactivated successfully. Jul 12 00:27:40.985783 env[1817]: time="2025-07-12T00:27:40.985720125Z" level=info msg="CreateContainer within sandbox \"9bdbd50ccbad58b2def1a040cb4f21f2330793899e1643f38d4acbcba20c2eea\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"30224f8c2f2977f9188657b15cf80a0ccadc5bdfd651db64bfc82e5f4eea296c\"" Jul 12 00:27:40.986952 env[1817]: time="2025-07-12T00:27:40.986906782Z" level=info msg="StartContainer for \"30224f8c2f2977f9188657b15cf80a0ccadc5bdfd651db64bfc82e5f4eea296c\"" Jul 12 00:27:41.023633 systemd[1]: Started cri-containerd-30224f8c2f2977f9188657b15cf80a0ccadc5bdfd651db64bfc82e5f4eea296c.scope. 
Jul 12 00:27:41.043201 kubelet[2169]: I0712 00:27:41.043153 2169 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5dd3e4e-c6fe-437f-b248-2dc990938ed8" path="/var/lib/kubelet/pods/e5dd3e4e-c6fe-437f-b248-2dc990938ed8/volumes" Jul 12 00:27:41.094559 env[1817]: time="2025-07-12T00:27:41.094496495Z" level=info msg="StartContainer for \"30224f8c2f2977f9188657b15cf80a0ccadc5bdfd651db64bfc82e5f4eea296c\" returns successfully" Jul 12 00:27:41.275341 env[1817]: time="2025-07-12T00:27:41.275191334Z" level=info msg="CreateContainer within sandbox \"216f69de1c16330416290263c6e3fb1a8e5356e67694fc4dfb68b7339574b59a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:27:41.304786 env[1817]: time="2025-07-12T00:27:41.304701339Z" level=info msg="CreateContainer within sandbox \"216f69de1c16330416290263c6e3fb1a8e5356e67694fc4dfb68b7339574b59a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f14089fcad03c89fe6ccc1ccd48963989124947015deae19bed4b9062bc5dcb5\"" Jul 12 00:27:41.306424 env[1817]: time="2025-07-12T00:27:41.306372283Z" level=info msg="StartContainer for \"f14089fcad03c89fe6ccc1ccd48963989124947015deae19bed4b9062bc5dcb5\"" Jul 12 00:27:41.351383 systemd[1]: Started cri-containerd-f14089fcad03c89fe6ccc1ccd48963989124947015deae19bed4b9062bc5dcb5.scope. Jul 12 00:27:41.449045 env[1817]: time="2025-07-12T00:27:41.448687554Z" level=info msg="StartContainer for \"f14089fcad03c89fe6ccc1ccd48963989124947015deae19bed4b9062bc5dcb5\" returns successfully" Jul 12 00:27:41.474196 systemd[1]: cri-containerd-f14089fcad03c89fe6ccc1ccd48963989124947015deae19bed4b9062bc5dcb5.scope: Deactivated successfully. Jul 12 00:27:41.506992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f14089fcad03c89fe6ccc1ccd48963989124947015deae19bed4b9062bc5dcb5-rootfs.mount: Deactivated successfully. Jul 12 00:27:41.576090 env[1817]: time="2025-07-12T00:27:41.576023459Z" level=info msg="shim disconnected" id=f14089fcad03c89fe6ccc1ccd48963989124947015deae19bed4b9062bc5dcb5 Jul 12 00:27:41.576474 env[1817]: time="2025-07-12T00:27:41.576438993Z" level=warning msg="cleaning up after shim disconnected" id=f14089fcad03c89fe6ccc1ccd48963989124947015deae19bed4b9062bc5dcb5 namespace=k8s.io Jul 12 00:27:41.576598 env[1817]: time="2025-07-12T00:27:41.576570128Z" level=info msg="cleaning up dead shim" Jul 12 00:27:41.590018 env[1817]: time="2025-07-12T00:27:41.589962713Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:27:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4191 runtime=io.containerd.runc.v2\n" Jul 12 00:27:41.863985 kubelet[2169]: E0712 00:27:41.863848 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:42.290732 env[1817]: time="2025-07-12T00:27:42.290591114Z" level=info msg="CreateContainer within sandbox \"216f69de1c16330416290263c6e3fb1a8e5356e67694fc4dfb68b7339574b59a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:27:42.315747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173219854.mount: Deactivated successfully. 
Jul 12 00:27:42.319845 kubelet[2169]: I0712 00:27:42.319744 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-vcwbz" podStartSLOduration=4.022518502 podStartE2EDuration="6.319720643s" podCreationTimestamp="2025-07-12 00:27:36 +0000 UTC" firstStartedPulling="2025-07-12 00:27:38.655079505 +0000 UTC m=+77.117524930" lastFinishedPulling="2025-07-12 00:27:40.952281646 +0000 UTC m=+79.414727071" observedRunningTime="2025-07-12 00:27:41.314639047 +0000 UTC m=+79.777084508" watchObservedRunningTime="2025-07-12 00:27:42.319720643 +0000 UTC m=+80.782166080" Jul 12 00:27:42.329249 env[1817]: time="2025-07-12T00:27:42.329138510Z" level=info msg="CreateContainer within sandbox \"216f69de1c16330416290263c6e3fb1a8e5356e67694fc4dfb68b7339574b59a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"35fcf17a3beb00f0f9f25c3b1498d5ae0c22541169ffb9114f374d3e2da30daf\"" Jul 12 00:27:42.330416 env[1817]: time="2025-07-12T00:27:42.330365417Z" level=info msg="StartContainer for \"35fcf17a3beb00f0f9f25c3b1498d5ae0c22541169ffb9114f374d3e2da30daf\"" Jul 12 00:27:42.362633 systemd[1]: Started cri-containerd-35fcf17a3beb00f0f9f25c3b1498d5ae0c22541169ffb9114f374d3e2da30daf.scope. Jul 12 00:27:42.446923 env[1817]: time="2025-07-12T00:27:42.446259663Z" level=info msg="StartContainer for \"35fcf17a3beb00f0f9f25c3b1498d5ae0c22541169ffb9114f374d3e2da30daf\" returns successfully" Jul 12 00:27:42.449347 systemd[1]: cri-containerd-35fcf17a3beb00f0f9f25c3b1498d5ae0c22541169ffb9114f374d3e2da30daf.scope: Deactivated successfully. Jul 12 00:27:42.484261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35fcf17a3beb00f0f9f25c3b1498d5ae0c22541169ffb9114f374d3e2da30daf-rootfs.mount: Deactivated successfully. Jul 12 00:27:42.490991 env[1817]: time="2025-07-12T00:27:42.490890758Z" level=info msg="shim disconnected" id=35fcf17a3beb00f0f9f25c3b1498d5ae0c22541169ffb9114f374d3e2da30daf Jul 12 00:27:42.490991 env[1817]: time="2025-07-12T00:27:42.490974621Z" level=warning msg="cleaning up after shim disconnected" id=35fcf17a3beb00f0f9f25c3b1498d5ae0c22541169ffb9114f374d3e2da30daf namespace=k8s.io Jul 12 00:27:42.491351 env[1817]: time="2025-07-12T00:27:42.490997207Z" level=info msg="cleaning up dead shim" Jul 12 00:27:42.505534 env[1817]: time="2025-07-12T00:27:42.505463053Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:27:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4249 runtime=io.containerd.runc.v2\n" Jul 12 00:27:42.801304 kubelet[2169]: E0712 00:27:42.801236 2169 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:42.864013 kubelet[2169]: E0712 00:27:42.863968 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:42.995022 kubelet[2169]: E0712 00:27:42.994969 2169 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:27:43.297620 env[1817]: time="2025-07-12T00:27:43.297550749Z" level=info msg="CreateContainer within sandbox \"216f69de1c16330416290263c6e3fb1a8e5356e67694fc4dfb68b7339574b59a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:27:43.318840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3438559407.mount: Deactivated successfully. 
Jul 12 00:27:43.327993 env[1817]: time="2025-07-12T00:27:43.327883691Z" level=info msg="CreateContainer within sandbox \"216f69de1c16330416290263c6e3fb1a8e5356e67694fc4dfb68b7339574b59a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"34ee927dadbbe7b98243723e0231a56a6061207c18ed31e2215f2e3b9b4063b6\"" Jul 12 00:27:43.329253 env[1817]: time="2025-07-12T00:27:43.329207853Z" level=info msg="StartContainer for \"34ee927dadbbe7b98243723e0231a56a6061207c18ed31e2215f2e3b9b4063b6\"" Jul 12 00:27:43.359893 systemd[1]: Started cri-containerd-34ee927dadbbe7b98243723e0231a56a6061207c18ed31e2215f2e3b9b4063b6.scope. Jul 12 00:27:43.430310 systemd[1]: cri-containerd-34ee927dadbbe7b98243723e0231a56a6061207c18ed31e2215f2e3b9b4063b6.scope: Deactivated successfully. Jul 12 00:27:43.432258 env[1817]: time="2025-07-12T00:27:43.432197892Z" level=info msg="StartContainer for \"34ee927dadbbe7b98243723e0231a56a6061207c18ed31e2215f2e3b9b4063b6\" returns successfully" Jul 12 00:27:43.466981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34ee927dadbbe7b98243723e0231a56a6061207c18ed31e2215f2e3b9b4063b6-rootfs.mount: Deactivated successfully. Jul 12 00:27:43.474907 env[1817]: time="2025-07-12T00:27:43.474837049Z" level=info msg="shim disconnected" id=34ee927dadbbe7b98243723e0231a56a6061207c18ed31e2215f2e3b9b4063b6 Jul 12 00:27:43.475206 env[1817]: time="2025-07-12T00:27:43.474908827Z" level=warning msg="cleaning up after shim disconnected" id=34ee927dadbbe7b98243723e0231a56a6061207c18ed31e2215f2e3b9b4063b6 namespace=k8s.io Jul 12 00:27:43.475206 env[1817]: time="2025-07-12T00:27:43.474931581Z" level=info msg="cleaning up dead shim" Jul 12 00:27:43.488460 env[1817]: time="2025-07-12T00:27:43.488403870Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:27:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4303 runtime=io.containerd.runc.v2\n" Jul 12 00:27:43.864187 kubelet[2169]: E0712 00:27:43.864128 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:44.303099 env[1817]: time="2025-07-12T00:27:44.301262550Z" level=info msg="CreateContainer within sandbox \"216f69de1c16330416290263c6e3fb1a8e5356e67694fc4dfb68b7339574b59a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:27:44.327393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3167129978.mount: Deactivated successfully. Jul 12 00:27:44.338627 env[1817]: time="2025-07-12T00:27:44.338525290Z" level=info msg="CreateContainer within sandbox \"216f69de1c16330416290263c6e3fb1a8e5356e67694fc4dfb68b7339574b59a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89c3a342902c20bec5d6db017d4e202a6a3d5d751b0c85f23fd4129c74f3a6ba\"" Jul 12 00:27:44.339958 env[1817]: time="2025-07-12T00:27:44.339898439Z" level=info msg="StartContainer for \"89c3a342902c20bec5d6db017d4e202a6a3d5d751b0c85f23fd4129c74f3a6ba\"" Jul 12 00:27:44.372430 systemd[1]: Started cri-containerd-89c3a342902c20bec5d6db017d4e202a6a3d5d751b0c85f23fd4129c74f3a6ba.scope. Jul 12 00:27:44.462471 env[1817]: time="2025-07-12T00:27:44.462367805Z" level=info msg="StartContainer for \"89c3a342902c20bec5d6db017d4e202a6a3d5d751b0c85f23fd4129c74f3a6ba\" returns successfully" Jul 12 00:27:44.497650 systemd[1]: run-containerd-runc-k8s.io-89c3a342902c20bec5d6db017d4e202a6a3d5d751b0c85f23fd4129c74f3a6ba-runc.wPzleA.mount: Deactivated successfully. 
Jul 12 00:27:44.720183 kubelet[2169]: I0712 00:27:44.718947 2169 setters.go:600] "Node became not ready" node="172.31.16.163" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:27:44Z","lastTransitionTime":"2025-07-12T00:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 12 00:27:44.864317 kubelet[2169]: E0712 00:27:44.864263 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:45.187848 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 12 00:27:45.866024 kubelet[2169]: E0712 00:27:45.865956 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:46.038101 systemd[1]: run-containerd-runc-k8s.io-89c3a342902c20bec5d6db017d4e202a6a3d5d751b0c85f23fd4129c74f3a6ba-runc.2zOUZL.mount: Deactivated successfully. Jul 12 00:27:46.866899 kubelet[2169]: E0712 00:27:46.866838 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:47.867189 kubelet[2169]: E0712 00:27:47.867120 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:48.371155 systemd[1]: run-containerd-runc-k8s.io-89c3a342902c20bec5d6db017d4e202a6a3d5d751b0c85f23fd4129c74f3a6ba-runc.VLHuiZ.mount: Deactivated successfully. Jul 12 00:27:48.867774 kubelet[2169]: E0712 00:27:48.867719 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:49.233133 systemd-networkd[1534]: lxc_health: Link UP Jul 12 00:27:49.245916 (udev-worker)[4871]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:27:49.263966 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 12 00:27:49.265142 systemd-networkd[1534]: lxc_health: Gained carrier Jul 12 00:27:49.868166 kubelet[2169]: E0712 00:27:49.868118 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:49.977704 kubelet[2169]: I0712 00:27:49.977589 2169 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-54pjf" podStartSLOduration=10.97756795 podStartE2EDuration="10.97756795s" podCreationTimestamp="2025-07-12 00:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:27:45.344779685 +0000 UTC m=+83.807225122" watchObservedRunningTime="2025-07-12 00:27:49.97756795 +0000 UTC m=+88.440013387" Jul 12 00:27:50.639281 systemd[1]: run-containerd-runc-k8s.io-89c3a342902c20bec5d6db017d4e202a6a3d5d751b0c85f23fd4129c74f3a6ba-runc.g4BhaV.mount: Deactivated successfully. 
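In the setters.go:600 entry above, the kubelet marks node 172.31.16.163 as not ready because the Cilium CNI plugin has not finished initialising, and it logs the full Ready condition as JSON. Decoding exactly that payload with the standard library (an ad-hoc struct is used here instead of the canonical k8s.io/api/core/v1.NodeCondition type, purely to keep the snippet dependency-free):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"time"
)

// Ad-hoc mirror of the fields printed in the log entry; the canonical type
// is k8s.io/api/core/v1.NodeCondition.
type nodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// Condition JSON copied verbatim from the setters.go:600 entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:27:44Z","lastTransitionTime":"2025-07-12T00:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`

	var cond nodeCondition
	if err := json.Unmarshal([]byte(raw), &cond); err != nil {
		log.Fatal(err)
	}

	ready := cond.Type == "Ready" && cond.Status == "True"
	fmt.Printf("node ready=%v reason=%s since=%s\n",
		ready, cond.Reason, cond.LastTransitionTime.Format(time.RFC3339))
}
```

The lxc_health link coming up a few seconds later in this same entry is consistent with the cilium-agent container started above finishing its datapath setup, after which the CNI would normally report ready and the condition flip back to True.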
Jul 12 00:27:50.868991 kubelet[2169]: E0712 00:27:50.868894 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:51.081536 systemd-networkd[1534]: lxc_health: Gained IPv6LL Jul 12 00:27:51.869867 kubelet[2169]: E0712 00:27:51.869798 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:52.871213 kubelet[2169]: E0712 00:27:52.871166 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:53.064691 systemd[1]: run-containerd-runc-k8s.io-89c3a342902c20bec5d6db017d4e202a6a3d5d751b0c85f23fd4129c74f3a6ba-runc.35VNbP.mount: Deactivated successfully. Jul 12 00:27:53.872498 kubelet[2169]: E0712 00:27:53.872451 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:54.873679 kubelet[2169]: E0712 00:27:54.873609 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:55.391967 systemd[1]: run-containerd-runc-k8s.io-89c3a342902c20bec5d6db017d4e202a6a3d5d751b0c85f23fd4129c74f3a6ba-runc.Ch9R0V.mount: Deactivated successfully. Jul 12 00:27:55.874154 kubelet[2169]: E0712 00:27:55.874081 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:56.875003 kubelet[2169]: E0712 00:27:56.874931 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:57.875773 kubelet[2169]: E0712 00:27:57.875704 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:58.876745 kubelet[2169]: E0712 00:27:58.876680 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:27:59.877238 kubelet[2169]: E0712 00:27:59.877167 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:00.877514 kubelet[2169]: E0712 00:28:00.877447 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:01.877891 kubelet[2169]: E0712 00:28:01.877787 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:02.801619 kubelet[2169]: E0712 00:28:02.801563 2169 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:02.878632 kubelet[2169]: E0712 00:28:02.878587 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:03.879443 kubelet[2169]: E0712 00:28:03.879382 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:04.880408 kubelet[2169]: E0712 00:28:04.880320 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:05.881265 kubelet[2169]: E0712 00:28:05.881196 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:06.881432 kubelet[2169]: E0712 00:28:06.881378 2169 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:07.882995 kubelet[2169]: E0712 00:28:07.882927 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:08.883728 kubelet[2169]: E0712 00:28:08.883684 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:09.884947 kubelet[2169]: E0712 00:28:09.884887 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:10.885063 kubelet[2169]: E0712 00:28:10.885023 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:11.886794 kubelet[2169]: E0712 00:28:11.886730 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:12.887346 kubelet[2169]: E0712 00:28:12.887259 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:13.887617 kubelet[2169]: E0712 00:28:13.887574 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:14.888840 kubelet[2169]: E0712 00:28:14.888769 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:14.994343 kubelet[2169]: E0712 00:28:14.994295 2169 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.16.163)" Jul 12 00:28:15.889559 kubelet[2169]: E0712 00:28:15.889495 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:16.891289 kubelet[2169]: E0712 00:28:16.891248 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:17.892658 kubelet[2169]: E0712 00:28:17.892618 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:18.893972 kubelet[2169]: E0712 00:28:18.893906 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:19.894792 kubelet[2169]: E0712 00:28:19.894748 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:20.895537 kubelet[2169]: E0712 00:28:20.895470 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:21.897441 kubelet[2169]: E0712 00:28:21.897398 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:22.801613 kubelet[2169]: E0712 00:28:22.801572 2169 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:22.877284 env[1817]: time="2025-07-12T00:28:22.877208699Z" level=info msg="StopPodSandbox for \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\"" Jul 12 00:28:22.877935 env[1817]: time="2025-07-12T00:28:22.877484804Z" level=info msg="TearDown 
network for sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" successfully" Jul 12 00:28:22.877935 env[1817]: time="2025-07-12T00:28:22.877568755Z" level=info msg="StopPodSandbox for \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" returns successfully" Jul 12 00:28:22.878718 env[1817]: time="2025-07-12T00:28:22.878645190Z" level=info msg="RemovePodSandbox for \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\"" Jul 12 00:28:22.878867 env[1817]: time="2025-07-12T00:28:22.878726345Z" level=info msg="Forcibly stopping sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\"" Jul 12 00:28:22.878963 env[1817]: time="2025-07-12T00:28:22.878929502Z" level=info msg="TearDown network for sandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" successfully" Jul 12 00:28:22.884596 env[1817]: time="2025-07-12T00:28:22.884519742Z" level=info msg="RemovePodSandbox \"67d29711121d7b41547452228bd6caf4b7767798402261b49badef0641797fda\" returns successfully" Jul 12 00:28:22.898414 kubelet[2169]: E0712 00:28:22.898381 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:23.899121 kubelet[2169]: E0712 00:28:23.899071 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:24.900264 kubelet[2169]: E0712 00:28:24.900212 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:24.994968 kubelet[2169]: E0712 00:28:24.994917 2169 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.163?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 12 00:28:25.900877 kubelet[2169]: E0712 00:28:25.900796 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:26.901456 kubelet[2169]: E0712 00:28:26.901388 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:26.948194 kubelet[2169]: E0712 00:28:26.948133 2169 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.163?timeout=10s\": unexpected EOF" Jul 12 00:28:26.953978 kubelet[2169]: E0712 00:28:26.953926 2169 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.163?timeout=10s\": read tcp 172.31.16.163:50688->172.31.29.19:6443: read: connection reset by peer" Jul 12 00:28:26.954879 kubelet[2169]: E0712 00:28:26.954784 2169 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.163?timeout=10s\": dial tcp 172.31.29.19:6443: connect: connection refused" Jul 12 00:28:26.954879 kubelet[2169]: I0712 00:28:26.954870 2169 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jul 12 00:28:26.955941 kubelet[2169]: E0712 00:28:26.955896 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.29.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.163?timeout=10s\": dial tcp 172.31.29.19:6443: connect: connection refused" interval="200ms" Jul 12 00:28:27.157110 kubelet[2169]: E0712 00:28:27.156944 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.163?timeout=10s\": dial tcp 172.31.29.19:6443: connect: connection refused" interval="400ms" Jul 12 00:28:27.558743 kubelet[2169]: E0712 00:28:27.558684 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.163?timeout=10s\": dial tcp 172.31.29.19:6443: connect: connection refused" interval="800ms" Jul 12 00:28:27.901858 kubelet[2169]: E0712 00:28:27.901712 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:28.902689 kubelet[2169]: E0712 00:28:28.902624 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:29.902830 kubelet[2169]: E0712 00:28:29.902757 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:30.903723 kubelet[2169]: E0712 00:28:30.903681 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:31.904731 kubelet[2169]: E0712 00:28:31.904689 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:32.905706 kubelet[2169]: E0712 00:28:32.905644 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:33.906625 kubelet[2169]: E0712 00:28:33.906583 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:34.908273 kubelet[2169]: E0712 00:28:34.908210 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:35.909348 kubelet[2169]: E0712 00:28:35.909303 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:36.910153 kubelet[2169]: E0712 00:28:36.910112 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:37.911220 kubelet[2169]: E0712 00:28:37.911164 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:38.360014 kubelet[2169]: E0712 00:28:38.359963 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.163?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Jul 12 00:28:38.912331 kubelet[2169]: E0712 00:28:38.912285 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:39.913372 kubelet[2169]: E0712 00:28:39.913326 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 12 00:28:40.914654 kubelet[2169]: E0712 00:28:40.914614 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:41.916157 kubelet[2169]: E0712 00:28:41.916108 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:42.802112 kubelet[2169]: E0712 00:28:42.802072 2169 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:42.917370 kubelet[2169]: E0712 00:28:42.917315 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:43.917782 kubelet[2169]: E0712 00:28:43.917714 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:44.918873 kubelet[2169]: E0712 00:28:44.918830 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:45.919571 kubelet[2169]: E0712 00:28:45.919508 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:46.920349 kubelet[2169]: E0712 00:28:46.920308 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:47.921350 kubelet[2169]: E0712 00:28:47.921274 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:48.922385 kubelet[2169]: E0712 00:28:48.922324 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:49.923114 kubelet[2169]: E0712 00:28:49.923046 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:49.961860 kubelet[2169]: E0712 00:28:49.961781 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.163?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jul 12 00:28:50.923727 kubelet[2169]: E0712 00:28:50.923661 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:28:51.924790 kubelet[2169]: E0712 00:28:51.924724 2169 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"