Jul 12 00:23:59.017329 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 12 00:23:59.017367 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Jul 11 23:15:18 -00 2025
Jul 12 00:23:59.017389 kernel: efi: EFI v2.70 by EDK II
Jul 12 00:23:59.017405 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x716fcf98
Jul 12 00:23:59.017419 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:23:59.017433 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 12 00:23:59.017449 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 12 00:23:59.017463 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 12 00:23:59.017477 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 12 00:23:59.017491 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 12 00:23:59.017510 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 12 00:23:59.017524 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 12 00:23:59.017538 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 12 00:23:59.017553 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 12 00:23:59.017624 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 12 00:23:59.017646 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 12 00:23:59.017662 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 12 00:23:59.017677 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 12 00:23:59.017692 kernel: printk: bootconsole [uart0] enabled
Jul 12 00:23:59.017707 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:23:59.017723 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 12 00:23:59.017738 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Jul 12 00:23:59.017754 kernel: Zone ranges:
Jul 12 00:23:59.017769 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 12 00:23:59.017784 kernel: DMA32 empty
Jul 12 00:23:59.017799 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 12 00:23:59.017818 kernel: Movable zone start for each node
Jul 12 00:23:59.017834 kernel: Early memory node ranges
Jul 12 00:23:59.017849 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 12 00:23:59.017864 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 12 00:23:59.017879 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 12 00:23:59.017895 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 12 00:23:59.017910 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 12 00:23:59.021425 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 12 00:23:59.021458 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 12 00:23:59.021474 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 12 00:23:59.021490 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 12 00:23:59.021505 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 12 00:23:59.021529 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:23:59.021544 kernel: psci: PSCIv1.0 detected in firmware.
Jul 12 00:23:59.021566 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:23:59.021583 kernel: psci: Trusted OS migration not required
Jul 12 00:23:59.021599 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:23:59.021620 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jul 12 00:23:59.021636 kernel: ACPI: SRAT not present
Jul 12 00:23:59.021653 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 12 00:23:59.021669 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 12 00:23:59.021685 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 12 00:23:59.021701 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:23:59.021717 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:23:59.021733 kernel: CPU features: detected: Spectre-v2
Jul 12 00:23:59.021750 kernel: CPU features: detected: Spectre-v3a
Jul 12 00:23:59.021766 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:23:59.021781 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:23:59.021802 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:23:59.021819 kernel: CPU features: detected: ARM erratum 1742098
Jul 12 00:23:59.021835 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 12 00:23:59.021851 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jul 12 00:23:59.021867 kernel: Policy zone: Normal
Jul 12 00:23:59.021907 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65
Jul 12 00:23:59.023918 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:23:59.023985 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:23:59.024002 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:23:59.024018 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:23:59.024040 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 12 00:23:59.024058 kernel: Memory: 3824460K/4030464K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 206004K reserved, 0K cma-reserved)
Jul 12 00:23:59.024075 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 12 00:23:59.024091 kernel: trace event string verifier disabled
Jul 12 00:23:59.024107 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:23:59.024124 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:23:59.024140 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 12 00:23:59.024156 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:23:59.024173 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:23:59.024189 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:23:59.024205 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 12 00:23:59.024221 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:23:59.024242 kernel: GICv3: 96 SPIs implemented
Jul 12 00:23:59.024258 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:23:59.024274 kernel: GICv3: Distributor has no Range Selector support
Jul 12 00:23:59.024290 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:23:59.024306 kernel: GICv3: 16 PPIs implemented
Jul 12 00:23:59.024322 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 12 00:23:59.024338 kernel: ACPI: SRAT not present
Jul 12 00:23:59.024353 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 12 00:23:59.024370 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:23:59.024386 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:23:59.024402 kernel: GICv3: using LPI property table @0x00000004000b0000
Jul 12 00:23:59.024422 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 12 00:23:59.024439 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Jul 12 00:23:59.024455 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 12 00:23:59.024472 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 12 00:23:59.024488 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 12 00:23:59.024504 kernel: Console: colour dummy device 80x25
Jul 12 00:23:59.024521 kernel: printk: console [tty1] enabled
Jul 12 00:23:59.024537 kernel: ACPI: Core revision 20210730
Jul 12 00:23:59.024554 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 12 00:23:59.024571 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:23:59.024593 kernel: LSM: Security Framework initializing
Jul 12 00:23:59.024610 kernel: SELinux: Initializing.
Jul 12 00:23:59.024627 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:23:59.024644 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:23:59.024660 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:23:59.024677 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 12 00:23:59.024693 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 12 00:23:59.024709 kernel: Remapping and enabling EFI services.
Jul 12 00:23:59.024726 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:23:59.024742 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:23:59.024763 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 12 00:23:59.024779 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Jul 12 00:23:59.024796 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 12 00:23:59.024812 kernel: smp: Brought up 1 node, 2 CPUs
Jul 12 00:23:59.024828 kernel: SMP: Total of 2 processors activated.
Jul 12 00:23:59.024845 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:23:59.024861 kernel: CPU features: detected: 32-bit EL1 Support
Jul 12 00:23:59.024878 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:23:59.024894 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:23:59.024914 kernel: alternatives: patching kernel code
Jul 12 00:23:59.024960 kernel: devtmpfs: initialized
Jul 12 00:23:59.024990 kernel: KASLR disabled due to lack of seed
Jul 12 00:23:59.025013 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:23:59.025030 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 12 00:23:59.025047 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:23:59.025065 kernel: SMBIOS 3.0.0 present.
Jul 12 00:23:59.025083 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 12 00:23:59.025100 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:23:59.025117 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:23:59.025136 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:23:59.025159 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:23:59.025177 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:23:59.025195 kernel: audit: type=2000 audit(0.295:1): state=initialized audit_enabled=0 res=1
Jul 12 00:23:59.025212 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:23:59.025229 kernel: cpuidle: using governor menu
Jul 12 00:23:59.025250 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:23:59.025268 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:23:59.025285 kernel: ACPI: bus type PCI registered
Jul 12 00:23:59.025302 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:23:59.025319 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:23:59.025336 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:23:59.025355 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:23:59.025372 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:23:59.025389 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:23:59.025410 kernel: cryptd: max_cpu_qlen set to 1000
Jul 12 00:23:59.025428 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:23:59.025445 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:23:59.025462 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:23:59.025479 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:23:59.025496 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 12 00:23:59.025514 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 12 00:23:59.025531 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 12 00:23:59.025549 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:23:59.025566 kernel: ACPI: Interpreter enabled
Jul 12 00:23:59.025587 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:23:59.025604 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:23:59.025621 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 12 00:23:59.025912 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:23:59.026173 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:23:59.026369 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:23:59.026557 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 12 00:23:59.026754 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 12 00:23:59.026777 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 12 00:23:59.026795 kernel: acpiphp: Slot [1] registered
Jul 12 00:23:59.026812 kernel: acpiphp: Slot [2] registered
Jul 12 00:23:59.026829 kernel: acpiphp: Slot [3] registered
Jul 12 00:23:59.026846 kernel: acpiphp: Slot [4] registered
Jul 12 00:23:59.026863 kernel: acpiphp: Slot [5] registered
Jul 12 00:23:59.026879 kernel: acpiphp: Slot [6] registered
Jul 12 00:23:59.026896 kernel: acpiphp: Slot [7] registered
Jul 12 00:23:59.026918 kernel: acpiphp: Slot [8] registered
Jul 12 00:23:59.029958 kernel: acpiphp: Slot [9] registered
Jul 12 00:23:59.029980 kernel: acpiphp: Slot [10] registered
Jul 12 00:23:59.029998 kernel: acpiphp: Slot [11] registered
Jul 12 00:23:59.030016 kernel: acpiphp: Slot [12] registered
Jul 12 00:23:59.030033 kernel: acpiphp: Slot [13] registered
Jul 12 00:23:59.030072 kernel: acpiphp: Slot [14] registered
Jul 12 00:23:59.030091 kernel: acpiphp: Slot [15] registered
Jul 12 00:23:59.030108 kernel: acpiphp: Slot [16] registered
Jul 12 00:23:59.030133 kernel: acpiphp: Slot [17] registered
Jul 12 00:23:59.030151 kernel: acpiphp: Slot [18] registered
Jul 12 00:23:59.030168 kernel: acpiphp: Slot [19] registered
Jul 12 00:23:59.030185 kernel: acpiphp: Slot [20] registered
Jul 12 00:23:59.030202 kernel: acpiphp: Slot [21] registered
Jul 12 00:23:59.030219 kernel: acpiphp: Slot [22] registered
Jul 12 00:23:59.030236 kernel: acpiphp: Slot [23] registered
Jul 12 00:23:59.030253 kernel: acpiphp: Slot [24] registered
Jul 12 00:23:59.030270 kernel: acpiphp: Slot [25] registered
Jul 12 00:23:59.030286 kernel: acpiphp: Slot [26] registered
Jul 12 00:23:59.030308 kernel: acpiphp: Slot [27] registered
Jul 12 00:23:59.030325 kernel: acpiphp: Slot [28] registered
Jul 12 00:23:59.030341 kernel: acpiphp: Slot [29] registered
Jul 12 00:23:59.030358 kernel: acpiphp: Slot [30] registered
Jul 12 00:23:59.030375 kernel: acpiphp: Slot [31] registered
Jul 12 00:23:59.030392 kernel: PCI host bridge to bus 0000:00
Jul 12 00:23:59.030632 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 12 00:23:59.030810 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:23:59.039363 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 12 00:23:59.039622 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 12 00:23:59.039864 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 12 00:23:59.041010 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 12 00:23:59.041242 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 12 00:23:59.041454 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 12 00:23:59.041668 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 12 00:23:59.041862 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 12 00:23:59.042132 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 12 00:23:59.042344 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 12 00:23:59.042545 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 12 00:23:59.042742 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 12 00:23:59.042967 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 12 00:23:59.043180 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 12 00:23:59.043374 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 12 00:23:59.043573 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 12 00:23:59.043770 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 12 00:23:59.044006 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 12 00:23:59.044195 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 12 00:23:59.044373 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:23:59.044554 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 12 00:23:59.044578 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:23:59.044597 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:23:59.044614 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:23:59.044632 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:23:59.044649 kernel: iommu: Default domain type: Translated
Jul 12 00:23:59.044666 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:23:59.044683 kernel: vgaarb: loaded
Jul 12 00:23:59.044700 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 12 00:23:59.044722 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 12 00:23:59.044740 kernel: PTP clock support registered
Jul 12 00:23:59.044757 kernel: Registered efivars operations
Jul 12 00:23:59.044774 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:23:59.044791 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:23:59.044808 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:23:59.044825 kernel: pnp: PnP ACPI init
Jul 12 00:23:59.051178 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 12 00:23:59.051233 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:23:59.051252 kernel: NET: Registered PF_INET protocol family
Jul 12 00:23:59.051271 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:23:59.051289 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:23:59.051307 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:23:59.051325 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:23:59.051343 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 12 00:23:59.051360 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:23:59.051378 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:23:59.051399 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:23:59.051417 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:23:59.051434 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:23:59.051451 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 12 00:23:59.051469 kernel: kvm [1]: HYP mode not available
Jul 12 00:23:59.051486 kernel: Initialise system trusted keyrings
Jul 12 00:23:59.051504 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:23:59.051521 kernel: Key type asymmetric registered
Jul 12 00:23:59.051538 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:23:59.051559 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 12 00:23:59.051577 kernel: io scheduler mq-deadline registered
Jul 12 00:23:59.051594 kernel: io scheduler kyber registered
Jul 12 00:23:59.051611 kernel: io scheduler bfq registered
Jul 12 00:23:59.051828 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 12 00:23:59.051854 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:23:59.051872 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:23:59.051889 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 12 00:23:59.051907 kernel: ACPI: button: Sleep Button [SLPB]
Jul 12 00:23:59.051950 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:23:59.051971 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 12 00:23:59.052177 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 12 00:23:59.052202 kernel: printk: console [ttyS0] disabled
Jul 12 00:23:59.052221 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 12 00:23:59.052238 kernel: printk: console [ttyS0] enabled
Jul 12 00:23:59.052256 kernel: printk: bootconsole [uart0] disabled
Jul 12 00:23:59.052273 kernel: thunder_xcv, ver 1.0
Jul 12 00:23:59.052290 kernel: thunder_bgx, ver 1.0
Jul 12 00:23:59.052312 kernel: nicpf, ver 1.0
Jul 12 00:23:59.052329 kernel: nicvf, ver 1.0
Jul 12 00:23:59.052527 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:23:59.052710 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:23:58 UTC (1752279838)
Jul 12 00:23:59.052734 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:23:59.052753 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:23:59.052770 kernel: Segment Routing with IPv6
Jul 12 00:23:59.052787 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:23:59.052809 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:23:59.052826 kernel: Key type dns_resolver registered
Jul 12 00:23:59.052842 kernel: registered taskstats version 1
Jul 12 00:23:59.052859 kernel: Loading compiled-in X.509 certificates
Jul 12 00:23:59.052877 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: de2ee1d04443f96c763927c453375bbe23b5752a'
Jul 12 00:23:59.052894 kernel: Key type .fscrypt registered
Jul 12 00:23:59.052911 kernel: Key type fscrypt-provisioning registered
Jul 12 00:23:59.056765 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:23:59.056811 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:23:59.056837 kernel: ima: No architecture policies found
Jul 12 00:23:59.056855 kernel: clk: Disabling unused clocks
Jul 12 00:23:59.056873 kernel: Freeing unused kernel memory: 36416K
Jul 12 00:23:59.056890 kernel: Run /init as init process
Jul 12 00:23:59.056908 kernel: with arguments:
Jul 12 00:23:59.057780 kernel: /init
Jul 12 00:23:59.057804 kernel: with environment:
Jul 12 00:23:59.057822 kernel: HOME=/
Jul 12 00:23:59.057839 kernel: TERM=linux
Jul 12 00:23:59.057862 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:23:59.057885 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 12 00:23:59.057908 systemd[1]: Detected virtualization amazon.
Jul 12 00:23:59.057952 systemd[1]: Detected architecture arm64.
Jul 12 00:23:59.057972 systemd[1]: Running in initrd.
Jul 12 00:23:59.057991 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:23:59.058009 systemd[1]: Hostname set to .
Jul 12 00:23:59.058034 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:23:59.058076 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:23:59.058095 systemd[1]: Started systemd-ask-password-console.path.
Jul 12 00:23:59.058113 systemd[1]: Reached target cryptsetup.target.
Jul 12 00:23:59.058131 systemd[1]: Reached target paths.target.
Jul 12 00:23:59.058150 systemd[1]: Reached target slices.target.
Jul 12 00:23:59.058168 systemd[1]: Reached target swap.target.
Jul 12 00:23:59.058186 systemd[1]: Reached target timers.target.
Jul 12 00:23:59.058210 systemd[1]: Listening on iscsid.socket.
Jul 12 00:23:59.058229 systemd[1]: Listening on iscsiuio.socket.
Jul 12 00:23:59.058247 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 12 00:23:59.058266 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 12 00:23:59.058284 systemd[1]: Listening on systemd-journald.socket.
Jul 12 00:23:59.058303 systemd[1]: Listening on systemd-networkd.socket.
Jul 12 00:23:59.058321 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 12 00:23:59.058340 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 12 00:23:59.058363 systemd[1]: Reached target sockets.target.
Jul 12 00:23:59.058382 systemd[1]: Starting kmod-static-nodes.service...
Jul 12 00:23:59.058401 systemd[1]: Finished network-cleanup.service.
Jul 12 00:23:59.058420 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:23:59.058438 systemd[1]: Starting systemd-journald.service...
Jul 12 00:23:59.058457 systemd[1]: Starting systemd-modules-load.service...
Jul 12 00:23:59.058476 systemd[1]: Starting systemd-resolved.service...
Jul 12 00:23:59.058494 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 12 00:23:59.058513 systemd[1]: Finished kmod-static-nodes.service.
Jul 12 00:23:59.058536 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:23:59.058555 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 12 00:23:59.058574 kernel: audit: type=1130 audit(1752279839.008:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:23:59.058593 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 12 00:23:59.058612 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 12 00:23:59.058635 systemd-journald[310]: Journal started
Jul 12 00:23:59.058731 systemd-journald[310]: Runtime Journal (/run/log/journal/ec20b07983aea7130794e4247f6b44eb) is 8.0M, max 75.4M, 67.4M free.
Jul 12 00:23:59.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:23:58.989297 systemd-modules-load[311]: Inserted module 'overlay'
Jul 12 00:23:59.074858 systemd[1]: Started systemd-journald.service.
Jul 12 00:23:59.074899 kernel: audit: type=1130 audit(1752279839.061:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:23:59.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:23:59.076572 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 12 00:23:59.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:23:59.094207 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 12 00:23:59.098454 systemd[1]: Starting dracut-cmdline.service...
Jul 12 00:23:59.113940 kernel: audit: type=1130 audit(1752279839.082:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:23:59.113985 kernel: audit: type=1130 audit(1752279839.092:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:23:59.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:23:59.125454 systemd-resolved[312]: Positive Trust Anchors:
Jul 12 00:23:59.128231 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:23:59.125788 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:23:59.125842 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 12 00:23:59.153785 kernel: Bridge firewalling registered
Jul 12 00:23:59.152309 systemd-modules-load[311]: Inserted module 'br_netfilter'
Jul 12 00:23:59.162231 dracut-cmdline[328]: dracut-dracut-053
Jul 12 00:23:59.174421 dracut-cmdline[328]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65
Jul 12 00:23:59.195831 kernel: SCSI subsystem initialized
Jul 12 00:23:59.214973 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:23:59.215042 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:23:59.218974 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 12 00:23:59.225135 systemd-modules-load[311]: Inserted module 'dm_multipath'
Jul 12 00:23:59.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:23:59.229520 systemd[1]: Finished systemd-modules-load.service.
Jul 12 00:23:59.245891 kernel: audit: type=1130 audit(1752279839.229:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:23:59.233164 systemd[1]: Starting systemd-sysctl.service...
Jul 12 00:23:59.261904 systemd[1]: Finished systemd-sysctl.service.
Jul 12 00:23:59.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:23:59.275950 kernel: audit: type=1130 audit(1752279839.263:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:23:59.349960 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:23:59.370962 kernel: iscsi: registered transport (tcp)
Jul 12 00:23:59.399198 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:23:59.399282 kernel: QLogic iSCSI HBA Driver
Jul 12 00:23:59.561870 systemd-resolved[312]: Defaulting to hostname 'linux'.
Jul 12 00:23:59.565000 kernel: random: crng init done
Jul 12 00:23:59.565255 systemd[1]: Started systemd-resolved.service.
Jul 12 00:23:59.578784 kernel: audit: type=1130 audit(1752279839.565:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:59.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:59.567251 systemd[1]: Reached target nss-lookup.target. Jul 12 00:23:59.598430 systemd[1]: Finished dracut-cmdline.service. Jul 12 00:23:59.600144 systemd[1]: Starting dracut-pre-udev.service... Jul 12 00:23:59.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:23:59.616979 kernel: audit: type=1130 audit(1752279839.596:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:23:59.676979 kernel: raid6: neonx8 gen() 6436 MB/s Jul 12 00:23:59.691960 kernel: raid6: neonx8 xor() 4717 MB/s Jul 12 00:23:59.709957 kernel: raid6: neonx4 gen() 6550 MB/s Jul 12 00:23:59.727956 kernel: raid6: neonx4 xor() 4894 MB/s Jul 12 00:23:59.745956 kernel: raid6: neonx2 gen() 5791 MB/s Jul 12 00:23:59.763959 kernel: raid6: neonx2 xor() 4519 MB/s Jul 12 00:23:59.781957 kernel: raid6: neonx1 gen() 4489 MB/s Jul 12 00:23:59.799973 kernel: raid6: neonx1 xor() 3661 MB/s Jul 12 00:23:59.817959 kernel: raid6: int64x8 gen() 3432 MB/s Jul 12 00:23:59.835957 kernel: raid6: int64x8 xor() 2083 MB/s Jul 12 00:23:59.853956 kernel: raid6: int64x4 gen() 3855 MB/s Jul 12 00:23:59.871963 kernel: raid6: int64x4 xor() 2189 MB/s Jul 12 00:23:59.889957 kernel: raid6: int64x2 gen() 3618 MB/s Jul 12 00:23:59.907956 kernel: raid6: int64x2 xor() 1945 MB/s Jul 12 00:23:59.925964 kernel: raid6: int64x1 gen() 2771 MB/s Jul 12 00:23:59.945463 kernel: raid6: int64x1 xor() 1447 MB/s Jul 12 00:23:59.945501 kernel: raid6: using algorithm neonx4 gen() 6550 MB/s Jul 12 00:23:59.945527 kernel: raid6: .... xor() 4894 MB/s, rmw enabled Jul 12 00:23:59.947298 kernel: raid6: using neon recovery algorithm Jul 12 00:23:59.965964 kernel: xor: measuring software checksum speed Jul 12 00:23:59.969563 kernel: 8regs : 8527 MB/sec Jul 12 00:23:59.969606 kernel: 32regs : 11078 MB/sec Jul 12 00:23:59.971566 kernel: arm64_neon : 9570 MB/sec Jul 12 00:23:59.971597 kernel: xor: using function: 32regs (11078 MB/sec) Jul 12 00:24:00.069974 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 12 00:24:00.087359 systemd[1]: Finished dracut-pre-udev.service. Jul 12 00:24:00.106219 kernel: audit: type=1130 audit(1752279840.085:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:00.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:00.093000 audit: BPF prog-id=7 op=LOAD Jul 12 00:24:00.100000 audit: BPF prog-id=8 op=LOAD Jul 12 00:24:00.106786 systemd[1]: Starting systemd-udevd.service... Jul 12 00:24:00.137303 systemd-udevd[510]: Using default interface naming scheme 'v252'. Jul 12 00:24:00.148746 systemd[1]: Started systemd-udevd.service. Jul 12 00:24:00.154208 systemd[1]: Starting dracut-pre-trigger.service... Jul 12 00:24:00.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:00.185970 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation Jul 12 00:24:00.245845 systemd[1]: Finished dracut-pre-trigger.service. Jul 12 00:24:00.250352 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:24:00.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:00.348483 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:24:00.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:00.470740 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 12 00:24:00.470809 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 12 00:24:00.491038 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 12 00:24:00.491307 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 12 00:24:00.491519 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ec:00:23:2d:2d Jul 12 00:24:00.493977 (udev-worker)[560]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:24:00.500398 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 12 00:24:00.500435 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 12 00:24:00.508970 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 12 00:24:00.517388 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:24:00.517455 kernel: GPT:9289727 != 16777215 Jul 12 00:24:00.519705 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 00:24:00.521067 kernel: GPT:9289727 != 16777215 Jul 12 00:24:00.522982 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:24:00.524534 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:24:00.592970 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (566) Jul 12 00:24:00.631528 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 12 00:24:00.700920 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 12 00:24:00.710229 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 12 00:24:00.722843 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 12 00:24:00.737070 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 12 00:24:00.751804 systemd[1]: Starting disk-uuid.service... Jul 12 00:24:00.768290 disk-uuid[669]: Primary Header is updated. 
Jul 12 00:24:00.768290 disk-uuid[669]: Secondary Entries is updated. Jul 12 00:24:00.768290 disk-uuid[669]: Secondary Header is updated. Jul 12 00:24:00.778972 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:24:00.788960 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:24:00.797971 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:24:01.797167 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:24:01.797232 disk-uuid[670]: The operation has completed successfully. Jul 12 00:24:01.982250 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:24:01.982464 systemd[1]: Finished disk-uuid.service. Jul 12 00:24:01.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:01.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:01.999877 systemd[1]: Starting verity-setup.service... Jul 12 00:24:02.037020 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:24:02.140296 systemd[1]: Found device dev-mapper-usr.device. Jul 12 00:24:02.145813 systemd[1]: Mounting sysusr-usr.mount... Jul 12 00:24:02.153678 systemd[1]: Finished verity-setup.service. Jul 12 00:24:02.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:02.243948 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 12 00:24:02.244974 systemd[1]: Mounted sysusr-usr.mount. Jul 12 00:24:02.248861 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 12 00:24:02.253618 systemd[1]: Starting ignition-setup.service... 
Jul 12 00:24:02.265054 systemd[1]: Starting parse-ip-for-networkd.service... Jul 12 00:24:02.289106 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:24:02.289178 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:24:02.291561 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:24:02.302968 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:24:02.322703 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 12 00:24:02.342560 systemd[1]: Finished ignition-setup.service. Jul 12 00:24:02.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:02.347018 systemd[1]: Starting ignition-fetch-offline.service... Jul 12 00:24:02.422417 systemd[1]: Finished parse-ip-for-networkd.service. Jul 12 00:24:02.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:02.429000 audit: BPF prog-id=9 op=LOAD Jul 12 00:24:02.432342 systemd[1]: Starting systemd-networkd.service... Jul 12 00:24:02.485449 systemd-networkd[1193]: lo: Link UP Jul 12 00:24:02.490123 systemd-networkd[1193]: lo: Gained carrier Jul 12 00:24:02.493301 systemd-networkd[1193]: Enumeration completed Jul 12 00:24:02.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:02.493771 systemd-networkd[1193]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:24:02.494005 systemd[1]: Started systemd-networkd.service. Jul 12 00:24:02.496640 systemd[1]: Reached target network.target. 
Jul 12 00:24:02.499741 systemd[1]: Starting iscsiuio.service... Jul 12 00:24:02.519607 systemd-networkd[1193]: eth0: Link UP Jul 12 00:24:02.519778 systemd-networkd[1193]: eth0: Gained carrier Jul 12 00:24:02.529109 systemd[1]: Started iscsiuio.service. Jul 12 00:24:02.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:02.532411 systemd[1]: Starting iscsid.service... Jul 12 00:24:02.542135 systemd-networkd[1193]: eth0: DHCPv4 address 172.31.23.9/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 12 00:24:02.548188 iscsid[1198]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:24:02.548188 iscsid[1198]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 12 00:24:02.548188 iscsid[1198]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 12 00:24:02.548188 iscsid[1198]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 12 00:24:02.569836 iscsid[1198]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:24:02.569836 iscsid[1198]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 12 00:24:02.582125 systemd[1]: Started iscsid.service. Jul 12 00:24:02.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:02.592963 systemd[1]: Starting dracut-initqueue.service... 
Jul 12 00:24:02.616493 systemd[1]: Finished dracut-initqueue.service. Jul 12 00:24:02.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:02.620523 systemd[1]: Reached target remote-fs-pre.target. Jul 12 00:24:02.622814 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:24:02.625082 systemd[1]: Reached target remote-fs.target. Jul 12 00:24:02.631098 systemd[1]: Starting dracut-pre-mount.service... Jul 12 00:24:02.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:02.649806 systemd[1]: Finished dracut-pre-mount.service. Jul 12 00:24:03.139550 ignition[1129]: Ignition 2.14.0 Jul 12 00:24:03.139583 ignition[1129]: Stage: fetch-offline Jul 12 00:24:03.140266 ignition[1129]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:24:03.140331 ignition[1129]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:24:03.168055 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:24:03.169980 ignition[1129]: Ignition finished successfully Jul 12 00:24:03.175621 systemd[1]: Finished ignition-fetch-offline.service. Jul 12 00:24:03.193134 kernel: kauditd_printk_skb: 16 callbacks suppressed Jul 12 00:24:03.193172 kernel: audit: type=1130 audit(1752279843.176:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:03.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.179435 systemd[1]: Starting ignition-fetch.service... Jul 12 00:24:03.208153 ignition[1217]: Ignition 2.14.0 Jul 12 00:24:03.208181 ignition[1217]: Stage: fetch Jul 12 00:24:03.208473 ignition[1217]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:24:03.208531 ignition[1217]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:24:03.223132 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:24:03.227164 ignition[1217]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:24:03.232657 ignition[1217]: INFO : PUT result: OK Jul 12 00:24:03.236342 ignition[1217]: DEBUG : parsed url from cmdline: "" Jul 12 00:24:03.236342 ignition[1217]: INFO : no config URL provided Jul 12 00:24:03.236342 ignition[1217]: INFO : reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:24:03.243800 ignition[1217]: INFO : no config at "/usr/lib/ignition/user.ign" Jul 12 00:24:03.243800 ignition[1217]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:24:03.249365 ignition[1217]: INFO : PUT result: OK Jul 12 00:24:03.249365 ignition[1217]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 12 00:24:03.254856 ignition[1217]: INFO : GET result: OK Jul 12 00:24:03.256755 ignition[1217]: DEBUG : parsing config with SHA512: 66261930aa82ec5e8c60521d8478bd1bc043b022b5504c3f8b08e191f3a805069c00044885f6d6696e7c40a24f46cb2c6564fb8b445f748225219a7dd1f9e328 Jul 12 00:24:03.270119 unknown[1217]: fetched base config from "system" Jul 12 00:24:03.270148 unknown[1217]: fetched base config from "system" Jul 12 
00:24:03.270163 unknown[1217]: fetched user config from "aws" Jul 12 00:24:03.277306 ignition[1217]: fetch: fetch complete Jul 12 00:24:03.277333 ignition[1217]: fetch: fetch passed Jul 12 00:24:03.277441 ignition[1217]: Ignition finished successfully Jul 12 00:24:03.285166 systemd[1]: Finished ignition-fetch.service. Jul 12 00:24:03.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.290396 systemd[1]: Starting ignition-kargs.service... Jul 12 00:24:03.301088 kernel: audit: type=1130 audit(1752279843.287:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.314817 ignition[1223]: Ignition 2.14.0 Jul 12 00:24:03.314845 ignition[1223]: Stage: kargs Jul 12 00:24:03.315168 ignition[1223]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:24:03.315225 ignition[1223]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:24:03.331606 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:24:03.334406 ignition[1223]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:24:03.337659 ignition[1223]: INFO : PUT result: OK Jul 12 00:24:03.343476 ignition[1223]: kargs: kargs passed Jul 12 00:24:03.343578 ignition[1223]: Ignition finished successfully Jul 12 00:24:03.348706 systemd[1]: Finished ignition-kargs.service. Jul 12 00:24:03.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.350334 systemd[1]: Starting ignition-disks.service... 
Jul 12 00:24:03.366969 kernel: audit: type=1130 audit(1752279843.346:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.372362 ignition[1229]: Ignition 2.14.0 Jul 12 00:24:03.372391 ignition[1229]: Stage: disks Jul 12 00:24:03.372693 ignition[1229]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:24:03.372750 ignition[1229]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:24:03.392567 ignition[1229]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:24:03.395637 ignition[1229]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:24:03.399516 ignition[1229]: INFO : PUT result: OK Jul 12 00:24:03.405216 ignition[1229]: disks: disks passed Jul 12 00:24:03.405325 ignition[1229]: Ignition finished successfully Jul 12 00:24:03.407023 systemd[1]: Finished ignition-disks.service. Jul 12 00:24:03.413084 systemd[1]: Reached target initrd-root-device.target. Jul 12 00:24:03.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.423260 kernel: audit: type=1130 audit(1752279843.411:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.424702 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:24:03.428743 systemd[1]: Reached target local-fs.target. Jul 12 00:24:03.432380 systemd[1]: Reached target sysinit.target. Jul 12 00:24:03.436113 systemd[1]: Reached target basic.target. Jul 12 00:24:03.441128 systemd[1]: Starting systemd-fsck-root.service... 
Jul 12 00:24:03.477695 systemd-fsck[1237]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 12 00:24:03.488037 systemd[1]: Finished systemd-fsck-root.service. Jul 12 00:24:03.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.493401 systemd[1]: Mounting sysroot.mount... Jul 12 00:24:03.505001 kernel: audit: type=1130 audit(1752279843.486:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.522962 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 12 00:24:03.524994 systemd[1]: Mounted sysroot.mount. Jul 12 00:24:03.526666 systemd[1]: Reached target initrd-root-fs.target. Jul 12 00:24:03.536010 systemd[1]: Mounting sysroot-usr.mount... Jul 12 00:24:03.538835 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 12 00:24:03.538911 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:24:03.538987 systemd[1]: Reached target ignition-diskful.target. Jul 12 00:24:03.548089 systemd[1]: Mounted sysroot-usr.mount. Jul 12 00:24:03.577710 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:24:03.585702 systemd[1]: Starting initrd-setup-root.service... 
Jul 12 00:24:03.603970 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1254) Jul 12 00:24:03.604482 initrd-setup-root[1259]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:24:03.613835 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:24:03.613903 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:24:03.613980 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:24:03.624986 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:24:03.626412 initrd-setup-root[1285]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:24:03.629853 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:24:03.640740 initrd-setup-root[1293]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:24:03.650615 initrd-setup-root[1301]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:24:03.825791 systemd[1]: Finished initrd-setup-root.service. Jul 12 00:24:03.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.827389 systemd[1]: Starting ignition-mount.service... Jul 12 00:24:03.839495 kernel: audit: type=1130 audit(1752279843.824:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.842607 systemd[1]: Starting sysroot-boot.service... Jul 12 00:24:03.853470 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 12 00:24:03.853655 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Jul 12 00:24:03.883740 ignition[1320]: INFO : Ignition 2.14.0 Jul 12 00:24:03.883740 ignition[1320]: INFO : Stage: mount Jul 12 00:24:03.889868 ignition[1320]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:24:03.889868 ignition[1320]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:24:03.904231 ignition[1320]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:24:03.907794 ignition[1320]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:24:03.915000 ignition[1320]: INFO : PUT result: OK Jul 12 00:24:03.921533 ignition[1320]: INFO : mount: mount passed Jul 12 00:24:03.921562 systemd[1]: Finished sysroot-boot.service. Jul 12 00:24:03.934605 kernel: audit: type=1130 audit(1752279843.923:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.932396 systemd[1]: Finished ignition-mount.service. Jul 12 00:24:03.936657 ignition[1320]: INFO : Ignition finished successfully Jul 12 00:24:03.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:03.948508 systemd[1]: Starting ignition-files.service... Jul 12 00:24:03.954268 kernel: audit: type=1130 audit(1752279843.939:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:03.963638 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:24:03.983963 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1330) Jul 12 00:24:03.990064 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:24:03.990111 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:24:03.990136 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:24:03.998958 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:24:04.004222 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:24:04.023574 ignition[1349]: INFO : Ignition 2.14.0 Jul 12 00:24:04.023574 ignition[1349]: INFO : Stage: files Jul 12 00:24:04.029593 ignition[1349]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:24:04.029593 ignition[1349]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:24:04.045414 ignition[1349]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:24:04.048233 ignition[1349]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:24:04.051917 ignition[1349]: INFO : PUT result: OK Jul 12 00:24:04.057648 ignition[1349]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:24:04.065311 ignition[1349]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:24:04.068699 ignition[1349]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:24:04.113217 ignition[1349]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:24:04.116831 ignition[1349]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:24:04.122158 unknown[1349]: wrote ssh authorized keys file for user: core Jul 12 
00:24:04.124978 ignition[1349]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:24:04.129696 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:24:04.134219 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:24:04.138718 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:24:04.143576 ignition[1349]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 12 00:24:04.154105 systemd-networkd[1193]: eth0: Gained IPv6LL Jul 12 00:24:04.247897 ignition[1349]: INFO : GET result: OK Jul 12 00:24:04.462746 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:24:04.468097 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:24:04.468097 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:24:04.468097 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:24:04.468097 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:24:04.468097 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Jul 12 00:24:04.468097 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem 
partition
Jul 12 00:24:04.505776 ignition[1349]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem114199162"
Jul 12 00:24:04.509301 ignition[1349]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem114199162": device or resource busy
Jul 12 00:24:04.509301 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem114199162", trying btrfs: device or resource busy
Jul 12 00:24:04.509301 ignition[1349]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem114199162"
Jul 12 00:24:04.521240 ignition[1349]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem114199162"
Jul 12 00:24:04.521240 ignition[1349]: INFO : op(3): [started] unmounting "/mnt/oem114199162"
Jul 12 00:24:04.527606 ignition[1349]: INFO : op(3): [finished] unmounting "/mnt/oem114199162"
Jul 12 00:24:04.530654 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Jul 12 00:24:04.530654 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:24:04.530654 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:24:04.530654 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:24:04.530654 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:24:04.553379 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:24:04.553379 ignition[1349]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 12 00:24:05.032034 ignition[1349]: INFO : GET result: OK
Jul 12 00:24:05.201596 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:24:05.206570 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:24:05.206570 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:24:05.206570 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:24:05.219864 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:24:05.224457 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 12 00:24:05.229316 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 12 00:24:05.242000 ignition[1349]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1470256046"
Jul 12 00:24:05.245507 ignition[1349]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1470256046": device or resource busy
Jul 12 00:24:05.245507 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1470256046", trying btrfs: device or resource busy
Jul 12 00:24:05.245507 ignition[1349]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1470256046"
Jul 12 00:24:05.257803 ignition[1349]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1470256046"
Jul 12 00:24:05.262959 ignition[1349]: INFO : op(6): [started] unmounting "/mnt/oem1470256046"
Jul 12 00:24:05.262959 ignition[1349]: INFO : op(6): [finished] unmounting "/mnt/oem1470256046"
Jul 12 00:24:05.262959 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 12 00:24:05.262959 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:24:05.262959 ignition[1349]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 12 00:24:05.288806 systemd[1]: mnt-oem1470256046.mount: Deactivated successfully.
Jul 12 00:24:05.809446 ignition[1349]: INFO : GET result: OK
Jul 12 00:24:06.376224 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:24:06.381374 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Jul 12 00:24:06.381374 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 12 00:24:06.398997 ignition[1349]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3613619127"
Jul 12 00:24:06.403629 ignition[1349]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3613619127": device or resource busy
Jul 12 00:24:06.403629 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3613619127", trying btrfs: device or resource busy
Jul 12 00:24:06.403629 ignition[1349]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3613619127"
Jul 12 00:24:06.403629 ignition[1349]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3613619127"
Jul 12 00:24:06.403629 ignition[1349]: INFO : op(9): [started] unmounting "/mnt/oem3613619127"
Jul 12 00:24:06.403629 ignition[1349]: INFO : op(9): [finished] unmounting "/mnt/oem3613619127"
Jul 12 00:24:06.403629 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Jul 12 00:24:06.403629 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Jul 12 00:24:06.403629 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 12 00:24:06.445310 systemd[1]: mnt-oem3613619127.mount: Deactivated successfully.
Jul 12 00:24:06.465408 ignition[1349]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3037402669"
Jul 12 00:24:06.469445 ignition[1349]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3037402669": device or resource busy
Jul 12 00:24:06.469445 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3037402669", trying btrfs: device or resource busy
Jul 12 00:24:06.469445 ignition[1349]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3037402669"
Jul 12 00:24:06.483974 ignition[1349]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3037402669"
Jul 12 00:24:06.483974 ignition[1349]: INFO : op(c): [started] unmounting "/mnt/oem3037402669"
Jul 12 00:24:06.483974 ignition[1349]: INFO : op(c): [finished] unmounting "/mnt/oem3037402669"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(12): [started] processing unit "amazon-ssm-agent.service"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(12): op(13): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(12): op(13): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(12): [finished] processing unit "amazon-ssm-agent.service"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(14): [started] processing unit "nvidia.service"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(14): [finished] processing unit "nvidia.service"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(15): [started] processing unit "containerd.service"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(15): [finished] processing unit "containerd.service"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(17): [started] processing unit "prepare-helm.service"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:24:06.483974 ignition[1349]: INFO : files: op(17): [finished] processing unit "prepare-helm.service"
Jul 12 00:24:06.560177 ignition[1349]: INFO : files: op(19): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 12 00:24:06.560177 ignition[1349]: INFO : files: op(19): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 12 00:24:06.560177 ignition[1349]: INFO : files: op(1a): [started] setting preset to enabled for "amazon-ssm-agent.service"
Jul 12 00:24:06.560177 ignition[1349]: INFO : files: op(1a): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Jul 12 00:24:06.560177 ignition[1349]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service"
Jul 12 00:24:06.560177 ignition[1349]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service"
Jul 12 00:24:06.560177 ignition[1349]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:24:06.560177 ignition[1349]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:24:06.598969 ignition[1349]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:24:06.598969 ignition[1349]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:24:06.598969 ignition[1349]: INFO : files: files passed
Jul 12 00:24:06.598969 ignition[1349]: INFO : Ignition finished successfully
Jul 12 00:24:06.626603 kernel: audit: type=1130 audit(1752279846.607:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.603783 systemd[1]: Finished ignition-files.service.
Jul 12 00:24:06.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.626710 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 12 00:24:06.630430 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 12 00:24:06.631767 systemd[1]: Starting ignition-quench.service...
Jul 12 00:24:06.639032 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:24:06.662349 kernel: audit: type=1130 audit(1752279846.638:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.639236 systemd[1]: Finished ignition-quench.service.
Jul 12 00:24:06.670108 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:24:06.675221 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 12 00:24:06.679916 systemd[1]: Reached target ignition-complete.target.
Jul 12 00:24:06.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.685766 systemd[1]: Starting initrd-parse-etc.service...
Jul 12 00:24:06.715234 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:24:06.717971 systemd[1]: Finished initrd-parse-etc.service.
Jul 12 00:24:06.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.722457 systemd[1]: Reached target initrd-fs.target.
Jul 12 00:24:06.726343 systemd[1]: Reached target initrd.target.
Jul 12 00:24:06.730191 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 12 00:24:06.734883 systemd[1]: Starting dracut-pre-pivot.service...
Jul 12 00:24:06.759109 systemd[1]: Finished dracut-pre-pivot.service.
Jul 12 00:24:06.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.762657 systemd[1]: Starting initrd-cleanup.service...
Jul 12 00:24:06.784892 systemd[1]: Stopped target nss-lookup.target.
Jul 12 00:24:06.790917 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 12 00:24:06.795471 systemd[1]: Stopped target timers.target.
Jul 12 00:24:06.799307 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:24:06.802065 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 12 00:24:06.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.806490 systemd[1]: Stopped target initrd.target.
Jul 12 00:24:06.810267 systemd[1]: Stopped target basic.target.
Jul 12 00:24:06.814138 systemd[1]: Stopped target ignition-complete.target.
Jul 12 00:24:06.818610 systemd[1]: Stopped target ignition-diskful.target.
Jul 12 00:24:06.822998 systemd[1]: Stopped target initrd-root-device.target.
Jul 12 00:24:06.827590 systemd[1]: Stopped target remote-fs.target.
Jul 12 00:24:06.831623 systemd[1]: Stopped target remote-fs-pre.target.
Jul 12 00:24:06.835962 systemd[1]: Stopped target sysinit.target.
Jul 12 00:24:06.839940 systemd[1]: Stopped target local-fs.target.
Jul 12 00:24:06.843880 systemd[1]: Stopped target local-fs-pre.target.
Jul 12 00:24:06.848142 systemd[1]: Stopped target swap.target.
Jul 12 00:24:06.856369 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:24:06.859151 systemd[1]: Stopped dracut-pre-mount.service.
Jul 12 00:24:06.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.863749 systemd[1]: Stopped target cryptsetup.target.
Jul 12 00:24:06.868178 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:24:06.871045 systemd[1]: Stopped dracut-initqueue.service.
Jul 12 00:24:06.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.875684 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:24:06.878906 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 12 00:24:06.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.884452 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:24:06.887050 systemd[1]: Stopped ignition-files.service.
Jul 12 00:24:06.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.892904 systemd[1]: Stopping ignition-mount.service...
Jul 12 00:24:06.913862 ignition[1387]: INFO : Ignition 2.14.0
Jul 12 00:24:06.913862 ignition[1387]: INFO : Stage: umount
Jul 12 00:24:06.913862 ignition[1387]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 12 00:24:06.913862 ignition[1387]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 12 00:24:06.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.966957 iscsid[1198]: iscsid shutting down.
Jul 12 00:24:06.919406 systemd[1]: Stopping iscsid.service...
Jul 12 00:24:06.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.976367 ignition[1387]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:24:06.976367 ignition[1387]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:24:06.976367 ignition[1387]: INFO : PUT result: OK
Jul 12 00:24:06.976367 ignition[1387]: INFO : umount: umount passed
Jul 12 00:24:06.976367 ignition[1387]: INFO : Ignition finished successfully
Jul 12 00:24:06.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.926369 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:24:06.928180 systemd[1]: Stopped kmod-static-nodes.service.
Jul 12 00:24:06.947708 systemd[1]: Stopping sysroot-boot.service...
Jul 12 00:24:07.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:06.957504 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:24:06.957835 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 12 00:24:06.975182 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:24:06.975481 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 12 00:24:06.989969 systemd[1]: iscsid.service: Deactivated successfully.
Jul 12 00:24:06.992041 systemd[1]: Stopped iscsid.service.
Jul 12 00:24:06.995150 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:24:06.995359 systemd[1]: Stopped ignition-mount.service.
Jul 12 00:24:06.998310 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:24:06.998534 systemd[1]: Stopped ignition-disks.service.
Jul 12 00:24:07.001337 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:24:07.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.001558 systemd[1]: Stopped ignition-kargs.service.
Jul 12 00:24:07.004050 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 12 00:24:07.004281 systemd[1]: Stopped ignition-fetch.service.
Jul 12 00:24:07.006754 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:24:07.006996 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 12 00:24:07.013944 systemd[1]: Stopped target paths.target.
Jul 12 00:24:07.018680 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:24:07.021368 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 12 00:24:07.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.025277 systemd[1]: Stopped target slices.target.
Jul 12 00:24:07.027254 systemd[1]: Stopped target sockets.target.
Jul 12 00:24:07.034269 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:24:07.036262 systemd[1]: Closed iscsid.socket.
Jul 12 00:24:07.039161 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:24:07.039268 systemd[1]: Stopped ignition-setup.service.
Jul 12 00:24:07.044854 systemd[1]: Stopping iscsiuio.service...
Jul 12 00:24:07.058383 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:24:07.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.059659 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 12 00:24:07.060819 systemd[1]: Stopped iscsiuio.service.
Jul 12 00:24:07.082732 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:24:07.083653 systemd[1]: Finished initrd-cleanup.service.
Jul 12 00:24:07.118508 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:24:07.123479 systemd[1]: Stopped sysroot-boot.service.
Jul 12 00:24:07.145125 systemd[1]: Stopped target network.target.
Jul 12 00:24:07.149600 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:24:07.149739 systemd[1]: Closed iscsiuio.socket.
Jul 12 00:24:07.155872 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:24:07.158907 systemd[1]: Stopped initrd-setup-root.service.
Jul 12 00:24:07.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.163968 systemd[1]: Stopping systemd-networkd.service...
Jul 12 00:24:07.168222 systemd[1]: Stopping systemd-resolved.service...
Jul 12 00:24:07.173054 systemd-networkd[1193]: eth0: DHCPv6 lease lost
Jul 12 00:24:07.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.175524 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:24:07.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.195000 audit: BPF prog-id=9 op=UNLOAD
Jul 12 00:24:07.175746 systemd[1]: Stopped systemd-networkd.service.
Jul 12 00:24:07.177025 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:24:07.177095 systemd[1]: Closed systemd-networkd.socket.
Jul 12 00:24:07.181782 systemd[1]: Stopping network-cleanup.service...
Jul 12 00:24:07.182493 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:24:07.183522 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 12 00:24:07.184051 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:24:07.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.184154 systemd[1]: Stopped systemd-sysctl.service.
Jul 12 00:24:07.188130 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:24:07.229000 audit: BPF prog-id=6 op=UNLOAD
Jul 12 00:24:07.188238 systemd[1]: Stopped systemd-modules-load.service.
Jul 12 00:24:07.199250 systemd[1]: Stopping systemd-udevd.service...
Jul 12 00:24:07.216122 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:24:07.216408 systemd[1]: Stopped systemd-resolved.service.
Jul 12 00:24:07.243428 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:24:07.244093 systemd[1]: Stopped network-cleanup.service.
Jul 12 00:24:07.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.254631 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:24:07.259135 systemd[1]: Stopped systemd-udevd.service.
Jul 12 00:24:07.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.263555 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:24:07.263672 systemd[1]: Closed systemd-udevd-control.socket.
Jul 12 00:24:07.271177 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:24:07.271286 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 12 00:24:07.275988 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:24:07.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.276100 systemd[1]: Stopped dracut-pre-udev.service.
Jul 12 00:24:07.280388 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:24:07.280499 systemd[1]: Stopped dracut-cmdline.service.
Jul 12 00:24:07.285461 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:24:07.285570 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 12 00:24:07.289308 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 12 00:24:07.305097 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:24:07.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.305244 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 12 00:24:07.324428 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:24:07.324698 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 12 00:24:07.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:07.332303 systemd[1]: Reached target initrd-switch-root.target.
Jul 12 00:24:07.338348 systemd[1]: Starting initrd-switch-root.service...
Jul 12 00:24:07.378193 systemd[1]: Switching root.
Jul 12 00:24:07.384000 audit: BPF prog-id=5 op=UNLOAD
Jul 12 00:24:07.384000 audit: BPF prog-id=4 op=UNLOAD
Jul 12 00:24:07.384000 audit: BPF prog-id=3 op=UNLOAD
Jul 12 00:24:07.385000 audit: BPF prog-id=8 op=UNLOAD
Jul 12 00:24:07.385000 audit: BPF prog-id=7 op=UNLOAD
Jul 12 00:24:07.407177 systemd-journald[310]: Journal stopped
Jul 12 00:24:13.627748 systemd-journald[310]: Received SIGTERM from PID 1 (systemd).
Jul 12 00:24:13.627872 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 12 00:24:13.627915 kernel: SELinux: Class anon_inode not defined in policy.
Jul 12 00:24:13.627965 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 12 00:24:13.627996 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 00:24:13.628040 kernel: SELinux: policy capability open_perms=1
Jul 12 00:24:13.628072 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 00:24:13.628101 kernel: SELinux: policy capability always_check_network=0
Jul 12 00:24:13.628136 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 00:24:13.628166 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 00:24:13.628195 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 00:24:13.628225 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 00:24:13.628254 kernel: kauditd_printk_skb: 46 callbacks suppressed
Jul 12 00:24:13.628286 kernel: audit: type=1403 audit(1752279848.647:83): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 00:24:13.628320 systemd[1]: Successfully loaded SELinux policy in 118.267ms.
Jul 12 00:24:13.628374 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.533ms.
Jul 12 00:24:13.628409 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 12 00:24:13.628444 systemd[1]: Detected virtualization amazon.
Jul 12 00:24:13.628477 systemd[1]: Detected architecture arm64.
Jul 12 00:24:13.628507 systemd[1]: Detected first boot.
Jul 12 00:24:13.628539 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:24:13.628572 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 12 00:24:13.628606 kernel: audit: type=1400 audit(1752279849.050:84): avc: denied { associate } for pid=1437 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Jul 12 00:24:13.628639 kernel: audit: type=1300 audit(1752279849.050:84): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014766c a1=40000c8ae0 a2=40000cea00 a3=32 items=0 ppid=1420 pid=1437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:24:13.628672 kernel: audit: type=1327 audit(1752279849.050:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 12 00:24:13.628704 kernel: audit: type=1400 audit(1752279849.054:85): avc: denied { associate } for pid=1437 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Jul 12 00:24:13.628737 kernel: audit: type=1300 audit(1752279849.054:85): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000147745 a2=1ed a3=0 items=2 ppid=1420 pid=1437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:24:13.628768 kernel: audit: type=1307 audit(1752279849.054:85): cwd="/"
Jul 12 00:24:13.628798 kernel: audit: type=1302 audit(1752279849.054:85): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:24:13.628828 kernel: audit: type=1302 audit(1752279849.054:85): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:24:13.628859 kernel: audit: type=1327 audit(1752279849.054:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 12 00:24:13.628892 systemd[1]: Populated /etc with preset unit settings.
Jul 12 00:24:13.635698 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 12 00:24:13.635754 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 12 00:24:13.635790 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:24:13.635822 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 00:24:13.635855 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device.
Jul 12 00:24:13.635887 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 12 00:24:13.635980 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 12 00:24:13.636022 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Jul 12 00:24:13.636054 systemd[1]: Created slice system-getty.slice.
Jul 12 00:24:13.636088 systemd[1]: Created slice system-modprobe.slice.
Jul 12 00:24:13.636122 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 12 00:24:13.638885 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 12 00:24:13.638962 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 12 00:24:13.638995 systemd[1]: Created slice user.slice.
Jul 12 00:24:13.639024 systemd[1]: Started systemd-ask-password-console.path.
Jul 12 00:24:13.639058 systemd[1]: Started systemd-ask-password-wall.path.
Jul 12 00:24:13.639087 systemd[1]: Set up automount boot.automount.
Jul 12 00:24:13.639123 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 12 00:24:13.639155 systemd[1]: Reached target integritysetup.target.
Jul 12 00:24:13.639187 systemd[1]: Reached target remote-cryptsetup.target.
Jul 12 00:24:13.639219 systemd[1]: Reached target remote-fs.target.
Jul 12 00:24:13.639248 systemd[1]: Reached target slices.target.
Jul 12 00:24:13.639277 systemd[1]: Reached target swap.target.
Jul 12 00:24:13.639308 systemd[1]: Reached target torcx.target.
Jul 12 00:24:13.639337 systemd[1]: Reached target veritysetup.target.
Jul 12 00:24:13.639368 systemd[1]: Listening on systemd-coredump.socket.
Jul 12 00:24:13.639400 systemd[1]: Listening on systemd-initctl.socket.
Jul 12 00:24:13.639436 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 12 00:24:13.639468 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 12 00:24:13.639498 systemd[1]: Listening on systemd-journald.socket.
Jul 12 00:24:13.639529 systemd[1]: Listening on systemd-networkd.socket.
Jul 12 00:24:13.639558 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 12 00:24:13.639587 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 12 00:24:13.639620 systemd[1]: Listening on systemd-userdbd.socket.
Jul 12 00:24:13.639651 systemd[1]: Mounting dev-hugepages.mount...
Jul 12 00:24:13.639682 systemd[1]: Mounting dev-mqueue.mount...
Jul 12 00:24:13.639714 systemd[1]: Mounting media.mount...
Jul 12 00:24:13.639744 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 12 00:24:13.639783 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 12 00:24:13.639813 systemd[1]: Mounting tmp.mount...
Jul 12 00:24:13.639842 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 12 00:24:13.639871 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:24:13.639899 systemd[1]: Starting kmod-static-nodes.service...
Jul 12 00:24:13.639949 systemd[1]: Starting modprobe@configfs.service...
Jul 12 00:24:13.639984 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:24:13.640018 systemd[1]: Starting modprobe@drm.service...
Jul 12 00:24:13.640047 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:24:13.640090 systemd[1]: Starting modprobe@fuse.service...
Jul 12 00:24:13.640124 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:24:13.640156 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 00:24:13.640186 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 12 00:24:13.640216 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 12 00:24:13.640245 systemd[1]: Starting systemd-journald.service... Jul 12 00:24:13.640274 systemd[1]: Starting systemd-modules-load.service... Jul 12 00:24:13.640313 systemd[1]: Starting systemd-network-generator.service... Jul 12 00:24:13.640343 systemd[1]: Starting systemd-remount-fs.service... Jul 12 00:24:13.640371 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:24:13.640402 systemd[1]: Mounted dev-hugepages.mount. Jul 12 00:24:13.640431 systemd[1]: Mounted dev-mqueue.mount. Jul 12 00:24:13.640460 systemd[1]: Mounted media.mount. Jul 12 00:24:13.640488 systemd[1]: Mounted sys-kernel-debug.mount. Jul 12 00:24:13.640517 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 12 00:24:13.640546 systemd[1]: Mounted tmp.mount. Jul 12 00:24:13.640580 systemd[1]: Finished kmod-static-nodes.service. Jul 12 00:24:13.640609 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:24:13.640640 systemd[1]: Finished modprobe@configfs.service. Jul 12 00:24:13.640671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:24:13.640700 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:24:13.640731 kernel: loop: module loaded Jul 12 00:24:13.640762 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:24:13.640792 systemd[1]: Finished modprobe@drm.service. Jul 12 00:24:13.640820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:24:13.640853 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:24:13.640882 systemd[1]: Finished systemd-modules-load.service. Jul 12 00:24:13.640913 systemd[1]: Finished systemd-network-generator.service. Jul 12 00:24:13.648164 systemd[1]: Finished systemd-remount-fs.service. Jul 12 00:24:13.648207 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:24:13.648245 systemd[1]: Finished modprobe@loop.service. 
Jul 12 00:24:13.648275 systemd[1]: Reached target network-pre.target. Jul 12 00:24:13.648307 systemd-journald[1536]: Journal started Jul 12 00:24:13.648408 systemd-journald[1536]: Runtime Journal (/run/log/journal/ec20b07983aea7130794e4247f6b44eb) is 8.0M, max 75.4M, 67.4M free. Jul 12 00:24:13.283000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:24:13.283000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 12 00:24:13.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:13.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.621000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 12 00:24:13.621000 audit[1536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffca75adc0 a2=4000 a3=1 items=0 ppid=1 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:24:13.621000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 12 00:24:13.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:13.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.656167 systemd[1]: Mounting sys-kernel-config.mount... Jul 12 00:24:13.663008 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:24:13.667984 kernel: fuse: init (API version 7.34) Jul 12 00:24:13.686024 systemd[1]: Starting systemd-hwdb-update.service... Jul 12 00:24:13.686112 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:24:13.702062 systemd[1]: Starting systemd-random-seed.service... Jul 12 00:24:13.702164 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:24:13.715077 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:24:13.722737 systemd[1]: Started systemd-journald.service. 
Jul 12 00:24:13.738089 kernel: kauditd_printk_skb: 19 callbacks suppressed Jul 12 00:24:13.738182 kernel: audit: type=1130 audit(1752279853.721:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.725452 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:24:13.728310 systemd[1]: Finished modprobe@fuse.service. Jul 12 00:24:13.735427 systemd[1]: Mounted sys-kernel-config.mount. Jul 12 00:24:13.754085 kernel: audit: type=1130 audit(1752279853.732:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.740381 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 12 00:24:13.746535 systemd[1]: Starting systemd-journal-flush.service... Jul 12 00:24:13.787491 kernel: audit: type=1131 audit(1752279853.732:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:13.761913 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 12 00:24:13.783572 systemd[1]: Finished systemd-random-seed.service. Jul 12 00:24:13.787770 systemd[1]: Reached target first-boot-complete.target. Jul 12 00:24:13.809166 kernel: audit: type=1130 audit(1752279853.786:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.812832 systemd-journald[1536]: Time spent on flushing to /var/log/journal/ec20b07983aea7130794e4247f6b44eb is 100.139ms for 1084 entries. Jul 12 00:24:13.812832 systemd-journald[1536]: System Journal (/var/log/journal/ec20b07983aea7130794e4247f6b44eb) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:24:13.923079 systemd-journald[1536]: Received client request to flush runtime journal. Jul 12 00:24:13.923160 kernel: audit: type=1130 audit(1752279853.865:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.923206 kernel: audit: type=1130 audit(1752279853.882:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:13.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.863121 systemd[1]: Finished systemd-sysctl.service. Jul 12 00:24:13.879680 systemd[1]: Finished flatcar-tmpfiles.service. Jul 12 00:24:13.886181 systemd[1]: Starting systemd-sysusers.service... Jul 12 00:24:13.928054 systemd[1]: Finished systemd-journal-flush.service. Jul 12 00:24:13.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.945072 kernel: audit: type=1130 audit(1752279853.931:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.972343 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:24:13.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:13.977711 systemd[1]: Starting systemd-udev-settle.service... Jul 12 00:24:13.987949 kernel: audit: type=1130 audit(1752279853.974:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:14.002662 udevadm[1587]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 12 00:24:14.115777 systemd[1]: Finished systemd-sysusers.service. 
Jul 12 00:24:14.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:14.130286 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 12 00:24:14.132323 kernel: audit: type=1130 audit(1752279854.118:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:14.268106 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 12 00:24:14.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:14.287243 kernel: audit: type=1130 audit(1752279854.271:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:14.681044 systemd[1]: Finished systemd-hwdb-update.service. Jul 12 00:24:14.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:14.685871 systemd[1]: Starting systemd-udevd.service... Jul 12 00:24:14.728389 systemd-udevd[1593]: Using default interface naming scheme 'v252'. Jul 12 00:24:14.788113 systemd[1]: Started systemd-udevd.service. Jul 12 00:24:14.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:14.793150 systemd[1]: Starting systemd-networkd.service... Jul 12 00:24:14.812884 systemd[1]: Starting systemd-userdbd.service... Jul 12 00:24:14.873306 (udev-worker)[1595]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:24:14.915230 systemd[1]: Found device dev-ttyS0.device. Jul 12 00:24:14.941242 systemd[1]: Started systemd-userdbd.service. Jul 12 00:24:14.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.108458 systemd-networkd[1596]: lo: Link UP Jul 12 00:24:15.109042 systemd-networkd[1596]: lo: Gained carrier Jul 12 00:24:15.110055 systemd-networkd[1596]: Enumeration completed Jul 12 00:24:15.110264 systemd[1]: Started systemd-networkd.service. Jul 12 00:24:15.110915 systemd-networkd[1596]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:24:15.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.115903 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 12 00:24:15.125574 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:24:15.124745 systemd-networkd[1596]: eth0: Link UP Jul 12 00:24:15.125062 systemd-networkd[1596]: eth0: Gained carrier Jul 12 00:24:15.146306 systemd-networkd[1596]: eth0: DHCPv4 address 172.31.23.9/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 12 00:24:15.298472 systemd[1]: Finished systemd-udev-settle.service. Jul 12 00:24:15.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:15.313328 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 12 00:24:15.318592 systemd[1]: Starting lvm2-activation-early.service... Jul 12 00:24:15.393892 lvm[1713]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:24:15.432723 systemd[1]: Finished lvm2-activation-early.service. Jul 12 00:24:15.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.435288 systemd[1]: Reached target cryptsetup.target. Jul 12 00:24:15.439679 systemd[1]: Starting lvm2-activation.service... Jul 12 00:24:15.449390 lvm[1715]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:24:15.483975 systemd[1]: Finished lvm2-activation.service. Jul 12 00:24:15.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.486408 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:24:15.488392 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:24:15.488435 systemd[1]: Reached target local-fs.target. Jul 12 00:24:15.490279 systemd[1]: Reached target machines.target. Jul 12 00:24:15.494473 systemd[1]: Starting ldconfig.service... Jul 12 00:24:15.498601 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:24:15.498729 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:24:15.501607 systemd[1]: Starting systemd-boot-update.service... 
Jul 12 00:24:15.509081 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 12 00:24:15.516687 systemd[1]: Starting systemd-machine-id-commit.service... Jul 12 00:24:15.523537 systemd[1]: Starting systemd-sysext.service... Jul 12 00:24:15.527504 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1718 (bootctl) Jul 12 00:24:15.530101 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 12 00:24:15.559823 systemd[1]: Unmounting usr-share-oem.mount... Jul 12 00:24:15.570725 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 12 00:24:15.571290 systemd[1]: Unmounted usr-share-oem.mount. Jul 12 00:24:15.607171 kernel: loop0: detected capacity change from 0 to 203944 Jul 12 00:24:15.638675 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 12 00:24:15.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.709765 systemd-fsck[1730]: fsck.fat 4.2 (2021-01-31) Jul 12 00:24:15.709765 systemd-fsck[1730]: /dev/nvme0n1p1: 236 files, 117310/258078 clusters Jul 12 00:24:15.716495 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 12 00:24:15.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.724890 systemd[1]: Mounting boot.mount... Jul 12 00:24:15.735973 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:24:15.771445 systemd[1]: Mounted boot.mount. Jul 12 00:24:15.791964 kernel: loop1: detected capacity change from 0 to 203944 Jul 12 00:24:15.809505 systemd[1]: Finished systemd-boot-update.service. 
Jul 12 00:24:15.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.815608 (sd-sysext)[1750]: Using extensions 'kubernetes'. Jul 12 00:24:15.817061 (sd-sysext)[1750]: Merged extensions into '/usr'. Jul 12 00:24:15.864349 systemd[1]: Mounting usr-share-oem.mount... Jul 12 00:24:15.866700 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:24:15.872189 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:24:15.882523 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:24:15.892273 systemd[1]: Starting modprobe@loop.service... Jul 12 00:24:15.899012 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:24:15.899596 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:24:15.914967 systemd[1]: Mounted usr-share-oem.mount. Jul 12 00:24:15.920117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:24:15.920491 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:24:15.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.924126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:24:15.924486 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 12 00:24:15.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.928758 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:24:15.929516 systemd[1]: Finished modprobe@loop.service. Jul 12 00:24:15.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.937322 systemd[1]: Finished systemd-sysext.service. Jul 12 00:24:15.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.948210 systemd[1]: Starting ensure-sysext.service... Jul 12 00:24:15.952348 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:24:15.952466 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:24:15.955128 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 12 00:24:15.969527 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:24:15.970979 systemd[1]: Finished systemd-machine-id-commit.service. 
Jul 12 00:24:15.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:15.979232 systemd[1]: Reloading. Jul 12 00:24:15.996463 systemd-tmpfiles[1765]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 12 00:24:15.999170 systemd-tmpfiles[1765]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:24:16.004753 systemd-tmpfiles[1765]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:24:16.124796 /usr/lib/systemd/system-generators/torcx-generator[1786]: time="2025-07-12T00:24:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:24:16.127284 /usr/lib/systemd/system-generators/torcx-generator[1786]: time="2025-07-12T00:24:16Z" level=info msg="torcx already run" Jul 12 00:24:16.250140 systemd-networkd[1596]: eth0: Gained IPv6LL Jul 12 00:24:16.384154 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:24:16.384192 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:24:16.422656 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:24:16.596128 systemd[1]: Finished systemd-networkd-wait-online.service. 
Jul 12 00:24:16.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:16.603206 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 12 00:24:16.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:16.619274 systemd[1]: Starting audit-rules.service... Jul 12 00:24:16.625647 systemd[1]: Starting clean-ca-certificates.service... Jul 12 00:24:16.632820 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 12 00:24:16.643751 systemd[1]: Starting systemd-resolved.service... Jul 12 00:24:16.656509 systemd[1]: Starting systemd-timesyncd.service... Jul 12 00:24:16.663847 systemd[1]: Starting systemd-update-utmp.service... Jul 12 00:24:16.670135 systemd[1]: Finished clean-ca-certificates.service. Jul 12 00:24:16.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:16.686282 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:24:16.688874 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:24:16.696153 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:24:16.707072 systemd[1]: Starting modprobe@loop.service... Jul 12 00:24:16.711077 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 12 00:24:16.711422 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:24:16.711665 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:24:16.716330 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:24:16.716722 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:24:16.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.722000 audit[1862]: SYSTEM_BOOT pid=1862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.734682 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:24:16.739499 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:24:16.745125 systemd[1]: Starting modprobe@drm.service...
Jul 12 00:24:16.750643 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:24:16.750997 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:24:16.751321 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:24:16.753341 systemd[1]: Finished systemd-update-utmp.service.
Jul 12 00:24:16.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.759139 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:24:16.759515 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:24:16.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.769774 systemd[1]: Finished ensure-sysext.service.
Jul 12 00:24:16.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.787501 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:24:16.787875 systemd[1]: Finished modprobe@drm.service.
Jul 12 00:24:16.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.798236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:24:16.798621 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:24:16.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.803890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:24:16.804271 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:24:16.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.810798 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:24:16.810878 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:24:16.834607 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 12 00:24:16.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.846003 ldconfig[1717]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:24:16.859499 systemd[1]: Finished ldconfig.service.
Jul 12 00:24:16.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:16.869317 systemd[1]: Starting systemd-update-done.service...
Jul 12 00:24:16.892000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 12 00:24:16.892000 audit[1887]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe4cbb460 a2=420 a3=0 items=0 ppid=1850 pid=1887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:24:16.892000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 12 00:24:16.894061 augenrules[1887]: No rules
Jul 12 00:24:16.895740 systemd[1]: Finished audit-rules.service.
Jul 12 00:24:16.901299 systemd[1]: Finished systemd-update-done.service.
Jul 12 00:24:16.975380 systemd[1]: Started systemd-timesyncd.service.
Jul 12 00:24:16.980977 systemd[1]: Reached target time-set.target.
Jul 12 00:24:16.992704 systemd-resolved[1860]: Positive Trust Anchors:
Jul 12 00:24:16.992732 systemd-resolved[1860]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:24:16.992784 systemd-resolved[1860]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 12 00:24:17.051540 systemd-resolved[1860]: Defaulting to hostname 'linux'.
Jul 12 00:24:17.054711 systemd[1]: Started systemd-resolved.service.
Jul 12 00:24:17.057233 systemd[1]: Reached target network.target.
Jul 12 00:24:17.059529 systemd[1]: Reached target network-online.target.
Jul 12 00:24:17.062006 systemd[1]: Reached target nss-lookup.target.
Jul 12 00:24:17.064138 systemd[1]: Reached target sysinit.target.
Jul 12 00:24:17.066509 systemd[1]: Started motdgen.path.
Jul 12 00:24:17.068564 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 12 00:24:17.071884 systemd[1]: Started logrotate.timer.
Jul 12 00:24:17.074026 systemd[1]: Started mdadm.timer.
Jul 12 00:24:17.075845 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 12 00:24:17.078132 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 12 00:24:17.078183 systemd[1]: Reached target paths.target.
Jul 12 00:24:17.080190 systemd[1]: Reached target timers.target.
Jul 12 00:24:17.082707 systemd[1]: Listening on dbus.socket.
Jul 12 00:24:17.087073 systemd[1]: Starting docker.socket...
Jul 12 00:24:17.091761 systemd[1]: Listening on sshd.socket.
Jul 12 00:24:17.094436 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:24:17.095422 systemd[1]: Listening on docker.socket.
Jul 12 00:24:17.097871 systemd[1]: Reached target sockets.target.
Jul 12 00:24:17.100248 systemd[1]: Reached target basic.target.
Jul 12 00:24:17.102722 systemd[1]: System is tainted: cgroupsv1
Jul 12 00:24:17.103016 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 12 00:24:17.103200 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 12 00:24:17.105832 systemd[1]: Started amazon-ssm-agent.service.
Jul 12 00:24:17.112160 systemd[1]: Starting containerd.service...
Jul 12 00:24:17.116676 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Jul 12 00:24:17.122990 systemd[1]: Starting dbus.service...
Jul 12 00:24:17.128292 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 12 00:24:17.140529 systemd[1]: Starting extend-filesystems.service...
Jul 12 00:24:17.228806 jq[1903]: false
Jul 12 00:24:17.143774 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 12 00:24:17.072102 systemd-journald[1536]: Time jumped backwards, rotating.
Jul 12 00:24:16.974673 dbus-daemon[1902]: [system] SELinux support is enabled
Jul 12 00:24:17.151013 systemd[1]: Starting kubelet.service...
Jul 12 00:24:17.078332 extend-filesystems[1904]: Found loop1
Jul 12 00:24:17.078332 extend-filesystems[1904]: Found nvme0n1
Jul 12 00:24:17.078332 extend-filesystems[1904]: Found nvme0n1p1
Jul 12 00:24:17.078332 extend-filesystems[1904]: Found nvme0n1p2
Jul 12 00:24:17.078332 extend-filesystems[1904]: Found nvme0n1p3
Jul 12 00:24:17.078332 extend-filesystems[1904]: Found usr
Jul 12 00:24:17.078332 extend-filesystems[1904]: Found nvme0n1p4
Jul 12 00:24:17.078332 extend-filesystems[1904]: Found nvme0n1p6
Jul 12 00:24:17.078332 extend-filesystems[1904]: Found nvme0n1p7
Jul 12 00:24:17.078332 extend-filesystems[1904]: Found nvme0n1p9
Jul 12 00:24:17.078332 extend-filesystems[1904]: Checking size of /dev/nvme0n1p9
Jul 12 00:24:17.078332 extend-filesystems[1904]: Resized partition /dev/nvme0n1p9
Jul 12 00:24:17.232338 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 12 00:24:17.051351 dbus-daemon[1902]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1596 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 12 00:24:17.159752 systemd[1]: Starting motdgen.service...
Jul 12 00:24:17.238480 extend-filesystems[1958]: resize2fs 1.46.5 (30-Dec-2021)
Jul 12 00:24:17.257195 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 12 00:24:17.177727 systemd[1]: Started nvidia.service.
Jul 12 00:24:17.182860 systemd[1]: Starting prepare-helm.service...
Jul 12 00:24:17.187581 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 12 00:24:17.276019 extend-filesystems[1958]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 12 00:24:17.276019 extend-filesystems[1958]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 12 00:24:17.276019 extend-filesystems[1958]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 12 00:24:17.192625 systemd[1]: Starting sshd-keygen.service...
Jul 12 00:24:17.318907 extend-filesystems[1904]: Resized filesystem in /dev/nvme0n1p9
Jul 12 00:24:17.325049 amazon-ssm-agent[1898]: 2025/07/12 00:24:17 Failed to load instance info from vault. RegistrationKey does not exist.
Jul 12 00:24:17.205669 systemd[1]: Starting systemd-logind.service...
Jul 12 00:24:17.327013 update_engine[1915]: I0712 00:24:17.138873 1915 main.cc:92] Flatcar Update Engine starting
Jul 12 00:24:17.327013 update_engine[1915]: I0712 00:24:17.158795 1915 update_check_scheduler.cc:74] Next update check in 2m58s
Jul 12 00:24:17.213177 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:24:17.213306 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 12 00:24:17.216281 systemd[1]: Starting update-engine.service...
Jul 12 00:24:17.329997 tar[1922]: linux-arm64/helm
Jul 12 00:24:16.790727 systemd-timesyncd[1861]: Contacted time server 206.191.180.116:123 (0.flatcar.pool.ntp.org).
Jul 12 00:24:16.790913 systemd-timesyncd[1861]: Initial clock synchronization to Sat 2025-07-12 00:24:16.790287 UTC.
Jul 12 00:24:17.333593 jq[1919]: true
Jul 12 00:24:16.800358 systemd-resolved[1860]: Clock change detected. Flushing caches.
Jul 12 00:24:16.808376 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 12 00:24:17.335331 jq[1942]: true
Jul 12 00:24:16.829383 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 12 00:24:16.833346 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 12 00:24:17.336580 bash[1987]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:24:16.874638 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 12 00:24:16.875270 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 12 00:24:16.975009 systemd[1]: Started dbus.service.
Jul 12 00:24:16.982099 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 12 00:24:16.982143 systemd[1]: Reached target system-config.target.
Jul 12 00:24:16.988337 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 12 00:24:16.988377 systemd[1]: Reached target user-config.target.
Jul 12 00:24:17.038988 systemd[1]: motdgen.service: Deactivated successfully.
Jul 12 00:24:17.039482 systemd[1]: Finished motdgen.service.
Jul 12 00:24:17.057602 systemd[1]: Starting systemd-hostnamed.service...
Jul 12 00:24:17.158756 systemd[1]: Started update-engine.service.
Jul 12 00:24:17.191158 systemd[1]: Started locksmithd.service.
Jul 12 00:24:17.284357 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 12 00:24:17.284908 systemd[1]: Finished extend-filesystems.service.
Jul 12 00:24:17.307807 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 12 00:24:17.346378 amazon-ssm-agent[1898]: Initializing new seelog logger
Jul 12 00:24:17.346378 amazon-ssm-agent[1898]: New Seelog Logger Creation Complete
Jul 12 00:24:17.346378 amazon-ssm-agent[1898]: 2025/07/12 00:24:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 12 00:24:17.346378 amazon-ssm-agent[1898]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 12 00:24:17.346378 amazon-ssm-agent[1898]: 2025/07/12 00:24:17 processing appconfig overrides
Jul 12 00:24:17.391494 systemd[1]: nvidia.service: Deactivated successfully.
Jul 12 00:24:17.524421 systemd-logind[1914]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 12 00:24:17.524480 systemd-logind[1914]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jul 12 00:24:17.530375 systemd-logind[1914]: New seat seat0.
Jul 12 00:24:17.531136 env[1927]: time="2025-07-12T00:24:17.531064508Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 12 00:24:17.535982 systemd[1]: Started systemd-logind.service.
Jul 12 00:24:17.736022 env[1927]: time="2025-07-12T00:24:17.735925161Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 12 00:24:17.744942 env[1927]: time="2025-07-12T00:24:17.744843441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:24:17.756796 env[1927]: time="2025-07-12T00:24:17.756700630Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:24:17.756796 env[1927]: time="2025-07-12T00:24:17.756786202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:24:17.774044 env[1927]: time="2025-07-12T00:24:17.773944510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:24:17.774208 env[1927]: time="2025-07-12T00:24:17.774057850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 12 00:24:17.774208 env[1927]: time="2025-07-12T00:24:17.774096394Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 12 00:24:17.774208 env[1927]: time="2025-07-12T00:24:17.774145546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 12 00:24:17.774680 env[1927]: time="2025-07-12T00:24:17.774579970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:24:17.775759 env[1927]: time="2025-07-12T00:24:17.775703470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:24:17.776298 env[1927]: time="2025-07-12T00:24:17.776234218Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:24:17.776415 env[1927]: time="2025-07-12T00:24:17.776309602Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 12 00:24:17.776541 env[1927]: time="2025-07-12T00:24:17.776496646Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 12 00:24:17.776630 env[1927]: time="2025-07-12T00:24:17.776535802Z" level=info msg="metadata content store policy set" policy=shared
Jul 12 00:24:17.795406 env[1927]: time="2025-07-12T00:24:17.795321022Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 12 00:24:17.795561 env[1927]: time="2025-07-12T00:24:17.795427618Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 12 00:24:17.795561 env[1927]: time="2025-07-12T00:24:17.795484174Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 12 00:24:17.795716 env[1927]: time="2025-07-12T00:24:17.795561190Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 12 00:24:17.795716 env[1927]: time="2025-07-12T00:24:17.795652750Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 12 00:24:17.795818 env[1927]: time="2025-07-12T00:24:17.795728434Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 12 00:24:17.795818 env[1927]: time="2025-07-12T00:24:17.795760786Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 12 00:24:17.796389 env[1927]: time="2025-07-12T00:24:17.796334362Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 12 00:24:17.796483 env[1927]: time="2025-07-12T00:24:17.796395286Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 12 00:24:17.796483 env[1927]: time="2025-07-12T00:24:17.796429990Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 12 00:24:17.796483 env[1927]: time="2025-07-12T00:24:17.796461262Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 12 00:24:17.796624 env[1927]: time="2025-07-12T00:24:17.796492954Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 12 00:24:17.796832 env[1927]: time="2025-07-12T00:24:17.796752358Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 12 00:24:17.796985 env[1927]: time="2025-07-12T00:24:17.796943602Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 12 00:24:17.797643 env[1927]: time="2025-07-12T00:24:17.797593414Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 12 00:24:17.797757 env[1927]: time="2025-07-12T00:24:17.797710210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 12 00:24:17.797757 env[1927]: time="2025-07-12T00:24:17.797745634Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 12 00:24:17.798035 env[1927]: time="2025-07-12T00:24:17.797991622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 12 00:24:17.798127 env[1927]: time="2025-07-12T00:24:17.798040690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 12 00:24:17.798127 env[1927]: time="2025-07-12T00:24:17.798075142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 12 00:24:17.798127 env[1927]: time="2025-07-12T00:24:17.798103018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 12 00:24:17.798267 env[1927]: time="2025-07-12T00:24:17.798140614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 12 00:24:17.798267 env[1927]: time="2025-07-12T00:24:17.798171850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 12 00:24:17.798267 env[1927]: time="2025-07-12T00:24:17.798203626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 12 00:24:17.798267 env[1927]: time="2025-07-12T00:24:17.798232714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 12 00:24:17.798466 env[1927]: time="2025-07-12T00:24:17.798265486Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 12 00:24:17.798617 env[1927]: time="2025-07-12T00:24:17.798572746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 12 00:24:17.798715 env[1927]: time="2025-07-12T00:24:17.798621526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 12 00:24:17.798715 env[1927]: time="2025-07-12T00:24:17.798653386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 12 00:24:17.798715 env[1927]: time="2025-07-12T00:24:17.798704314Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 12 00:24:17.798873 env[1927]: time="2025-07-12T00:24:17.798737314Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 12 00:24:17.798873 env[1927]: time="2025-07-12T00:24:17.798767494Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 12 00:24:17.798873 env[1927]: time="2025-07-12T00:24:17.798802534Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 12 00:24:17.799040 env[1927]: time="2025-07-12T00:24:17.798867574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 12 00:24:17.799337 env[1927]: time="2025-07-12T00:24:17.799230934Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 12 00:24:17.801329 env[1927]: time="2025-07-12T00:24:17.799358638Z" level=info msg="Connect containerd service"
Jul 12 00:24:17.801329 env[1927]: time="2025-07-12T00:24:17.799424578Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 12 00:24:17.806303 dbus-daemon[1902]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 12 00:24:17.812500 env[1927]: time="2025-07-12T00:24:17.809456206Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:24:17.812500 env[1927]: time="2025-07-12T00:24:17.809847874Z" level=info msg="Start subscribing containerd event"
Jul 12 00:24:17.812500 env[1927]: time="2025-07-12T00:24:17.809936494Z" level=info msg="Start recovering state"
Jul 12 00:24:17.812500 env[1927]: time="2025-07-12T00:24:17.810073438Z" level=info msg="Start event monitor"
Jul 12 00:24:17.812500 env[1927]: time="2025-07-12T00:24:17.810433786Z" level=info msg="Start snapshots syncer"
Jul 12 00:24:17.812500 env[1927]: time="2025-07-12T00:24:17.810459478Z" level=info msg="Start cni network conf syncer for default"
Jul 12 00:24:17.812500 env[1927]: time="2025-07-12T00:24:17.810504862Z" level=info msg="Start streaming server"
Jul 12 00:24:17.806551 systemd[1]: Started systemd-hostnamed.service.
Jul 12 00:24:17.812027 dbus-daemon[1902]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1955 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 12 00:24:17.827344 env[1927]: time="2025-07-12T00:24:17.817918030Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 12 00:24:17.827344 env[1927]: time="2025-07-12T00:24:17.818447086Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 12 00:24:17.827344 env[1927]: time="2025-07-12T00:24:17.818576050Z" level=info msg="containerd successfully booted in 0.302681s"
Jul 12 00:24:17.817050 systemd[1]: Starting polkit.service...
Jul 12 00:24:17.820631 systemd[1]: Started containerd.service.
Jul 12 00:24:17.863358 polkitd[2027]: Started polkitd version 121
Jul 12 00:24:17.920295 polkitd[2027]: Loading rules from directory /etc/polkit-1/rules.d
Jul 12 00:24:17.920413 polkitd[2027]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 12 00:24:17.926148 polkitd[2027]: Finished loading, compiling and executing 2 rules
Jul 12 00:24:17.927218 dbus-daemon[1902]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 12 00:24:17.927463 systemd[1]: Started polkit.service.
Jul 12 00:24:17.928014 polkitd[2027]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 12 00:24:17.972941 systemd-hostnamed[1955]: Hostname set to (transient)
Jul 12 00:24:17.973118 systemd-resolved[1860]: System hostname changed to 'ip-172-31-23-9'.
Jul 12 00:24:18.058848 coreos-metadata[1900]: Jul 12 00:24:18.058 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 12 00:24:18.060854 coreos-metadata[1900]: Jul 12 00:24:18.060 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Jul 12 00:24:18.062801 coreos-metadata[1900]: Jul 12 00:24:18.062 INFO Fetch successful
Jul 12 00:24:18.062926 coreos-metadata[1900]: Jul 12 00:24:18.062 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 12 00:24:18.063766 coreos-metadata[1900]: Jul 12 00:24:18.063 INFO Fetch successful
Jul 12 00:24:18.074415 unknown[1900]: wrote ssh authorized keys file for user: core
Jul 12 00:24:18.110076 update-ssh-keys[2072]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:24:18.111111 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Jul 12 00:24:18.219319 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO Create new startup processor
Jul 12 00:24:18.219594 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [LongRunningPluginsManager] registered plugins: {}
Jul 12 00:24:18.219705 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO Initializing bookkeeping folders
Jul 12 00:24:18.219705 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO removing the completed state files
Jul 12 00:24:18.219705 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO Initializing bookkeeping folders for long running plugins
Jul 12 00:24:18.219914 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Jul 12 00:24:18.219914 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO Initializing healthcheck folders for long running plugins
Jul 12 00:24:18.219914 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO Initializing locations for inventory plugin
Jul 12 00:24:18.219914 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO Initializing default location for custom inventory
Jul 12 00:24:18.219914 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO Initializing default location for file inventory
Jul 12 00:24:18.219914 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO Initializing default location for role inventory
Jul 12 00:24:18.219914 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO Init the cloudwatchlogs publisher
Jul 12 00:24:18.219914 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [instanceID=i-03817c5f88d76d59c] Successfully loaded platform independent plugin aws:runPowerShellScript
Jul 12 00:24:18.220423 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [instanceID=i-03817c5f88d76d59c] Successfully loaded platform independent plugin aws:updateSsmAgent
Jul 12 00:24:18.220423 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [instanceID=i-03817c5f88d76d59c] Successfully loaded platform independent plugin aws:configureDocker
Jul 12 00:24:18.220423 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [instanceID=i-03817c5f88d76d59c] Successfully loaded platform independent plugin aws:refreshAssociation
Jul 12 00:24:18.220423 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [instanceID=i-03817c5f88d76d59c] Successfully loaded platform independent plugin aws:runDocument
Jul 12 00:24:18.220423 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [instanceID=i-03817c5f88d76d59c] Successfully loaded platform independent plugin aws:softwareInventory
Jul 12 00:24:18.220423 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [instanceID=i-03817c5f88d76d59c] Successfully loaded platform independent plugin aws:runDockerAction
Jul 12 00:24:18.220423 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [instanceID=i-03817c5f88d76d59c] Successfully loaded platform independent plugin aws:configurePackage
Jul 12 00:24:18.220423 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [instanceID=i-03817c5f88d76d59c] Successfully loaded platform independent plugin aws:downloadContent
Jul 12 00:24:18.220423 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [instanceID=i-03817c5f88d76d59c] Successfully loaded platform dependent plugin aws:runShellScript
Jul 12 00:24:18.220423 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Jul 12 00:24:18.220423 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO OS: linux, Arch: arm64
Jul 12 00:24:18.229865 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessageGatewayService] Starting session document processing engine...
Jul 12 00:24:18.229865 amazon-ssm-agent[1898]: datastore file /var/lib/amazon/ssm/i-03817c5f88d76d59c/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Jul 12 00:24:18.324044 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessageGatewayService] [EngineProcessor] Starting
Jul 12 00:24:18.419238 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Jul 12 00:24:18.513870 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-03817c5f88d76d59c, requestId: b4ead96f-fa2a-4409-a9dc-00a4612971c4
Jul 12 00:24:18.608710 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessagingDeliveryService] Starting document processing engine...
Jul 12 00:24:18.704216 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Jul 12 00:24:18.798809 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Jul 12 00:24:18.853682 tar[1922]: linux-arm64/LICENSE
Jul 12 00:24:18.853682 tar[1922]: linux-arm64/README.md
Jul 12 00:24:18.881068 systemd[1]: Finished prepare-helm.service.
Jul 12 00:24:18.895731 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessagingDeliveryService] Starting message polling
Jul 12 00:24:18.966239 locksmithd[1979]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 12 00:24:18.990244 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessagingDeliveryService] Starting send replies to MDS
Jul 12 00:24:19.085966 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [instanceID=i-03817c5f88d76d59c] Starting association polling
Jul 12 00:24:19.181926 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Jul 12 00:24:19.277797 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessagingDeliveryService] [Association] Launching response handler
Jul 12 00:24:19.373916 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Jul 12 00:24:19.470273 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Jul 12 00:24:19.566737 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Jul 12 00:24:19.663422 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessageGatewayService] listening reply.
Jul 12 00:24:19.760391 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [HealthCheck] HealthCheck reporting agent health.
Jul 12 00:24:19.857413 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [OfflineService] Starting document processing engine...
Jul 12 00:24:19.954716 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [OfflineService] [EngineProcessor] Starting
Jul 12 00:24:20.052249 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [OfflineService] [EngineProcessor] Initial processing
Jul 12 00:24:20.149875 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [OfflineService] Starting message polling
Jul 12 00:24:20.247749 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [OfflineService] Starting send replies to MDS
Jul 12 00:24:20.345914 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [LongRunningPluginsManager] starting long running plugin manager
Jul 12 00:24:20.444107 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Jul 12 00:24:20.542547 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [StartupProcessor] Executing startup processor tasks
Jul 12 00:24:20.641290 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Jul 12 00:24:20.721014 systemd[1]: Started kubelet.service.
Jul 12 00:24:20.740106 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Jul 12 00:24:20.839248 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7
Jul 12 00:24:20.938556 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Jul 12 00:24:21.037946 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-03817c5f88d76d59c?role=subscribe&stream=input
Jul 12 00:24:21.137620 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-03817c5f88d76d59c?role=subscribe&stream=input
Jul 12 00:24:21.191181 sshd_keygen[1943]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 12 00:24:21.230986 systemd[1]: Finished sshd-keygen.service.
Jul 12 00:24:21.237485 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessageGatewayService] Starting receiving message from control channel
Jul 12 00:24:21.243264 systemd[1]: Starting issuegen.service...
Jul 12 00:24:21.255744 systemd[1]: issuegen.service: Deactivated successfully.
Jul 12 00:24:21.256273 systemd[1]: Finished issuegen.service.
Jul 12 00:24:21.263447 systemd[1]: Starting systemd-user-sessions.service...
Jul 12 00:24:21.280074 systemd[1]: Finished systemd-user-sessions.service.
Jul 12 00:24:21.286754 systemd[1]: Started getty@tty1.service.
Jul 12 00:24:21.293345 systemd[1]: Started serial-getty@ttyS0.service.
Jul 12 00:24:21.298005 systemd[1]: Reached target getty.target.
Jul 12 00:24:21.300841 systemd[1]: Reached target multi-user.target.
Jul 12 00:24:21.306338 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 12 00:24:21.324943 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 12 00:24:21.325609 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 12 00:24:21.333122 systemd[1]: Startup finished in 11.123s (kernel) + 13.281s (userspace) = 24.404s.
Jul 12 00:24:21.338465 amazon-ssm-agent[1898]: 2025-07-12 00:24:18 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Jul 12 00:24:21.438745 amazon-ssm-agent[1898]: 2025-07-12 00:24:19 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Jul 12 00:24:21.738704 kubelet[2135]: E0712 00:24:21.738596 2135 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:24:21.742336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:24:21.742742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:24:25.702311 systemd[1]: Created slice system-sshd.slice.
Jul 12 00:24:25.704632 systemd[1]: Started sshd@0-172.31.23.9:22-147.75.109.163:33546.service.
Jul 12 00:24:25.895412 sshd[2159]: Accepted publickey for core from 147.75.109.163 port 33546 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:24:25.899679 sshd[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:24:25.921238 systemd[1]: Created slice user-500.slice.
Jul 12 00:24:25.923411 systemd[1]: Starting user-runtime-dir@500.service...
Jul 12 00:24:25.928949 systemd-logind[1914]: New session 1 of user core.
Jul 12 00:24:25.943330 systemd[1]: Finished user-runtime-dir@500.service.
Jul 12 00:24:25.946084 systemd[1]: Starting user@500.service...
Jul 12 00:24:25.960178 (systemd)[2164]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:24:26.139969 systemd[2164]: Queued start job for default target default.target.
Jul 12 00:24:26.141312 systemd[2164]: Reached target paths.target.
Jul 12 00:24:26.141370 systemd[2164]: Reached target sockets.target.
Jul 12 00:24:26.141403 systemd[2164]: Reached target timers.target.
Jul 12 00:24:26.141433 systemd[2164]: Reached target basic.target.
Jul 12 00:24:26.141632 systemd[1]: Started user@500.service.
Jul 12 00:24:26.142774 systemd[2164]: Reached target default.target.
Jul 12 00:24:26.143007 systemd[2164]: Startup finished in 170ms.
Jul 12 00:24:26.143351 systemd[1]: Started session-1.scope.
Jul 12 00:24:26.290125 systemd[1]: Started sshd@1-172.31.23.9:22-147.75.109.163:49952.service.
Jul 12 00:24:26.464474 sshd[2173]: Accepted publickey for core from 147.75.109.163 port 49952 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:24:26.467592 sshd[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:24:26.476689 systemd[1]: Started session-2.scope.
Jul 12 00:24:26.478832 systemd-logind[1914]: New session 2 of user core.
Jul 12 00:24:26.608851 sshd[2173]: pam_unix(sshd:session): session closed for user core
Jul 12 00:24:26.614211 systemd-logind[1914]: Session 2 logged out. Waiting for processes to exit.
Jul 12 00:24:26.615076 systemd[1]: sshd@1-172.31.23.9:22-147.75.109.163:49952.service: Deactivated successfully.
Jul 12 00:24:26.616466 systemd[1]: session-2.scope: Deactivated successfully.
Jul 12 00:24:26.618643 systemd-logind[1914]: Removed session 2.
Jul 12 00:24:26.635542 systemd[1]: Started sshd@2-172.31.23.9:22-147.75.109.163:49954.service.
Jul 12 00:24:26.812156 sshd[2180]: Accepted publickey for core from 147.75.109.163 port 49954 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:24:26.815154 sshd[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:24:26.823783 systemd[1]: Started session-3.scope.
Jul 12 00:24:26.826492 systemd-logind[1914]: New session 3 of user core.
Jul 12 00:24:26.950347 sshd[2180]: pam_unix(sshd:session): session closed for user core
Jul 12 00:24:26.956140 systemd-logind[1914]: Session 3 logged out. Waiting for processes to exit.
Jul 12 00:24:26.956435 systemd[1]: sshd@2-172.31.23.9:22-147.75.109.163:49954.service: Deactivated successfully.
Jul 12 00:24:26.957879 systemd[1]: session-3.scope: Deactivated successfully.
Jul 12 00:24:26.958747 systemd-logind[1914]: Removed session 3.
Jul 12 00:24:26.975413 systemd[1]: Started sshd@3-172.31.23.9:22-147.75.109.163:49962.service.
Jul 12 00:24:27.150217 sshd[2187]: Accepted publickey for core from 147.75.109.163 port 49962 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:24:27.152704 sshd[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:24:27.161841 systemd[1]: Started session-4.scope.
Jul 12 00:24:27.163772 systemd-logind[1914]: New session 4 of user core.
Jul 12 00:24:27.295838 sshd[2187]: pam_unix(sshd:session): session closed for user core
Jul 12 00:24:27.301290 systemd-logind[1914]: Session 4 logged out. Waiting for processes to exit.
Jul 12 00:24:27.301609 systemd[1]: sshd@3-172.31.23.9:22-147.75.109.163:49962.service: Deactivated successfully.
Jul 12 00:24:27.303088 systemd[1]: session-4.scope: Deactivated successfully.
Jul 12 00:24:27.304066 systemd-logind[1914]: Removed session 4.
Jul 12 00:24:27.320775 systemd[1]: Started sshd@4-172.31.23.9:22-147.75.109.163:49970.service.
Jul 12 00:24:27.492522 sshd[2194]: Accepted publickey for core from 147.75.109.163 port 49970 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA
Jul 12 00:24:27.495559 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:24:27.503184 systemd-logind[1914]: New session 5 of user core.
Jul 12 00:24:27.504120 systemd[1]: Started session-5.scope.
Jul 12 00:24:27.628182 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 12 00:24:27.629912 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 12 00:24:27.713544 systemd[1]: Starting docker.service...
Jul 12 00:24:27.834293 env[2208]: time="2025-07-12T00:24:27.834204992Z" level=info msg="Starting up"
Jul 12 00:24:27.837315 env[2208]: time="2025-07-12T00:24:27.837271628Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 12 00:24:27.837497 env[2208]: time="2025-07-12T00:24:27.837468956Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 12 00:24:27.837622 env[2208]: time="2025-07-12T00:24:27.837589556Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 12 00:24:27.837764 env[2208]: time="2025-07-12T00:24:27.837735944Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 12 00:24:27.841346 env[2208]: time="2025-07-12T00:24:27.841302428Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 12 00:24:27.841550 env[2208]: time="2025-07-12T00:24:27.841506536Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 12 00:24:27.841707 env[2208]: time="2025-07-12T00:24:27.841641668Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 12 00:24:27.841841 env[2208]: time="2025-07-12T00:24:27.841812632Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 12 00:24:28.096486 env[2208]: time="2025-07-12T00:24:28.096343577Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jul 12 00:24:28.096486 env[2208]: time="2025-07-12T00:24:28.096390917Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jul 12 00:24:28.096785 env[2208]: time="2025-07-12T00:24:28.096647165Z" level=info msg="Loading containers: start."
Jul 12 00:24:28.295701 kernel: Initializing XFRM netlink socket
Jul 12 00:24:28.343249 env[2208]: time="2025-07-12T00:24:28.343175046Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 12 00:24:28.345907 (udev-worker)[2218]: Network interface NamePolicy= disabled on kernel command line.
Jul 12 00:24:28.441339 systemd-networkd[1596]: docker0: Link UP
Jul 12 00:24:28.471707 env[2208]: time="2025-07-12T00:24:28.471623683Z" level=info msg="Loading containers: done."
Jul 12 00:24:28.505008 env[2208]: time="2025-07-12T00:24:28.504931843Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 12 00:24:28.505297 env[2208]: time="2025-07-12T00:24:28.505246123Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 12 00:24:28.505481 env[2208]: time="2025-07-12T00:24:28.505436515Z" level=info msg="Daemon has completed initialization"
Jul 12 00:24:28.529793 systemd[1]: Started docker.service.
Jul 12 00:24:28.546156 env[2208]: time="2025-07-12T00:24:28.546074047Z" level=info msg="API listen on /run/docker.sock"
Jul 12 00:24:29.700554 env[1927]: time="2025-07-12T00:24:29.700471485Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 12 00:24:30.264561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599274889.mount: Deactivated successfully.
Jul 12 00:24:31.946940 env[1927]: time="2025-07-12T00:24:31.946867728Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:31.949500 env[1927]: time="2025-07-12T00:24:31.949430556Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:31.952952 env[1927]: time="2025-07-12T00:24:31.952890876Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:31.956377 env[1927]: time="2025-07-12T00:24:31.956321628Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:31.958034 env[1927]: time="2025-07-12T00:24:31.957972588Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 12 00:24:31.960410 env[1927]: time="2025-07-12T00:24:31.960354852Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 12 00:24:31.994300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:24:31.994627 systemd[1]: Stopped kubelet.service.
Jul 12 00:24:31.998281 systemd[1]: Starting kubelet.service...
Jul 12 00:24:32.353855 systemd[1]: Started kubelet.service.
Jul 12 00:24:32.438057 kubelet[2336]: E0712 00:24:32.437961 2336 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:24:32.445928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:24:32.447762 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:24:33.834090 env[1927]: time="2025-07-12T00:24:33.834019801Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:33.836918 env[1927]: time="2025-07-12T00:24:33.836866129Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:33.840255 env[1927]: time="2025-07-12T00:24:33.840189385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:33.845736 env[1927]: time="2025-07-12T00:24:33.844559665Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:33.846315 env[1927]: time="2025-07-12T00:24:33.846257065Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 12 00:24:33.847089 env[1927]: time="2025-07-12T00:24:33.847036393Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 12 00:24:35.367635 env[1927]: time="2025-07-12T00:24:35.367551289Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:35.370973 env[1927]: time="2025-07-12T00:24:35.370902349Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:35.376070 env[1927]: time="2025-07-12T00:24:35.376005001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:35.381268 env[1927]: time="2025-07-12T00:24:35.381180493Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:35.385370 env[1927]: time="2025-07-12T00:24:35.383376109Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 12 00:24:35.386345 env[1927]: time="2025-07-12T00:24:35.386293225Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 12 00:24:36.756162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210192142.mount: Deactivated successfully.
Jul 12 00:24:37.681518 env[1927]: time="2025-07-12T00:24:37.681430240Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:37.690584 env[1927]: time="2025-07-12T00:24:37.690526529Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:37.694994 env[1927]: time="2025-07-12T00:24:37.694942289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:37.699962 env[1927]: time="2025-07-12T00:24:37.699907361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:37.701902 env[1927]: time="2025-07-12T00:24:37.701374133Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 12 00:24:37.702643 env[1927]: time="2025-07-12T00:24:37.702596405Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 12 00:24:38.313343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873752321.mount: Deactivated successfully.
Jul 12 00:24:40.212613 env[1927]: time="2025-07-12T00:24:40.212521073Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:40.280219 env[1927]: time="2025-07-12T00:24:40.280111301Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:40.317867 env[1927]: time="2025-07-12T00:24:40.317780550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:40.394530 env[1927]: time="2025-07-12T00:24:40.393439566Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:40.397033 env[1927]: time="2025-07-12T00:24:40.396931506Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 12 00:24:40.399897 env[1927]: time="2025-07-12T00:24:40.399813138Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 12 00:24:41.197597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894393192.mount: Deactivated successfully.
Jul 12 00:24:41.216516 env[1927]: time="2025-07-12T00:24:41.216452790Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:41.223613 env[1927]: time="2025-07-12T00:24:41.223551930Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:41.228213 env[1927]: time="2025-07-12T00:24:41.228150714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:41.231951 env[1927]: time="2025-07-12T00:24:41.231866910Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:41.234877 env[1927]: time="2025-07-12T00:24:41.233531742Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 12 00:24:41.235989 env[1927]: time="2025-07-12T00:24:41.235855482Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 12 00:24:41.759885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3082374766.mount: Deactivated successfully.
Jul 12 00:24:42.636852 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 12 00:24:42.637183 systemd[1]: Stopped kubelet.service.
Jul 12 00:24:42.640022 systemd[1]: Starting kubelet.service...
Jul 12 00:24:42.963963 systemd[1]: Started kubelet.service.
Jul 12 00:24:43.062295 kubelet[2351]: E0712 00:24:43.062212 2351 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:24:43.065847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:24:43.066241 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:24:44.749774 env[1927]: time="2025-07-12T00:24:44.749690136Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:44.754515 env[1927]: time="2025-07-12T00:24:44.754439964Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:44.758905 env[1927]: time="2025-07-12T00:24:44.758839620Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:44.762909 env[1927]: time="2025-07-12T00:24:44.762854484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:24:44.764687 env[1927]: time="2025-07-12T00:24:44.764607288Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 12 00:24:47.981234 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 12 00:24:49.201232 amazon-ssm-agent[1898]: 2025-07-12 00:24:49 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Jul 12 00:24:53.136910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 12 00:24:53.137269 systemd[1]: Stopped kubelet.service.
Jul 12 00:24:53.140811 systemd[1]: Starting kubelet.service...
Jul 12 00:24:53.483358 systemd[1]: Started kubelet.service.
Jul 12 00:24:53.589434 kubelet[2387]: E0712 00:24:53.589376 2387 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:24:53.593304 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:24:53.593874 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:24:56.301317 systemd[1]: Stopped kubelet.service.
Jul 12 00:24:56.306266 systemd[1]: Starting kubelet.service...
Jul 12 00:24:56.366048 systemd[1]: Reloading.
Jul 12 00:24:56.535213 /usr/lib/systemd/system-generators/torcx-generator[2420]: time="2025-07-12T00:24:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 12 00:24:56.535286 /usr/lib/systemd/system-generators/torcx-generator[2420]: time="2025-07-12T00:24:56Z" level=info msg="torcx already run"
Jul 12 00:24:56.752269 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 12 00:24:56.752306 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:24:56.791929 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:24:57.056338 systemd[1]: Stopping kubelet.service... Jul 12 00:24:57.058101 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:24:57.058989 systemd[1]: Stopped kubelet.service. Jul 12 00:24:57.068112 systemd[1]: Starting kubelet.service... Jul 12 00:24:57.473507 systemd[1]: Started kubelet.service. Jul 12 00:24:57.607039 kubelet[2497]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:24:57.607631 kubelet[2497]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:24:57.607768 kubelet[2497]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
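Annotator note: the deprecation warnings above say `--container-runtime-endpoint` and `--volume-plugin-dir` should move into the file named by `--config`. A hedged sketch of the corresponding `KubeletConfiguration` fragment, emitted as JSON (a valid YAML subset); the field names follow the `kubelet.config.k8s.io/v1beta1` schema, and the endpoint value is an illustrative placeholder rather than something read from this host:

```python
import json

# Minimal KubeletConfiguration carrying the two deprecated flags from the log.
config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}
print(json.dumps(config, indent=2))
```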
Jul 12 00:24:57.608029 kubelet[2497]: I0712 00:24:57.607975 2497 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:24:59.028003 kubelet[2497]: I0712 00:24:59.027930 2497 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:24:59.028003 kubelet[2497]: I0712 00:24:59.027988 2497 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:24:59.028724 kubelet[2497]: I0712 00:24:59.028448 2497 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:24:59.122524 kubelet[2497]: E0712 00:24:59.122475 2497 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.9:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:24:59.128873 kubelet[2497]: I0712 00:24:59.128825 2497 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:24:59.142363 kubelet[2497]: E0712 00:24:59.142313 2497 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:24:59.142363 kubelet[2497]: I0712 00:24:59.142365 2497 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:24:59.152020 kubelet[2497]: I0712 00:24:59.151974 2497 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:24:59.153314 kubelet[2497]: I0712 00:24:59.153281 2497 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:24:59.153769 kubelet[2497]: I0712 00:24:59.153720 2497 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:24:59.154159 kubelet[2497]: I0712 00:24:59.153895 2497 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":1} Jul 12 00:24:59.154513 kubelet[2497]: I0712 00:24:59.154489 2497 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:24:59.154624 kubelet[2497]: I0712 00:24:59.154604 2497 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:24:59.155039 kubelet[2497]: I0712 00:24:59.155016 2497 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:24:59.163446 kubelet[2497]: I0712 00:24:59.163414 2497 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:24:59.163643 kubelet[2497]: I0712 00:24:59.163622 2497 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:24:59.163828 kubelet[2497]: I0712 00:24:59.163783 2497 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:24:59.163963 kubelet[2497]: I0712 00:24:59.163942 2497 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:24:59.185352 kubelet[2497]: W0712 00:24:59.185094 2497 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-9&limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jul 12 00:24:59.185525 kubelet[2497]: E0712 00:24:59.185374 2497 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-9&limit=500&resourceVersion=0\": dial tcp 172.31.23.9:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:24:59.185871 kubelet[2497]: I0712 00:24:59.185831 2497 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:24:59.187205 kubelet[2497]: I0712 00:24:59.187146 2497 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static 
kubelet mode" Jul 12 00:24:59.187409 kubelet[2497]: W0712 00:24:59.187370 2497 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:24:59.190027 kubelet[2497]: I0712 00:24:59.189981 2497 server.go:1274] "Started kubelet" Jul 12 00:24:59.195531 kubelet[2497]: W0712 00:24:59.195456 2497 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jul 12 00:24:59.195806 kubelet[2497]: E0712 00:24:59.195772 2497 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.9:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:24:59.196268 kubelet[2497]: I0712 00:24:59.196181 2497 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:24:59.196421 kubelet[2497]: I0712 00:24:59.196228 2497 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:24:59.196842 kubelet[2497]: I0712 00:24:59.196799 2497 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:24:59.204922 kubelet[2497]: E0712 00:24:59.197065 2497 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.9:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.9:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-9.18515948ca460ca0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-9,UID:ip-172-31-23-9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-9,},FirstTimestamp:2025-07-12 00:24:59.18993936 +0000 UTC m=+1.703783779,LastTimestamp:2025-07-12 00:24:59.18993936 +0000 UTC m=+1.703783779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-9,}" Jul 12 00:24:59.209857 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 12 00:24:59.210003 kubelet[2497]: I0712 00:24:59.206808 2497 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:24:59.210329 kubelet[2497]: I0712 00:24:59.210300 2497 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:24:59.212899 kubelet[2497]: I0712 00:24:59.212586 2497 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:24:59.213186 kubelet[2497]: I0712 00:24:59.213151 2497 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:24:59.213701 kubelet[2497]: E0712 00:24:59.213640 2497 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-9\" not found" Jul 12 00:24:59.214169 kubelet[2497]: I0712 00:24:59.214136 2497 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:24:59.214283 kubelet[2497]: I0712 00:24:59.214258 2497 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:24:59.216234 kubelet[2497]: W0712 00:24:59.215523 2497 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jul 12 
00:24:59.216234 kubelet[2497]: E0712 00:24:59.215638 2497 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.9:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:24:59.216234 kubelet[2497]: E0712 00:24:59.215830 2497 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-9?timeout=10s\": dial tcp 172.31.23.9:6443: connect: connection refused" interval="200ms" Jul 12 00:24:59.219599 kubelet[2497]: E0712 00:24:59.219562 2497 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:24:59.219969 kubelet[2497]: I0712 00:24:59.219922 2497 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:24:59.220140 kubelet[2497]: I0712 00:24:59.220098 2497 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:24:59.223433 kubelet[2497]: I0712 00:24:59.223388 2497 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:24:59.292028 kubelet[2497]: I0712 00:24:59.291846 2497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:24:59.294234 kubelet[2497]: I0712 00:24:59.294191 2497 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:24:59.294447 kubelet[2497]: I0712 00:24:59.294418 2497 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:24:59.294583 kubelet[2497]: I0712 00:24:59.294561 2497 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:24:59.294828 kubelet[2497]: E0712 00:24:59.294796 2497 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:24:59.296432 kubelet[2497]: I0712 00:24:59.295195 2497 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:24:59.296592 kubelet[2497]: I0712 00:24:59.296446 2497 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:24:59.296592 kubelet[2497]: I0712 00:24:59.296485 2497 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:24:59.299234 kubelet[2497]: W0712 00:24:59.299097 2497 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jul 12 00:24:59.300998 kubelet[2497]: E0712 00:24:59.299249 2497 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.9:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:24:59.302034 kubelet[2497]: I0712 00:24:59.301983 2497 policy_none.go:49] "None policy: Start" Jul 12 00:24:59.303913 kubelet[2497]: I0712 00:24:59.303880 2497 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:24:59.304114 kubelet[2497]: I0712 00:24:59.304092 2497 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:24:59.314063 kubelet[2497]: E0712 00:24:59.314002 2497 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-9\" not found" Jul 12 00:24:59.318531 kubelet[2497]: I0712 00:24:59.318490 2497 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:24:59.319008 kubelet[2497]: I0712 00:24:59.318982 2497 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:24:59.319174 kubelet[2497]: I0712 00:24:59.319119 2497 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:24:59.319893 kubelet[2497]: I0712 00:24:59.319864 2497 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:24:59.324087 kubelet[2497]: E0712 00:24:59.324051 2497 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-9\" not found" Jul 12 00:24:59.421471 kubelet[2497]: E0712 00:24:59.421399 2497 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-9?timeout=10s\": dial tcp 172.31.23.9:6443: connect: connection refused" interval="400ms" Jul 12 00:24:59.422478 kubelet[2497]: I0712 00:24:59.422374 2497 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-9" Jul 12 00:24:59.423685 kubelet[2497]: E0712 00:24:59.423616 2497 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.9:6443/api/v1/nodes\": dial tcp 172.31.23.9:6443: connect: connection refused" node="ip-172-31-23-9" Jul 12 00:24:59.515377 kubelet[2497]: I0712 00:24:59.515334 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b31af3f526bdf7dac3ef2bd0bf4e1aa2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: 
\"b31af3f526bdf7dac3ef2bd0bf4e1aa2\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jul 12 00:24:59.515624 kubelet[2497]: I0712 00:24:59.515580 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b31af3f526bdf7dac3ef2bd0bf4e1aa2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"b31af3f526bdf7dac3ef2bd0bf4e1aa2\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jul 12 00:24:59.515859 kubelet[2497]: I0712 00:24:59.515820 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94a7b290732a1e688d9634fc33742742-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-9\" (UID: \"94a7b290732a1e688d9634fc33742742\") " pod="kube-system/kube-scheduler-ip-172-31-23-9" Jul 12 00:24:59.516023 kubelet[2497]: I0712 00:24:59.515986 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47e2945d84cc314f1e30501913692819-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-9\" (UID: \"47e2945d84cc314f1e30501913692819\") " pod="kube-system/kube-apiserver-ip-172-31-23-9" Jul 12 00:24:59.516185 kubelet[2497]: I0712 00:24:59.516148 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b31af3f526bdf7dac3ef2bd0bf4e1aa2-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"b31af3f526bdf7dac3ef2bd0bf4e1aa2\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jul 12 00:24:59.516346 kubelet[2497]: I0712 00:24:59.516309 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b31af3f526bdf7dac3ef2bd0bf4e1aa2-k8s-certs\") pod 
\"kube-controller-manager-ip-172-31-23-9\" (UID: \"b31af3f526bdf7dac3ef2bd0bf4e1aa2\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jul 12 00:24:59.516522 kubelet[2497]: I0712 00:24:59.516483 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47e2945d84cc314f1e30501913692819-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-9\" (UID: \"47e2945d84cc314f1e30501913692819\") " pod="kube-system/kube-apiserver-ip-172-31-23-9" Jul 12 00:24:59.516711 kubelet[2497]: I0712 00:24:59.516686 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b31af3f526bdf7dac3ef2bd0bf4e1aa2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"b31af3f526bdf7dac3ef2bd0bf4e1aa2\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jul 12 00:24:59.516894 kubelet[2497]: I0712 00:24:59.516868 2497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47e2945d84cc314f1e30501913692819-ca-certs\") pod \"kube-apiserver-ip-172-31-23-9\" (UID: \"47e2945d84cc314f1e30501913692819\") " pod="kube-system/kube-apiserver-ip-172-31-23-9" Jul 12 00:24:59.626125 kubelet[2497]: I0712 00:24:59.626004 2497 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-9" Jul 12 00:24:59.627943 kubelet[2497]: E0712 00:24:59.627872 2497 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.9:6443/api/v1/nodes\": dial tcp 172.31.23.9:6443: connect: connection refused" node="ip-172-31-23-9" Jul 12 00:24:59.712584 env[1927]: time="2025-07-12T00:24:59.712522944Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-9,Uid:b31af3f526bdf7dac3ef2bd0bf4e1aa2,Namespace:kube-system,Attempt:0,}" Jul 12 00:24:59.717086 env[1927]: time="2025-07-12T00:24:59.716421937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-9,Uid:94a7b290732a1e688d9634fc33742742,Namespace:kube-system,Attempt:0,}" Jul 12 00:24:59.721706 env[1927]: time="2025-07-12T00:24:59.721468571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-9,Uid:47e2945d84cc314f1e30501913692819,Namespace:kube-system,Attempt:0,}" Jul 12 00:24:59.823246 kubelet[2497]: E0712 00:24:59.823180 2497 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-9?timeout=10s\": dial tcp 172.31.23.9:6443: connect: connection refused" interval="800ms" Jul 12 00:25:00.030578 kubelet[2497]: I0712 00:25:00.030506 2497 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-9" Jul 12 00:25:00.031273 kubelet[2497]: E0712 00:25:00.031101 2497 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.9:6443/api/v1/nodes\": dial tcp 172.31.23.9:6443: connect: connection refused" node="ip-172-31-23-9" Jul 12 00:25:00.058970 kubelet[2497]: W0712 00:25:00.058874 2497 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-9&limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jul 12 00:25:00.059140 kubelet[2497]: E0712 00:25:00.058976 2497 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-9&limit=500&resourceVersion=0\": dial tcp 
172.31.23.9:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:25:00.224051 kubelet[2497]: W0712 00:25:00.223950 2497 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jul 12 00:25:00.224051 kubelet[2497]: E0712 00:25:00.224030 2497 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.9:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:25:00.244773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3199718778.mount: Deactivated successfully. Jul 12 00:25:00.258090 env[1927]: time="2025-07-12T00:25:00.258012348Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:00.266995 env[1927]: time="2025-07-12T00:25:00.266942563Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:00.269966 env[1927]: time="2025-07-12T00:25:00.269894064Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:00.271897 env[1927]: time="2025-07-12T00:25:00.271831005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:00.278130 env[1927]: time="2025-07-12T00:25:00.278078549Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:00.280200 env[1927]: time="2025-07-12T00:25:00.280153403Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:00.282736 env[1927]: time="2025-07-12T00:25:00.281840319Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:00.284506 env[1927]: time="2025-07-12T00:25:00.284456769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:00.290018 env[1927]: time="2025-07-12T00:25:00.289940402Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:00.291595 env[1927]: time="2025-07-12T00:25:00.291521376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:00.293082 env[1927]: time="2025-07-12T00:25:00.293025748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:00.299906 env[1927]: time="2025-07-12T00:25:00.299850866Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:00.390794 env[1927]: time="2025-07-12T00:25:00.390229688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:25:00.390794 env[1927]: time="2025-07-12T00:25:00.390305557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:25:00.390794 env[1927]: time="2025-07-12T00:25:00.390330663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:25:00.391545 env[1927]: time="2025-07-12T00:25:00.391443245Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa5bcb477d76b886a5b186f8c3d9a0bf846c2a481bf68420c60902a79184dabf pid=2544 runtime=io.containerd.runc.v2 Jul 12 00:25:00.396386 env[1927]: time="2025-07-12T00:25:00.396217967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:25:00.396721 env[1927]: time="2025-07-12T00:25:00.396319782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:25:00.396966 env[1927]: time="2025-07-12T00:25:00.396877819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:25:00.397568 env[1927]: time="2025-07-12T00:25:00.397452621Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e345e4f3ac8dc16a4c4618c5734cbb281df7ad97c99375d00f0c30ab3f68842b pid=2559 runtime=io.containerd.runc.v2 Jul 12 00:25:00.404680 env[1927]: time="2025-07-12T00:25:00.404524056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:25:00.404856 env[1927]: time="2025-07-12T00:25:00.404704968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:25:00.404856 env[1927]: time="2025-07-12T00:25:00.404782925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:25:00.405691 env[1927]: time="2025-07-12T00:25:00.405562521Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4575fdb1fb3755a3344abb225313424572693c13bbb2c2f36d174f1cec610d28 pid=2558 runtime=io.containerd.runc.v2 Jul 12 00:25:00.409690 kubelet[2497]: W0712 00:25:00.409514 2497 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jul 12 00:25:00.409690 kubelet[2497]: E0712 00:25:00.409611 2497 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.9:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:25:00.499699 kubelet[2497]: W0712 00:25:00.499508 2497 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.9:6443: connect: connection refused Jul 12 00:25:00.499699 kubelet[2497]: E0712 00:25:00.499606 2497 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.9:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:25:00.597185 env[1927]: time="2025-07-12T00:25:00.597031442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-9,Uid:94a7b290732a1e688d9634fc33742742,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa5bcb477d76b886a5b186f8c3d9a0bf846c2a481bf68420c60902a79184dabf\"" Jul 12 00:25:00.599470 env[1927]: time="2025-07-12T00:25:00.599350285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-9,Uid:47e2945d84cc314f1e30501913692819,Namespace:kube-system,Attempt:0,} returns sandbox id \"e345e4f3ac8dc16a4c4618c5734cbb281df7ad97c99375d00f0c30ab3f68842b\"" Jul 12 00:25:00.606568 env[1927]: time="2025-07-12T00:25:00.606499557Z" level=info msg="CreateContainer within sandbox \"aa5bcb477d76b886a5b186f8c3d9a0bf846c2a481bf68420c60902a79184dabf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:25:00.613118 env[1927]: time="2025-07-12T00:25:00.613057717Z" level=info msg="CreateContainer within sandbox \"e345e4f3ac8dc16a4c4618c5734cbb281df7ad97c99375d00f0c30ab3f68842b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:25:00.624897 kubelet[2497]: E0712 00:25:00.624735 2497 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.23.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-9?timeout=10s\": dial tcp 172.31.23.9:6443: connect: connection refused" interval="1.6s" Jul 12 00:25:00.631563 env[1927]: time="2025-07-12T00:25:00.631506654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-9,Uid:b31af3f526bdf7dac3ef2bd0bf4e1aa2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4575fdb1fb3755a3344abb225313424572693c13bbb2c2f36d174f1cec610d28\"" Jul 12 00:25:00.638738 env[1927]: time="2025-07-12T00:25:00.638642977Z" level=info msg="CreateContainer within sandbox \"4575fdb1fb3755a3344abb225313424572693c13bbb2c2f36d174f1cec610d28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:25:00.645703 env[1927]: time="2025-07-12T00:25:00.645617817Z" level=info msg="CreateContainer within sandbox \"aa5bcb477d76b886a5b186f8c3d9a0bf846c2a481bf68420c60902a79184dabf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9935c1c28016a07006d25ef29202970c08ffcc8685cd3e16b1868c0e6d4f373b\"" Jul 12 00:25:00.647085 env[1927]: time="2025-07-12T00:25:00.647038196Z" level=info msg="StartContainer for \"9935c1c28016a07006d25ef29202970c08ffcc8685cd3e16b1868c0e6d4f373b\"" Jul 12 00:25:00.655442 env[1927]: time="2025-07-12T00:25:00.655369451Z" level=info msg="CreateContainer within sandbox \"e345e4f3ac8dc16a4c4618c5734cbb281df7ad97c99375d00f0c30ab3f68842b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0f258578424f0441eea240bb1693eb58e6e5b4d24ef40b5b95f70a5cf626f455\"" Jul 12 00:25:00.656415 env[1927]: time="2025-07-12T00:25:00.656309533Z" level=info msg="StartContainer for \"0f258578424f0441eea240bb1693eb58e6e5b4d24ef40b5b95f70a5cf626f455\"" Jul 12 00:25:00.680484 env[1927]: time="2025-07-12T00:25:00.680360611Z" level=info msg="CreateContainer within sandbox \"4575fdb1fb3755a3344abb225313424572693c13bbb2c2f36d174f1cec610d28\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f206fd67bd867e6fdaa3622c4f689911be503dc5f0ac3f445807535dcab39d58\"" Jul 12 00:25:00.681418 env[1927]: time="2025-07-12T00:25:00.681370178Z" level=info msg="StartContainer for \"f206fd67bd867e6fdaa3622c4f689911be503dc5f0ac3f445807535dcab39d58\"" Jul 12 00:25:00.840067 kubelet[2497]: I0712 00:25:00.839971 2497 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-9" Jul 12 00:25:00.840615 kubelet[2497]: E0712 00:25:00.840533 2497 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.9:6443/api/v1/nodes\": dial tcp 172.31.23.9:6443: connect: connection refused" node="ip-172-31-23-9" Jul 12 00:25:00.864150 env[1927]: time="2025-07-12T00:25:00.863990334Z" level=info msg="StartContainer for \"0f258578424f0441eea240bb1693eb58e6e5b4d24ef40b5b95f70a5cf626f455\" returns successfully" Jul 12 00:25:00.910773 env[1927]: time="2025-07-12T00:25:00.905066129Z" level=info msg="StartContainer for \"f206fd67bd867e6fdaa3622c4f689911be503dc5f0ac3f445807535dcab39d58\" returns successfully" Jul 12 00:25:00.951758 env[1927]: time="2025-07-12T00:25:00.951640726Z" level=info msg="StartContainer for \"9935c1c28016a07006d25ef29202970c08ffcc8685cd3e16b1868c0e6d4f373b\" returns successfully" Jul 12 00:25:01.943568 update_engine[1915]: I0712 00:25:01.942732 1915 update_attempter.cc:509] Updating boot flags... 
Jul 12 00:25:02.452843 kubelet[2497]: I0712 00:25:02.452758 2497 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-9" Jul 12 00:25:05.081905 kubelet[2497]: E0712 00:25:05.081814 2497 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-9\" not found" node="ip-172-31-23-9" Jul 12 00:25:05.139491 kubelet[2497]: I0712 00:25:05.139442 2497 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-9" Jul 12 00:25:05.201424 kubelet[2497]: I0712 00:25:05.201379 2497 apiserver.go:52] "Watching apiserver" Jul 12 00:25:05.214901 kubelet[2497]: I0712 00:25:05.214820 2497 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:25:07.515341 systemd[1]: Reloading. Jul 12 00:25:07.674759 /usr/lib/systemd/system-generators/torcx-generator[2887]: time="2025-07-12T00:25:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:25:07.675479 /usr/lib/systemd/system-generators/torcx-generator[2887]: time="2025-07-12T00:25:07Z" level=info msg="torcx already run" Jul 12 00:25:07.939709 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:25:07.939756 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:25:07.985279 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:25:08.223608 systemd[1]: Stopping kubelet.service... 
Jul 12 00:25:08.248096 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:25:08.248816 systemd[1]: Stopped kubelet.service. Jul 12 00:25:08.254767 systemd[1]: Starting kubelet.service... Jul 12 00:25:08.614390 systemd[1]: Started kubelet.service. Jul 12 00:25:08.751976 kubelet[2959]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:25:08.752521 kubelet[2959]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:25:08.752623 kubelet[2959]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:25:08.752885 kubelet[2959]: I0712 00:25:08.752836 2959 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:25:08.779715 kubelet[2959]: I0712 00:25:08.779626 2959 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:25:08.779936 kubelet[2959]: I0712 00:25:08.779911 2959 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:25:08.780569 kubelet[2959]: I0712 00:25:08.780532 2959 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:25:08.783435 kubelet[2959]: I0712 00:25:08.783388 2959 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 12 00:25:08.787838 kubelet[2959]: I0712 00:25:08.787794 2959 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:25:08.789515 sudo[2974]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 12 00:25:08.790361 sudo[2974]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 12 00:25:08.804505 kubelet[2959]: E0712 00:25:08.803878 2959 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:25:08.804505 kubelet[2959]: I0712 00:25:08.803947 2959 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:25:08.809517 kubelet[2959]: I0712 00:25:08.809438 2959 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:25:08.810331 kubelet[2959]: I0712 00:25:08.810290 2959 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:25:08.810626 kubelet[2959]: I0712 00:25:08.810566 2959 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:25:08.817979 kubelet[2959]: I0712 00:25:08.810624 2959 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":1} Jul 12 00:25:08.818250 kubelet[2959]: I0712 00:25:08.817994 2959 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:25:08.818250 kubelet[2959]: I0712 00:25:08.818075 2959 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:25:08.818250 kubelet[2959]: I0712 00:25:08.818152 2959 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:25:08.819049 kubelet[2959]: I0712 00:25:08.818438 2959 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:25:08.820264 kubelet[2959]: I0712 00:25:08.820212 2959 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:25:08.820395 kubelet[2959]: I0712 00:25:08.820306 2959 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:25:08.820395 kubelet[2959]: I0712 00:25:08.820332 2959 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:25:08.828525 kubelet[2959]: I0712 00:25:08.828459 2959 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:25:08.830096 kubelet[2959]: I0712 00:25:08.830047 2959 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:25:08.836875 kubelet[2959]: I0712 00:25:08.836818 2959 server.go:1274] "Started kubelet" Jul 12 00:25:08.843749 kubelet[2959]: I0712 00:25:08.843641 2959 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:25:08.844906 kubelet[2959]: I0712 00:25:08.844821 2959 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:25:08.873776 kubelet[2959]: I0712 00:25:08.866572 2959 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:25:08.882083 kubelet[2959]: I0712 00:25:08.877157 2959 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:25:08.882083 kubelet[2959]: I0712 00:25:08.879181 2959 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:25:08.882083 kubelet[2959]: I0712 00:25:08.879564 2959 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:25:08.883467 kubelet[2959]: I0712 00:25:08.883428 2959 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:25:08.883864 kubelet[2959]: E0712 00:25:08.883822 2959 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-9\" not found" Jul 12 00:25:08.893911 kubelet[2959]: I0712 00:25:08.893861 2959 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:25:08.894157 kubelet[2959]: I0712 00:25:08.894121 2959 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:25:08.901715 kubelet[2959]: I0712 00:25:08.901321 2959 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:25:08.905847 kubelet[2959]: I0712 00:25:08.903482 2959 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:25:08.905847 kubelet[2959]: I0712 00:25:08.903538 2959 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:25:08.905847 kubelet[2959]: I0712 00:25:08.903571 2959 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:25:08.905847 kubelet[2959]: E0712 00:25:08.903647 2959 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:25:08.947862 kubelet[2959]: I0712 00:25:08.947817 2959 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:25:08.949091 kubelet[2959]: I0712 00:25:08.948186 2959 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:25:08.952363 kubelet[2959]: E0712 00:25:08.951600 2959 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:25:08.959344 kubelet[2959]: I0712 00:25:08.959308 2959 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:25:09.005337 kubelet[2959]: E0712 00:25:09.005161 2959 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:25:09.124301 kubelet[2959]: I0712 00:25:09.124184 2959 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:25:09.124301 kubelet[2959]: I0712 00:25:09.124222 2959 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:25:09.124301 kubelet[2959]: I0712 00:25:09.124258 2959 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:25:09.124737 kubelet[2959]: I0712 00:25:09.124695 2959 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:25:09.124835 kubelet[2959]: I0712 00:25:09.124732 2959 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:25:09.124835 kubelet[2959]: I0712 00:25:09.124770 2959 policy_none.go:49] "None policy: Start" Jul 12 00:25:09.126208 kubelet[2959]: I0712 00:25:09.126161 2959 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:25:09.126208 kubelet[2959]: I0712 00:25:09.126216 2959 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:25:09.126524 kubelet[2959]: I0712 00:25:09.126487 2959 state_mem.go:75] "Updated machine memory state" Jul 12 00:25:09.129326 kubelet[2959]: I0712 00:25:09.129274 2959 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:25:09.129608 kubelet[2959]: I0712 00:25:09.129569 2959 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:25:09.129728 kubelet[2959]: I0712 00:25:09.129603 2959 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 
00:25:09.138769 kubelet[2959]: I0712 00:25:09.138720 2959 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:25:09.230566 kubelet[2959]: E0712 00:25:09.230506 2959 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-9\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jul 12 00:25:09.245010 kubelet[2959]: I0712 00:25:09.244959 2959 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-9" Jul 12 00:25:09.260462 kubelet[2959]: I0712 00:25:09.260407 2959 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-23-9" Jul 12 00:25:09.260842 kubelet[2959]: I0712 00:25:09.260802 2959 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-9" Jul 12 00:25:09.299004 kubelet[2959]: I0712 00:25:09.298930 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b31af3f526bdf7dac3ef2bd0bf4e1aa2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"b31af3f526bdf7dac3ef2bd0bf4e1aa2\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jul 12 00:25:09.299156 kubelet[2959]: I0712 00:25:09.299017 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b31af3f526bdf7dac3ef2bd0bf4e1aa2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"b31af3f526bdf7dac3ef2bd0bf4e1aa2\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jul 12 00:25:09.299156 kubelet[2959]: I0712 00:25:09.299062 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b31af3f526bdf7dac3ef2bd0bf4e1aa2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"b31af3f526bdf7dac3ef2bd0bf4e1aa2\") " 
pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jul 12 00:25:09.299156 kubelet[2959]: I0712 00:25:09.299107 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47e2945d84cc314f1e30501913692819-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-9\" (UID: \"47e2945d84cc314f1e30501913692819\") " pod="kube-system/kube-apiserver-ip-172-31-23-9" Jul 12 00:25:09.299156 kubelet[2959]: I0712 00:25:09.299149 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47e2945d84cc314f1e30501913692819-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-9\" (UID: \"47e2945d84cc314f1e30501913692819\") " pod="kube-system/kube-apiserver-ip-172-31-23-9" Jul 12 00:25:09.299396 kubelet[2959]: I0712 00:25:09.299184 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b31af3f526bdf7dac3ef2bd0bf4e1aa2-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"b31af3f526bdf7dac3ef2bd0bf4e1aa2\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jul 12 00:25:09.299396 kubelet[2959]: I0712 00:25:09.299221 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b31af3f526bdf7dac3ef2bd0bf4e1aa2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-9\" (UID: \"b31af3f526bdf7dac3ef2bd0bf4e1aa2\") " pod="kube-system/kube-controller-manager-ip-172-31-23-9" Jul 12 00:25:09.299396 kubelet[2959]: I0712 00:25:09.299268 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94a7b290732a1e688d9634fc33742742-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-9\" 
(UID: \"94a7b290732a1e688d9634fc33742742\") " pod="kube-system/kube-scheduler-ip-172-31-23-9" Jul 12 00:25:09.299396 kubelet[2959]: I0712 00:25:09.299304 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47e2945d84cc314f1e30501913692819-ca-certs\") pod \"kube-apiserver-ip-172-31-23-9\" (UID: \"47e2945d84cc314f1e30501913692819\") " pod="kube-system/kube-apiserver-ip-172-31-23-9" Jul 12 00:25:09.825692 kubelet[2959]: I0712 00:25:09.825619 2959 apiserver.go:52] "Watching apiserver" Jul 12 00:25:09.894834 kubelet[2959]: I0712 00:25:09.894764 2959 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:25:09.956870 sudo[2974]: pam_unix(sudo:session): session closed for user root Jul 12 00:25:10.090289 kubelet[2959]: I0712 00:25:10.090114 2959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-9" podStartSLOduration=1.09008782 podStartE2EDuration="1.09008782s" podCreationTimestamp="2025-07-12 00:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:25:10.068285939 +0000 UTC m=+1.416819343" watchObservedRunningTime="2025-07-12 00:25:10.09008782 +0000 UTC m=+1.438621188" Jul 12 00:25:10.106693 kubelet[2959]: I0712 00:25:10.106600 2959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-9" podStartSLOduration=1.106576396 podStartE2EDuration="1.106576396s" podCreationTimestamp="2025-07-12 00:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:25:10.091166574 +0000 UTC m=+1.439699978" watchObservedRunningTime="2025-07-12 00:25:10.106576396 +0000 UTC m=+1.455109776" Jul 12 00:25:10.127031 kubelet[2959]: 
I0712 00:25:10.126953 2959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-9" podStartSLOduration=4.126930211 podStartE2EDuration="4.126930211s" podCreationTimestamp="2025-07-12 00:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:25:10.107454611 +0000 UTC m=+1.455988015" watchObservedRunningTime="2025-07-12 00:25:10.126930211 +0000 UTC m=+1.475463591" Jul 12 00:25:11.587074 kubelet[2959]: I0712 00:25:11.587034 2959 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:25:11.588505 env[1927]: time="2025-07-12T00:25:11.588410206Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 00:25:11.589589 kubelet[2959]: I0712 00:25:11.589551 2959 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:25:12.323704 kubelet[2959]: I0712 00:25:12.323081 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d69d2041-d1b8-4408-a448-e4419c5d5a7f-kube-proxy\") pod \"kube-proxy-45vpb\" (UID: \"d69d2041-d1b8-4408-a448-e4419c5d5a7f\") " pod="kube-system/kube-proxy-45vpb" Jul 12 00:25:12.323704 kubelet[2959]: I0712 00:25:12.323243 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d69d2041-d1b8-4408-a448-e4419c5d5a7f-xtables-lock\") pod \"kube-proxy-45vpb\" (UID: \"d69d2041-d1b8-4408-a448-e4419c5d5a7f\") " pod="kube-system/kube-proxy-45vpb" Jul 12 00:25:12.323704 kubelet[2959]: I0712 00:25:12.323286 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/d69d2041-d1b8-4408-a448-e4419c5d5a7f-lib-modules\") pod \"kube-proxy-45vpb\" (UID: \"d69d2041-d1b8-4408-a448-e4419c5d5a7f\") " pod="kube-system/kube-proxy-45vpb" Jul 12 00:25:12.323704 kubelet[2959]: I0712 00:25:12.323375 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pjgf\" (UniqueName: \"kubernetes.io/projected/d69d2041-d1b8-4408-a448-e4419c5d5a7f-kube-api-access-8pjgf\") pod \"kube-proxy-45vpb\" (UID: \"d69d2041-d1b8-4408-a448-e4419c5d5a7f\") " pod="kube-system/kube-proxy-45vpb" Jul 12 00:25:12.453223 kubelet[2959]: I0712 00:25:12.453137 2959 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 12 00:25:12.589357 env[1927]: time="2025-07-12T00:25:12.588719839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-45vpb,Uid:d69d2041-d1b8-4408-a448-e4419c5d5a7f,Namespace:kube-system,Attempt:0,}" Jul 12 00:25:12.666930 env[1927]: time="2025-07-12T00:25:12.666753939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:25:12.667442 env[1927]: time="2025-07-12T00:25:12.667322792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:25:12.668128 env[1927]: time="2025-07-12T00:25:12.667991345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:25:12.669086 env[1927]: time="2025-07-12T00:25:12.668984219Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/afd31da012336b147eb89c6cc60bd4895c1f5f22f072b8bd66b6ad3b2f169558 pid=3008 runtime=io.containerd.runc.v2 Jul 12 00:25:12.813799 env[1927]: time="2025-07-12T00:25:12.813727262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-45vpb,Uid:d69d2041-d1b8-4408-a448-e4419c5d5a7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"afd31da012336b147eb89c6cc60bd4895c1f5f22f072b8bd66b6ad3b2f169558\"" Jul 12 00:25:12.822698 env[1927]: time="2025-07-12T00:25:12.822610295Z" level=info msg="CreateContainer within sandbox \"afd31da012336b147eb89c6cc60bd4895c1f5f22f072b8bd66b6ad3b2f169558\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:25:12.854102 env[1927]: time="2025-07-12T00:25:12.853959505Z" level=info msg="CreateContainer within sandbox \"afd31da012336b147eb89c6cc60bd4895c1f5f22f072b8bd66b6ad3b2f169558\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c5e65d901af63b615c159f30c1f23f3c9766025f6218b622d71b2db4c8733d5d\"" Jul 12 00:25:12.855935 env[1927]: time="2025-07-12T00:25:12.855879924Z" level=info msg="StartContainer for \"c5e65d901af63b615c159f30c1f23f3c9766025f6218b622d71b2db4c8733d5d\"" Jul 12 00:25:13.072349 env[1927]: time="2025-07-12T00:25:13.072255657Z" level=info msg="StartContainer for \"c5e65d901af63b615c159f30c1f23f3c9766025f6218b622d71b2db4c8733d5d\" returns successfully" Jul 12 00:25:13.246917 kubelet[2959]: I0712 00:25:13.246845 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-bpf-maps\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.247546 
kubelet[2959]: I0712 00:25:13.246924 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-xtables-lock\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.247546 kubelet[2959]: I0712 00:25:13.246966 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-clustermesh-secrets\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.247546 kubelet[2959]: I0712 00:25:13.247001 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-hostproc\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.247546 kubelet[2959]: I0712 00:25:13.247040 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cilium-run\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.247546 kubelet[2959]: I0712 00:25:13.247077 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-lib-modules\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.247546 kubelet[2959]: I0712 00:25:13.247123 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cilium-config-path\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.247981 kubelet[2959]: I0712 00:25:13.247161 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cilium-cgroup\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.247981 kubelet[2959]: I0712 00:25:13.247195 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cni-path\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.247981 kubelet[2959]: I0712 00:25:13.247229 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-host-proc-sys-net\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.247981 kubelet[2959]: I0712 00:25:13.247265 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xtv4\" (UniqueName: \"kubernetes.io/projected/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-kube-api-access-5xtv4\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.247981 kubelet[2959]: I0712 00:25:13.247305 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f41feb0e-c3f7-4bba-b15c-be07edfd5efd-cilium-config-path\") pod \"cilium-operator-5d85765b45-7fjf7\" (UID: \"f41feb0e-c3f7-4bba-b15c-be07edfd5efd\") " pod="kube-system/cilium-operator-5d85765b45-7fjf7" Jul 12 00:25:13.248299 kubelet[2959]: I0712 00:25:13.247345 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-hubble-tls\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.248299 kubelet[2959]: I0712 00:25:13.247383 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9gh4\" (UniqueName: \"kubernetes.io/projected/f41feb0e-c3f7-4bba-b15c-be07edfd5efd-kube-api-access-r9gh4\") pod \"cilium-operator-5d85765b45-7fjf7\" (UID: \"f41feb0e-c3f7-4bba-b15c-be07edfd5efd\") " pod="kube-system/cilium-operator-5d85765b45-7fjf7" Jul 12 00:25:13.248299 kubelet[2959]: I0712 00:25:13.247423 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-etc-cni-netd\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.248299 kubelet[2959]: I0712 00:25:13.247499 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-host-proc-sys-kernel\") pod \"cilium-2cqt9\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " pod="kube-system/cilium-2cqt9" Jul 12 00:25:13.473788 systemd[1]: run-containerd-runc-k8s.io-afd31da012336b147eb89c6cc60bd4895c1f5f22f072b8bd66b6ad3b2f169558-runc.h1uKea.mount: Deactivated successfully. 
Jul 12 00:25:13.528154 env[1927]: time="2025-07-12T00:25:13.526243530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7fjf7,Uid:f41feb0e-c3f7-4bba-b15c-be07edfd5efd,Namespace:kube-system,Attempt:0,}" Jul 12 00:25:13.541126 env[1927]: time="2025-07-12T00:25:13.541057536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cqt9,Uid:633fd08a-3dc4-4e1f-9851-9d6e02a4c74b,Namespace:kube-system,Attempt:0,}" Jul 12 00:25:13.582512 env[1927]: time="2025-07-12T00:25:13.581329699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:25:13.582512 env[1927]: time="2025-07-12T00:25:13.581498220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:25:13.582512 env[1927]: time="2025-07-12T00:25:13.581595975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:25:13.582512 env[1927]: time="2025-07-12T00:25:13.581967830Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5 pid=3097 runtime=io.containerd.runc.v2 Jul 12 00:25:13.625980 env[1927]: time="2025-07-12T00:25:13.625740565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:25:13.625980 env[1927]: time="2025-07-12T00:25:13.625915170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:25:13.626641 env[1927]: time="2025-07-12T00:25:13.626203791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:25:13.627748 env[1927]: time="2025-07-12T00:25:13.626597102Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870 pid=3123 runtime=io.containerd.runc.v2 Jul 12 00:25:13.759230 env[1927]: time="2025-07-12T00:25:13.759172369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cqt9,Uid:633fd08a-3dc4-4e1f-9851-9d6e02a4c74b,Namespace:kube-system,Attempt:0,} returns sandbox id \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\"" Jul 12 00:25:13.767565 env[1927]: time="2025-07-12T00:25:13.767504809Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 12 00:25:13.819285 env[1927]: time="2025-07-12T00:25:13.818400878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7fjf7,Uid:f41feb0e-c3f7-4bba-b15c-be07edfd5efd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5\"" Jul 12 00:25:15.635129 kubelet[2959]: I0712 00:25:15.635040 2959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-45vpb" podStartSLOduration=3.635014 podStartE2EDuration="3.635014s" podCreationTimestamp="2025-07-12 00:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:25:14.073497862 +0000 UTC m=+5.422031242" watchObservedRunningTime="2025-07-12 00:25:15.635014 +0000 UTC m=+6.983547368" Jul 12 00:25:20.700639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2828447305.mount: Deactivated successfully. 
Jul 12 00:25:24.894806 env[1927]: time="2025-07-12T00:25:24.894719808Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:24.899256 env[1927]: time="2025-07-12T00:25:24.899184243Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:24.903228 env[1927]: time="2025-07-12T00:25:24.903159647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:24.906951 env[1927]: time="2025-07-12T00:25:24.906855520Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 12 00:25:24.911319 env[1927]: time="2025-07-12T00:25:24.911252694Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 12 00:25:24.914258 env[1927]: time="2025-07-12T00:25:24.914195952Z" level=info msg="CreateContainer within sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:25:24.942884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2716508407.mount: Deactivated successfully. 
Jul 12 00:25:24.956635 env[1927]: time="2025-07-12T00:25:24.956567795Z" level=info msg="CreateContainer within sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f\"" Jul 12 00:25:24.959106 env[1927]: time="2025-07-12T00:25:24.958323036Z" level=info msg="StartContainer for \"bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f\"" Jul 12 00:25:25.095829 env[1927]: time="2025-07-12T00:25:25.095710005Z" level=info msg="StartContainer for \"bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f\" returns successfully" Jul 12 00:25:25.698907 env[1927]: time="2025-07-12T00:25:25.698839088Z" level=info msg="shim disconnected" id=bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f Jul 12 00:25:25.699230 env[1927]: time="2025-07-12T00:25:25.699195312Z" level=warning msg="cleaning up after shim disconnected" id=bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f namespace=k8s.io Jul 12 00:25:25.699352 env[1927]: time="2025-07-12T00:25:25.699324230Z" level=info msg="cleaning up dead shim" Jul 12 00:25:25.715616 env[1927]: time="2025-07-12T00:25:25.715553581Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:25:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3346 runtime=io.containerd.runc.v2\n" Jul 12 00:25:25.933554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f-rootfs.mount: Deactivated successfully. 
Jul 12 00:25:26.142677 env[1927]: time="2025-07-12T00:25:26.142592642Z" level=info msg="CreateContainer within sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:25:26.192985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3698706296.mount: Deactivated successfully. Jul 12 00:25:26.209121 env[1927]: time="2025-07-12T00:25:26.209036797Z" level=info msg="CreateContainer within sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16\"" Jul 12 00:25:26.211319 env[1927]: time="2025-07-12T00:25:26.211231408Z" level=info msg="StartContainer for \"6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16\"" Jul 12 00:25:26.387266 env[1927]: time="2025-07-12T00:25:26.386753080Z" level=info msg="StartContainer for \"6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16\" returns successfully" Jul 12 00:25:26.404390 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:25:26.406060 systemd[1]: Stopped systemd-sysctl.service. Jul 12 00:25:26.407046 systemd[1]: Stopping systemd-sysctl.service... Jul 12 00:25:26.412034 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:25:26.438024 systemd[1]: Finished systemd-sysctl.service. 
Jul 12 00:25:26.499497 env[1927]: time="2025-07-12T00:25:26.499430089Z" level=info msg="shim disconnected" id=6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16 Jul 12 00:25:26.499927 env[1927]: time="2025-07-12T00:25:26.499873435Z" level=warning msg="cleaning up after shim disconnected" id=6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16 namespace=k8s.io Jul 12 00:25:26.500070 env[1927]: time="2025-07-12T00:25:26.500041317Z" level=info msg="cleaning up dead shim" Jul 12 00:25:26.516899 env[1927]: time="2025-07-12T00:25:26.516839942Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:25:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3413 runtime=io.containerd.runc.v2\n" Jul 12 00:25:26.935881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16-rootfs.mount: Deactivated successfully. Jul 12 00:25:27.157790 env[1927]: time="2025-07-12T00:25:27.157500370Z" level=info msg="CreateContainer within sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:25:27.236424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2883695092.mount: Deactivated successfully. 
Jul 12 00:25:27.259067 env[1927]: time="2025-07-12T00:25:27.258989286Z" level=info msg="CreateContainer within sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777\"" Jul 12 00:25:27.262450 env[1927]: time="2025-07-12T00:25:27.260335605Z" level=info msg="StartContainer for \"7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777\"" Jul 12 00:25:27.408703 env[1927]: time="2025-07-12T00:25:27.408600538Z" level=info msg="StartContainer for \"7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777\" returns successfully" Jul 12 00:25:27.540352 env[1927]: time="2025-07-12T00:25:27.539808125Z" level=info msg="shim disconnected" id=7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777 Jul 12 00:25:27.540352 env[1927]: time="2025-07-12T00:25:27.539888021Z" level=warning msg="cleaning up after shim disconnected" id=7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777 namespace=k8s.io Jul 12 00:25:27.540352 env[1927]: time="2025-07-12T00:25:27.539913030Z" level=info msg="cleaning up dead shim" Jul 12 00:25:27.561341 env[1927]: time="2025-07-12T00:25:27.561287151Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:25:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3473 runtime=io.containerd.runc.v2\n" Jul 12 00:25:27.621001 env[1927]: time="2025-07-12T00:25:27.620935935Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:27.626324 env[1927]: time="2025-07-12T00:25:27.626263697Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 12 00:25:27.629931 env[1927]: time="2025-07-12T00:25:27.629849747Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:27.631421 env[1927]: time="2025-07-12T00:25:27.631337836Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 12 00:25:27.639757 env[1927]: time="2025-07-12T00:25:27.639697445Z" level=info msg="CreateContainer within sandbox \"f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 00:25:27.662085 env[1927]: time="2025-07-12T00:25:27.662014970Z" level=info msg="CreateContainer within sandbox \"f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\"" Jul 12 00:25:27.663438 env[1927]: time="2025-07-12T00:25:27.663150519Z" level=info msg="StartContainer for \"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\"" Jul 12 00:25:27.780516 env[1927]: time="2025-07-12T00:25:27.780446247Z" level=info msg="StartContainer for \"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\" returns successfully" Jul 12 00:25:27.935128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777-rootfs.mount: Deactivated successfully. 
Jul 12 00:25:28.148277 env[1927]: time="2025-07-12T00:25:28.148105516Z" level=info msg="CreateContainer within sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:25:28.200104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518615681.mount: Deactivated successfully. Jul 12 00:25:28.223095 env[1927]: time="2025-07-12T00:25:28.223004395Z" level=info msg="CreateContainer within sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1\"" Jul 12 00:25:28.225886 env[1927]: time="2025-07-12T00:25:28.225805345Z" level=info msg="StartContainer for \"bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1\"" Jul 12 00:25:28.356012 kubelet[2959]: I0712 00:25:28.355914 2959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-7fjf7" podStartSLOduration=1.543409374 podStartE2EDuration="15.355888568s" podCreationTimestamp="2025-07-12 00:25:13 +0000 UTC" firstStartedPulling="2025-07-12 00:25:13.821654807 +0000 UTC m=+5.170188175" lastFinishedPulling="2025-07-12 00:25:27.634134013 +0000 UTC m=+18.982667369" observedRunningTime="2025-07-12 00:25:28.353605483 +0000 UTC m=+19.702138887" watchObservedRunningTime="2025-07-12 00:25:28.355888568 +0000 UTC m=+19.704421936" Jul 12 00:25:28.460346 env[1927]: time="2025-07-12T00:25:28.460199084Z" level=info msg="StartContainer for \"bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1\" returns successfully" Jul 12 00:25:28.564790 env[1927]: time="2025-07-12T00:25:28.564724607Z" level=info msg="shim disconnected" id=bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1 Jul 12 00:25:28.565316 env[1927]: time="2025-07-12T00:25:28.565274489Z" level=warning msg="cleaning up 
after shim disconnected" id=bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1 namespace=k8s.io Jul 12 00:25:28.565468 env[1927]: time="2025-07-12T00:25:28.565439143Z" level=info msg="cleaning up dead shim" Jul 12 00:25:28.611210 env[1927]: time="2025-07-12T00:25:28.611111834Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:25:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3564 runtime=io.containerd.runc.v2\n" Jul 12 00:25:28.933138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1-rootfs.mount: Deactivated successfully. Jul 12 00:25:29.187449 env[1927]: time="2025-07-12T00:25:29.186991488Z" level=info msg="CreateContainer within sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:25:29.278812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3287319169.mount: Deactivated successfully. Jul 12 00:25:29.339898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount686968383.mount: Deactivated successfully. 
Jul 12 00:25:29.364209 env[1927]: time="2025-07-12T00:25:29.360576615Z" level=info msg="CreateContainer within sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\"" Jul 12 00:25:29.370899 env[1927]: time="2025-07-12T00:25:29.370118093Z" level=info msg="StartContainer for \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\"" Jul 12 00:25:29.580757 env[1927]: time="2025-07-12T00:25:29.580185251Z" level=info msg="StartContainer for \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\" returns successfully" Jul 12 00:25:29.807359 kubelet[2959]: I0712 00:25:29.807314 2959 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:25:30.084119 kubelet[2959]: I0712 00:25:30.084065 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfcd8\" (UniqueName: \"kubernetes.io/projected/57495fba-0238-455e-b7f8-e2a643397e8c-kube-api-access-sfcd8\") pod \"coredns-7c65d6cfc9-tdr7v\" (UID: \"57495fba-0238-455e-b7f8-e2a643397e8c\") " pod="kube-system/coredns-7c65d6cfc9-tdr7v" Jul 12 00:25:30.084429 kubelet[2959]: I0712 00:25:30.084386 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k8xg\" (UniqueName: \"kubernetes.io/projected/f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e-kube-api-access-5k8xg\") pod \"coredns-7c65d6cfc9-dlx6d\" (UID: \"f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e\") " pod="kube-system/coredns-7c65d6cfc9-dlx6d" Jul 12 00:25:30.084679 kubelet[2959]: I0712 00:25:30.084633 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e-config-volume\") pod \"coredns-7c65d6cfc9-dlx6d\" (UID: 
\"f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e\") " pod="kube-system/coredns-7c65d6cfc9-dlx6d" Jul 12 00:25:30.084937 kubelet[2959]: I0712 00:25:30.084907 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57495fba-0238-455e-b7f8-e2a643397e8c-config-volume\") pod \"coredns-7c65d6cfc9-tdr7v\" (UID: \"57495fba-0238-455e-b7f8-e2a643397e8c\") " pod="kube-system/coredns-7c65d6cfc9-tdr7v" Jul 12 00:25:30.141716 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 12 00:25:30.238395 kubelet[2959]: I0712 00:25:30.238303 2959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2cqt9" podStartSLOduration=6.094889024 podStartE2EDuration="17.238277953s" podCreationTimestamp="2025-07-12 00:25:13 +0000 UTC" firstStartedPulling="2025-07-12 00:25:13.76635448 +0000 UTC m=+5.114887848" lastFinishedPulling="2025-07-12 00:25:24.909743397 +0000 UTC m=+16.258276777" observedRunningTime="2025-07-12 00:25:30.235431897 +0000 UTC m=+21.583965433" watchObservedRunningTime="2025-07-12 00:25:30.238277953 +0000 UTC m=+21.586811333" Jul 12 00:25:30.271501 env[1927]: time="2025-07-12T00:25:30.270649920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dlx6d,Uid:f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e,Namespace:kube-system,Attempt:0,}" Jul 12 00:25:30.284634 env[1927]: time="2025-07-12T00:25:30.284527453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tdr7v,Uid:57495fba-0238-455e-b7f8-e2a643397e8c,Namespace:kube-system,Attempt:0,}" Jul 12 00:25:31.097703 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 12 00:25:32.925867 systemd-networkd[1596]: cilium_host: Link UP Jul 12 00:25:32.934346 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 12 00:25:32.934502 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 12 00:25:32.929187 systemd-networkd[1596]: cilium_net: Link UP Jul 12 00:25:32.933124 systemd-networkd[1596]: cilium_net: Gained carrier Jul 12 00:25:32.935886 systemd-networkd[1596]: cilium_host: Gained carrier Jul 12 00:25:32.937005 (udev-worker)[3752]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:25:32.938014 (udev-worker)[3751]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:25:33.048208 systemd-networkd[1596]: cilium_net: Gained IPv6LL Jul 12 00:25:33.080367 systemd-networkd[1596]: cilium_host: Gained IPv6LL Jul 12 00:25:33.130431 (udev-worker)[3691]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:25:33.140255 systemd-networkd[1596]: cilium_vxlan: Link UP Jul 12 00:25:33.140268 systemd-networkd[1596]: cilium_vxlan: Gained carrier Jul 12 00:25:33.728724 kernel: NET: Registered PF_ALG protocol family Jul 12 00:25:34.688305 systemd-networkd[1596]: cilium_vxlan: Gained IPv6LL Jul 12 00:25:35.240904 systemd-networkd[1596]: lxc_health: Link UP Jul 12 00:25:35.249050 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 12 00:25:35.248622 systemd-networkd[1596]: lxc_health: Gained carrier Jul 12 00:25:35.895109 systemd-networkd[1596]: lxcd4a0d5518f4e: Link UP Jul 12 00:25:35.909711 kernel: eth0: renamed from tmp276ee Jul 12 00:25:35.928628 systemd-networkd[1596]: lxc57ee01162be4: Link UP Jul 12 00:25:35.952834 kernel: eth0: renamed from tmpf739e Jul 12 00:25:35.974781 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd4a0d5518f4e: link becomes ready Jul 12 00:25:35.974940 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc57ee01162be4: link becomes ready Jul 12 00:25:35.973096 systemd-networkd[1596]: lxcd4a0d5518f4e: Gained carrier 
Jul 12 00:25:35.973621 systemd-networkd[1596]: lxc57ee01162be4: Gained carrier Jul 12 00:25:36.800456 systemd-networkd[1596]: lxc_health: Gained IPv6LL Jul 12 00:25:37.312427 systemd-networkd[1596]: lxc57ee01162be4: Gained IPv6LL Jul 12 00:25:37.327352 systemd[1]: run-containerd-runc-k8s.io-2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b-runc.1YEvwT.mount: Deactivated successfully. Jul 12 00:25:37.696563 systemd-networkd[1596]: lxcd4a0d5518f4e: Gained IPv6LL Jul 12 00:25:43.266161 systemd[1]: run-containerd-runc-k8s.io-2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b-runc.ZxrGiq.mount: Deactivated successfully. Jul 12 00:25:43.442857 env[1927]: time="2025-07-12T00:25:43.442724639Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:25:43.486323 env[1927]: time="2025-07-12T00:25:43.486255312Z" level=info msg="StopContainer for \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\" with timeout 2 (s)" Jul 12 00:25:43.486903 env[1927]: time="2025-07-12T00:25:43.486843074Z" level=info msg="Stop container \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\" with signal terminated" Jul 12 00:25:43.506257 systemd-networkd[1596]: lxc_health: Link DOWN Jul 12 00:25:43.506287 systemd-networkd[1596]: lxc_health: Lost carrier Jul 12 00:25:43.605068 systemd[1]: run-containerd-runc-k8s.io-2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b-runc.QDp60j.mount: Deactivated successfully. 
Jul 12 00:25:44.144749 kubelet[2959]: E0712 00:25:44.144626 2959 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:25:45.533163 env[1927]: time="2025-07-12T00:25:45.533024475Z" level=info msg="Kill container \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\"" Jul 12 00:25:45.561336 systemd-networkd[1596]: lxcd4a0d5518f4e: Link DOWN Jul 12 00:25:45.561349 systemd-networkd[1596]: lxcd4a0d5518f4e: Lost carrier Jul 12 00:25:45.579094 systemd-networkd[1596]: lxc57ee01162be4: Link DOWN Jul 12 00:25:45.579106 systemd-networkd[1596]: lxc57ee01162be4: Lost carrier Jul 12 00:25:45.594024 env[1927]: time="2025-07-12T00:25:45.593939698Z" level=error msg="Failed to destroy network for sandbox \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\"" error="cni plugin not initialized" Jul 12 00:25:45.598489 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f-shm.mount: Deactivated successfully. Jul 12 00:25:45.607885 env[1927]: time="2025-07-12T00:25:45.607813813Z" level=error msg="Failed to destroy network for sandbox \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\"" error="cni plugin not initialized" Jul 12 00:25:45.612009 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf-shm.mount: Deactivated successfully. 
Jul 12 00:25:45.614435 env[1927]: time="2025-07-12T00:25:45.614345172Z" level=error msg="encountered an error cleaning up failed sandbox \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\", marking sandbox state as SANDBOX_UNKNOWN" error="cni plugin not initialized" Jul 12 00:25:45.614582 env[1927]: time="2025-07-12T00:25:45.614451169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dlx6d,Uid:f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" Jul 12 00:25:45.614582 env[1927]: time="2025-07-12T00:25:45.614369677Z" level=error msg="encountered an error cleaning up failed sandbox \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\", marking sandbox state as SANDBOX_UNKNOWN" error="cni plugin not initialized" Jul 12 00:25:45.614758 env[1927]: time="2025-07-12T00:25:45.614603629Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tdr7v,Uid:57495fba-0238-455e-b7f8-e2a643397e8c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" Jul 12 00:25:45.615069 kubelet[2959]: E0712 00:25:45.614984 2959 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put 
\"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" Jul 12 00:25:45.617624 kubelet[2959]: E0712 00:25:45.615088 2959 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" pod="kube-system/coredns-7c65d6cfc9-tdr7v" Jul 12 00:25:45.617624 kubelet[2959]: E0712 00:25:45.615122 2959 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" pod="kube-system/coredns-7c65d6cfc9-tdr7v" Jul 12 00:25:45.617624 kubelet[2959]: E0712 00:25:45.615183 2959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tdr7v_kube-system(57495fba-0238-455e-b7f8-e2a643397e8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tdr7v_kube-system(57495fba-0238-455e-b7f8-e2a643397e8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Put \\\"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\\\": EOF\"" pod="kube-system/coredns-7c65d6cfc9-tdr7v" podUID="57495fba-0238-455e-b7f8-e2a643397e8c" Jul 12 00:25:45.617624 kubelet[2959]: E0712 00:25:45.614992 2959 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" Jul 12 00:25:45.618431 kubelet[2959]: E0712 00:25:45.615262 2959 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" pod="kube-system/coredns-7c65d6cfc9-dlx6d" Jul 12 00:25:45.618431 kubelet[2959]: E0712 00:25:45.615290 2959 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" pod="kube-system/coredns-7c65d6cfc9-dlx6d" Jul 12 00:25:45.618431 kubelet[2959]: E0712 00:25:45.615335 2959 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dlx6d_kube-system(f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dlx6d_kube-system(f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Put \\\"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\\\": EOF\"" pod="kube-system/coredns-7c65d6cfc9-dlx6d" podUID="f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e" Jul 12 00:25:45.635499 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b-rootfs.mount: Deactivated successfully. Jul 12 00:25:45.659967 env[1927]: time="2025-07-12T00:25:45.659864087Z" level=info msg="shim disconnected" id=2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b Jul 12 00:25:45.660225 env[1927]: time="2025-07-12T00:25:45.659969063Z" level=warning msg="cleaning up after shim disconnected" id=2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b namespace=k8s.io Jul 12 00:25:45.660225 env[1927]: time="2025-07-12T00:25:45.659992511Z" level=info msg="cleaning up dead shim" Jul 12 00:25:45.674922 env[1927]: time="2025-07-12T00:25:45.674825357Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:25:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4278 runtime=io.containerd.runc.v2\n" Jul 12 00:25:45.678947 env[1927]: time="2025-07-12T00:25:45.678834848Z" level=info msg="StopContainer for \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\" returns successfully" Jul 12 00:25:45.680282 env[1927]: time="2025-07-12T00:25:45.680211925Z" level=info msg="StopPodSandbox for \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\"" Jul 12 00:25:45.680457 env[1927]: time="2025-07-12T00:25:45.680331326Z" level=info msg="Container to stop \"6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:25:45.680457 env[1927]: time="2025-07-12T00:25:45.680368346Z" level=info msg="Container to stop \"7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:25:45.680457 env[1927]: time="2025-07-12T00:25:45.680395742Z" level=info msg="Container to stop \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Jul 12 00:25:45.680457 env[1927]: time="2025-07-12T00:25:45.680425142Z" level=info msg="Container to stop \"bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:25:45.680929 env[1927]: time="2025-07-12T00:25:45.680452454Z" level=info msg="Container to stop \"bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:25:45.685452 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870-shm.mount: Deactivated successfully. Jul 12 00:25:45.735157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870-rootfs.mount: Deactivated successfully. Jul 12 00:25:45.764585 env[1927]: time="2025-07-12T00:25:45.764517813Z" level=info msg="shim disconnected" id=70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870 Jul 12 00:25:45.766082 env[1927]: time="2025-07-12T00:25:45.766027863Z" level=warning msg="cleaning up after shim disconnected" id=70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870 namespace=k8s.io Jul 12 00:25:45.766298 env[1927]: time="2025-07-12T00:25:45.766268055Z" level=info msg="cleaning up dead shim" Jul 12 00:25:45.794862 env[1927]: time="2025-07-12T00:25:45.794710871Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:25:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4312 runtime=io.containerd.runc.v2\n" Jul 12 00:25:45.795691 env[1927]: time="2025-07-12T00:25:45.795604503Z" level=info msg="TearDown network for sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" successfully" Jul 12 00:25:45.795915 env[1927]: time="2025-07-12T00:25:45.795861484Z" level=info msg="StopPodSandbox for 
\"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" returns successfully" Jul 12 00:25:45.857455 kubelet[2959]: I0712 00:25:45.857403 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cni-path\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.857814 kubelet[2959]: I0712 00:25:45.857782 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-hubble-tls\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.858022 kubelet[2959]: I0712 00:25:45.857990 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-etc-cni-netd\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.858281 kubelet[2959]: I0712 00:25:45.858254 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-bpf-maps\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.858564 kubelet[2959]: I0712 00:25:45.858495 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-hostproc\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.858564 kubelet[2959]: I0712 00:25:45.858554 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-lib-modules\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.858804 kubelet[2959]: I0712 00:25:45.858591 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cilium-cgroup\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.858804 kubelet[2959]: I0712 00:25:45.858641 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xtv4\" (UniqueName: \"kubernetes.io/projected/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-kube-api-access-5xtv4\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.858804 kubelet[2959]: I0712 00:25:45.858756 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-host-proc-sys-kernel\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.859018 kubelet[2959]: I0712 00:25:45.858817 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-xtables-lock\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.859018 kubelet[2959]: I0712 00:25:45.858857 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-host-proc-sys-net\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.859018 kubelet[2959]: I0712 
00:25:45.858894 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cilium-run\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.859018 kubelet[2959]: I0712 00:25:45.858934 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cilium-config-path\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.859018 kubelet[2959]: I0712 00:25:45.858979 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-clustermesh-secrets\") pod \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\" (UID: \"633fd08a-3dc4-4e1f-9851-9d6e02a4c74b\") " Jul 12 00:25:45.870977 kubelet[2959]: I0712 00:25:45.857881 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cni-path" (OuterVolumeSpecName: "cni-path") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:25:45.871236 kubelet[2959]: I0712 00:25:45.858183 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:25:45.871371 kubelet[2959]: I0712 00:25:45.858429 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:25:45.871493 kubelet[2959]: I0712 00:25:45.863169 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:25:45.873694 kubelet[2959]: I0712 00:25:45.863210 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:25:45.874012 kubelet[2959]: I0712 00:25:45.863236 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:25:45.883697 kubelet[2959]: I0712 00:25:45.863269 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:25:45.884209 kubelet[2959]: I0712 00:25:45.872766 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-hostproc" (OuterVolumeSpecName: "hostproc") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:25:45.884449 kubelet[2959]: I0712 00:25:45.872824 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:25:45.884620 kubelet[2959]: I0712 00:25:45.872849 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:25:45.884848 kubelet[2959]: I0712 00:25:45.884079 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:25:45.888428 kubelet[2959]: I0712 00:25:45.886551 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-kube-api-access-5xtv4" (OuterVolumeSpecName: "kube-api-access-5xtv4") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "kube-api-access-5xtv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:25:45.888428 kubelet[2959]: I0712 00:25:45.888082 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:25:45.890844 kubelet[2959]: I0712 00:25:45.890789 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" (UID: "633fd08a-3dc4-4e1f-9851-9d6e02a4c74b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:25:45.901808 kubelet[2959]: E0712 00:25:45.901715 2959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" containerName="apply-sysctl-overwrites" Jul 12 00:25:45.901808 kubelet[2959]: E0712 00:25:45.901790 2959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" containerName="mount-bpf-fs" Jul 12 00:25:45.902053 kubelet[2959]: E0712 00:25:45.901831 2959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" containerName="clean-cilium-state" Jul 12 00:25:45.902053 kubelet[2959]: E0712 00:25:45.901856 2959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" containerName="cilium-agent" Jul 12 00:25:45.902053 kubelet[2959]: E0712 00:25:45.901873 2959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" containerName="mount-cgroup" Jul 12 00:25:45.902053 kubelet[2959]: I0712 00:25:45.901948 2959 memory_manager.go:354] "RemoveStaleState removing state" podUID="633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" containerName="cilium-agent" Jul 12 00:25:45.959436 kubelet[2959]: I0712 00:25:45.959346 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cni-path\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " pod="kube-system/cilium-qns7n" Jul 12 00:25:45.959622 kubelet[2959]: I0712 00:25:45.959437 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-host-proc-sys-net\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " 
pod="kube-system/cilium-qns7n" Jul 12 00:25:45.959622 kubelet[2959]: I0712 00:25:45.959507 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-lib-modules\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " pod="kube-system/cilium-qns7n" Jul 12 00:25:45.959622 kubelet[2959]: I0712 00:25:45.959575 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-xtables-lock\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " pod="kube-system/cilium-qns7n" Jul 12 00:25:45.959622 kubelet[2959]: I0712 00:25:45.959617 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cilium-run\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " pod="kube-system/cilium-qns7n" Jul 12 00:25:45.959928 kubelet[2959]: I0712 00:25:45.959745 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-bpf-maps\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " pod="kube-system/cilium-qns7n" Jul 12 00:25:45.959928 kubelet[2959]: I0712 00:25:45.959784 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-hostproc\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " pod="kube-system/cilium-qns7n" Jul 12 00:25:45.959928 kubelet[2959]: I0712 00:25:45.959847 2959 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-etc-cni-netd\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " pod="kube-system/cilium-qns7n" Jul 12 00:25:45.959928 kubelet[2959]: I0712 00:25:45.959890 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d722c65-7198-46d5-95b8-72cf1cf6bceb-clustermesh-secrets\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " pod="kube-system/cilium-qns7n" Jul 12 00:25:45.960153 kubelet[2959]: I0712 00:25:45.959961 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-host-proc-sys-kernel\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " pod="kube-system/cilium-qns7n" Jul 12 00:25:45.960153 kubelet[2959]: I0712 00:25:45.960025 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d722c65-7198-46d5-95b8-72cf1cf6bceb-hubble-tls\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " pod="kube-system/cilium-qns7n" Jul 12 00:25:45.960153 kubelet[2959]: I0712 00:25:45.960070 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cilium-cgroup\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " pod="kube-system/cilium-qns7n" Jul 12 00:25:45.960153 kubelet[2959]: I0712 00:25:45.960133 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cilium-config-path\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " pod="kube-system/cilium-qns7n" Jul 12 00:25:45.960390 kubelet[2959]: I0712 00:25:45.960194 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrq9l\" (UniqueName: \"kubernetes.io/projected/5d722c65-7198-46d5-95b8-72cf1cf6bceb-kube-api-access-zrq9l\") pod \"cilium-qns7n\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " pod="kube-system/cilium-qns7n" Jul 12 00:25:45.960390 kubelet[2959]: I0712 00:25:45.960241 2959 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-host-proc-sys-kernel\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:45.960390 kubelet[2959]: I0712 00:25:45.960297 2959 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-xtables-lock\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:45.960390 kubelet[2959]: I0712 00:25:45.960324 2959 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-host-proc-sys-net\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:45.960390 kubelet[2959]: I0712 00:25:45.960372 2959 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cilium-run\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:45.960684 kubelet[2959]: I0712 00:25:45.960399 2959 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cilium-config-path\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:45.960684 
kubelet[2959]: I0712 00:25:45.960422 2959 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-clustermesh-secrets\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:45.960684 kubelet[2959]: I0712 00:25:45.960470 2959 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cni-path\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:45.960684 kubelet[2959]: I0712 00:25:45.960496 2959 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-hubble-tls\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:45.960684 kubelet[2959]: I0712 00:25:45.960547 2959 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-etc-cni-netd\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:45.960684 kubelet[2959]: I0712 00:25:45.960573 2959 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-cilium-cgroup\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:45.960684 kubelet[2959]: I0712 00:25:45.960595 2959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xtv4\" (UniqueName: \"kubernetes.io/projected/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-kube-api-access-5xtv4\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:45.960684 kubelet[2959]: I0712 00:25:45.960643 2959 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-bpf-maps\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:45.961163 kubelet[2959]: I0712 00:25:45.960697 2959 reconciler_common.go:293] "Volume detached 
for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-hostproc\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:45.961163 kubelet[2959]: I0712 00:25:45.960720 2959 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b-lib-modules\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:25:46.214467 env[1927]: time="2025-07-12T00:25:46.214381793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qns7n,Uid:5d722c65-7198-46d5-95b8-72cf1cf6bceb,Namespace:kube-system,Attempt:0,}" Jul 12 00:25:46.246966 env[1927]: time="2025-07-12T00:25:46.246804092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:25:46.247243 env[1927]: time="2025-07-12T00:25:46.247180665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:25:46.247460 env[1927]: time="2025-07-12T00:25:46.247399906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:25:46.248218 env[1927]: time="2025-07-12T00:25:46.248123604Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5 pid=4340 runtime=io.containerd.runc.v2 Jul 12 00:25:46.257576 kubelet[2959]: I0712 00:25:46.256187 2959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf" Jul 12 00:25:46.257576 kubelet[2959]: E0712 00:25:46.256384 2959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-tdr7v" podUID="57495fba-0238-455e-b7f8-e2a643397e8c" Jul 12 00:25:46.268763 kubelet[2959]: I0712 00:25:46.267575 2959 scope.go:117] "RemoveContainer" containerID="2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b" Jul 12 00:25:46.277896 kubelet[2959]: I0712 00:25:46.277833 2959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f" Jul 12 00:25:46.278786 env[1927]: time="2025-07-12T00:25:46.278689293Z" level=info msg="RemoveContainer for \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\"" Jul 12 00:25:46.282098 kubelet[2959]: E0712 00:25:46.278538 2959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-dlx6d" podUID="f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e" Jul 12 00:25:46.285769 env[1927]: time="2025-07-12T00:25:46.285682761Z" level=info 
msg="RemoveContainer for \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\" returns successfully" Jul 12 00:25:46.286190 kubelet[2959]: I0712 00:25:46.286152 2959 scope.go:117] "RemoveContainer" containerID="bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1" Jul 12 00:25:46.314730 env[1927]: time="2025-07-12T00:25:46.310411954Z" level=info msg="RemoveContainer for \"bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1\"" Jul 12 00:25:46.351862 env[1927]: time="2025-07-12T00:25:46.351784387Z" level=info msg="RemoveContainer for \"bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1\" returns successfully" Jul 12 00:25:46.357398 kubelet[2959]: I0712 00:25:46.357344 2959 scope.go:117] "RemoveContainer" containerID="7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777" Jul 12 00:25:46.366986 env[1927]: time="2025-07-12T00:25:46.366911903Z" level=info msg="RemoveContainer for \"7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777\"" Jul 12 00:25:46.376240 env[1927]: time="2025-07-12T00:25:46.376147675Z" level=info msg="RemoveContainer for \"7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777\" returns successfully" Jul 12 00:25:46.376750 kubelet[2959]: I0712 00:25:46.376704 2959 scope.go:117] "RemoveContainer" containerID="6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16" Jul 12 00:25:46.379859 env[1927]: time="2025-07-12T00:25:46.379731991Z" level=info msg="RemoveContainer for \"6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16\"" Jul 12 00:25:46.391640 env[1927]: time="2025-07-12T00:25:46.391561548Z" level=info msg="RemoveContainer for \"6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16\" returns successfully" Jul 12 00:25:46.392368 kubelet[2959]: I0712 00:25:46.392316 2959 scope.go:117] "RemoveContainer" containerID="bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f" Jul 12 00:25:46.395069 env[1927]: 
time="2025-07-12T00:25:46.394992371Z" level=info msg="RemoveContainer for \"bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f\"" Jul 12 00:25:46.402010 env[1927]: time="2025-07-12T00:25:46.401930447Z" level=info msg="RemoveContainer for \"bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f\" returns successfully" Jul 12 00:25:46.402456 kubelet[2959]: I0712 00:25:46.402414 2959 scope.go:117] "RemoveContainer" containerID="2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b" Jul 12 00:25:46.403247 env[1927]: time="2025-07-12T00:25:46.403089927Z" level=error msg="ContainerStatus for \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\": not found" Jul 12 00:25:46.405584 kubelet[2959]: E0712 00:25:46.405499 2959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\": not found" containerID="2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b" Jul 12 00:25:46.406116 kubelet[2959]: I0712 00:25:46.405889 2959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b"} err="failed to get container status \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ac0cc75a6a39cbc7ab987ce4a4978fed22467e6de88f0c5c9b8f457ba0ebd1b\": not found" Jul 12 00:25:46.406375 kubelet[2959]: I0712 00:25:46.406318 2959 scope.go:117] "RemoveContainer" containerID="bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1" Jul 12 00:25:46.408817 env[1927]: time="2025-07-12T00:25:46.408701626Z" level=error 
msg="ContainerStatus for \"bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1\": not found" Jul 12 00:25:46.409746 kubelet[2959]: E0712 00:25:46.409261 2959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1\": not found" containerID="bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1" Jul 12 00:25:46.409746 kubelet[2959]: I0712 00:25:46.409387 2959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1"} err="failed to get container status \"bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"bca737c865a92c8498134c12115863c7e69d61b25935ae03dac5dfa94b411bd1\": not found" Jul 12 00:25:46.409746 kubelet[2959]: I0712 00:25:46.409465 2959 scope.go:117] "RemoveContainer" containerID="7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777" Jul 12 00:25:46.410183 env[1927]: time="2025-07-12T00:25:46.410062131Z" level=error msg="ContainerStatus for \"7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777\": not found" Jul 12 00:25:46.410868 kubelet[2959]: E0712 00:25:46.410501 2959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777\": not found" 
containerID="7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777" Jul 12 00:25:46.410868 kubelet[2959]: I0712 00:25:46.410620 2959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777"} err="failed to get container status \"7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777\": rpc error: code = NotFound desc = an error occurred when try to find container \"7edbaa33d7f386bc000560a1a0f0824bfd09a99923f55afae01fb3f01c8b0777\": not found" Jul 12 00:25:46.410868 kubelet[2959]: I0712 00:25:46.410720 2959 scope.go:117] "RemoveContainer" containerID="6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16" Jul 12 00:25:46.411474 env[1927]: time="2025-07-12T00:25:46.411337507Z" level=error msg="ContainerStatus for \"6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16\": not found" Jul 12 00:25:46.412479 kubelet[2959]: E0712 00:25:46.412053 2959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16\": not found" containerID="6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16" Jul 12 00:25:46.412479 kubelet[2959]: I0712 00:25:46.412126 2959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16"} err="failed to get container status \"6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c25d25f7579ec90376363fbd02c4b8924b25db5d1d370e129c1bb5dc6267e16\": not found" Jul 12 00:25:46.412479 
kubelet[2959]: I0712 00:25:46.412203 2959 scope.go:117] "RemoveContainer" containerID="bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f" Jul 12 00:25:46.412881 env[1927]: time="2025-07-12T00:25:46.412772220Z" level=error msg="ContainerStatus for \"bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f\": not found" Jul 12 00:25:46.413251 kubelet[2959]: E0712 00:25:46.413149 2959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f\": not found" containerID="bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f" Jul 12 00:25:46.413251 kubelet[2959]: I0712 00:25:46.413202 2959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f"} err="failed to get container status \"bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc15decb59f2fbad0a689be6980561ac2148269c0cc923901f58af2299f6e44f\": not found" Jul 12 00:25:46.414357 env[1927]: time="2025-07-12T00:25:46.414282665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qns7n,Uid:5d722c65-7198-46d5-95b8-72cf1cf6bceb,Namespace:kube-system,Attempt:0,} returns sandbox id \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\"" Jul 12 00:25:46.422872 env[1927]: time="2025-07-12T00:25:46.422733778Z" level=info msg="CreateContainer within sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:25:46.469088 env[1927]: 
time="2025-07-12T00:25:46.467925993Z" level=info msg="CreateContainer within sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106\"" Jul 12 00:25:46.471738 env[1927]: time="2025-07-12T00:25:46.471635254Z" level=info msg="StartContainer for \"1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106\"" Jul 12 00:25:46.584760 env[1927]: time="2025-07-12T00:25:46.584039447Z" level=info msg="StartContainer for \"1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106\" returns successfully" Jul 12 00:25:46.626123 systemd[1]: var-lib-kubelet-pods-633fd08a\x2d3dc4\x2d4e1f\x2d9851\x2d9d6e02a4c74b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5xtv4.mount: Deactivated successfully. Jul 12 00:25:46.626418 systemd[1]: var-lib-kubelet-pods-633fd08a\x2d3dc4\x2d4e1f\x2d9851\x2d9d6e02a4c74b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:25:46.626649 systemd[1]: var-lib-kubelet-pods-633fd08a\x2d3dc4\x2d4e1f\x2d9851\x2d9d6e02a4c74b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:25:46.656157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106-rootfs.mount: Deactivated successfully. 
Jul 12 00:25:46.674916 env[1927]: time="2025-07-12T00:25:46.674846746Z" level=info msg="shim disconnected" id=1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106 Jul 12 00:25:46.675164 env[1927]: time="2025-07-12T00:25:46.674922382Z" level=warning msg="cleaning up after shim disconnected" id=1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106 namespace=k8s.io Jul 12 00:25:46.675164 env[1927]: time="2025-07-12T00:25:46.674945158Z" level=info msg="cleaning up dead shim" Jul 12 00:25:46.689691 env[1927]: time="2025-07-12T00:25:46.689599788Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:25:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4426 runtime=io.containerd.runc.v2\n" Jul 12 00:25:46.908329 kubelet[2959]: I0712 00:25:46.908258 2959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="633fd08a-3dc4-4e1f-9851-9d6e02a4c74b" path="/var/lib/kubelet/pods/633fd08a-3dc4-4e1f-9851-9d6e02a4c74b/volumes" Jul 12 00:25:47.289143 env[1927]: time="2025-07-12T00:25:47.288796603Z" level=info msg="CreateContainer within sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:25:47.329410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount688981162.mount: Deactivated successfully. 
Jul 12 00:25:47.335039 env[1927]: time="2025-07-12T00:25:47.334973172Z" level=info msg="CreateContainer within sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd\"" Jul 12 00:25:47.339079 env[1927]: time="2025-07-12T00:25:47.339008761Z" level=info msg="StartContainer for \"590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd\"" Jul 12 00:25:47.459240 env[1927]: time="2025-07-12T00:25:47.454637664Z" level=info msg="StartContainer for \"590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd\" returns successfully" Jul 12 00:25:47.504555 env[1927]: time="2025-07-12T00:25:47.504492868Z" level=info msg="shim disconnected" id=590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd Jul 12 00:25:47.505066 env[1927]: time="2025-07-12T00:25:47.505031754Z" level=warning msg="cleaning up after shim disconnected" id=590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd namespace=k8s.io Jul 12 00:25:47.505292 env[1927]: time="2025-07-12T00:25:47.505242282Z" level=info msg="cleaning up dead shim" Jul 12 00:25:47.521360 env[1927]: time="2025-07-12T00:25:47.521301526Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:25:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4488 runtime=io.containerd.runc.v2\n" Jul 12 00:25:47.598324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd-rootfs.mount: Deactivated successfully. 
Jul 12 00:25:47.904963 kubelet[2959]: E0712 00:25:47.904789 2959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-dlx6d" podUID="f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e" Jul 12 00:25:47.904963 kubelet[2959]: E0712 00:25:47.904877 2959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-tdr7v" podUID="57495fba-0238-455e-b7f8-e2a643397e8c" Jul 12 00:25:48.299769 env[1927]: time="2025-07-12T00:25:48.299394357Z" level=info msg="CreateContainer within sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:25:48.341488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2058243476.mount: Deactivated successfully. 
Jul 12 00:25:48.348135 env[1927]: time="2025-07-12T00:25:48.348049404Z" level=info msg="CreateContainer within sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937\"" Jul 12 00:25:48.349820 env[1927]: time="2025-07-12T00:25:48.349738877Z" level=info msg="StartContainer for \"40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937\"" Jul 12 00:25:48.484224 env[1927]: time="2025-07-12T00:25:48.484150693Z" level=info msg="StartContainer for \"40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937\" returns successfully" Jul 12 00:25:48.528963 env[1927]: time="2025-07-12T00:25:48.528889192Z" level=info msg="shim disconnected" id=40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937 Jul 12 00:25:48.529237 env[1927]: time="2025-07-12T00:25:48.528965824Z" level=warning msg="cleaning up after shim disconnected" id=40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937 namespace=k8s.io Jul 12 00:25:48.529237 env[1927]: time="2025-07-12T00:25:48.528989236Z" level=info msg="cleaning up dead shim" Jul 12 00:25:48.544701 env[1927]: time="2025-07-12T00:25:48.544585839Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:25:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4546 runtime=io.containerd.runc.v2\n" Jul 12 00:25:48.598363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937-rootfs.mount: Deactivated successfully. 
Jul 12 00:25:49.147233 kubelet[2959]: E0712 00:25:49.147181 2959 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:25:49.310833 env[1927]: time="2025-07-12T00:25:49.310691608Z" level=info msg="CreateContainer within sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:25:49.354486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927646184.mount: Deactivated successfully. Jul 12 00:25:49.357344 env[1927]: time="2025-07-12T00:25:49.357256971Z" level=info msg="CreateContainer within sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83\"" Jul 12 00:25:49.361601 env[1927]: time="2025-07-12T00:25:49.358645459Z" level=info msg="StartContainer for \"2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83\"" Jul 12 00:25:49.517133 env[1927]: time="2025-07-12T00:25:49.516927754Z" level=info msg="StartContainer for \"2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83\" returns successfully" Jul 12 00:25:49.571298 env[1927]: time="2025-07-12T00:25:49.571198411Z" level=info msg="shim disconnected" id=2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83 Jul 12 00:25:49.571298 env[1927]: time="2025-07-12T00:25:49.571276688Z" level=warning msg="cleaning up after shim disconnected" id=2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83 namespace=k8s.io Jul 12 00:25:49.571298 env[1927]: time="2025-07-12T00:25:49.571300508Z" level=info msg="cleaning up dead shim" Jul 12 00:25:49.588235 env[1927]: time="2025-07-12T00:25:49.588160087Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:25:49Z\" level=info 
msg=\"starting signal loop\" namespace=k8s.io pid=4603 runtime=io.containerd.runc.v2\n" Jul 12 00:25:49.600313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83-rootfs.mount: Deactivated successfully. Jul 12 00:25:49.905645 kubelet[2959]: E0712 00:25:49.904883 2959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-dlx6d" podUID="f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e" Jul 12 00:25:49.906078 kubelet[2959]: E0712 00:25:49.905565 2959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-tdr7v" podUID="57495fba-0238-455e-b7f8-e2a643397e8c" Jul 12 00:25:50.322267 env[1927]: time="2025-07-12T00:25:50.321793850Z" level=info msg="CreateContainer within sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:25:50.355049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount821685809.mount: Deactivated successfully. 
Jul 12 00:25:50.369859 env[1927]: time="2025-07-12T00:25:50.369760425Z" level=info msg="CreateContainer within sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\"" Jul 12 00:25:50.372532 env[1927]: time="2025-07-12T00:25:50.372433228Z" level=info msg="StartContainer for \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\"" Jul 12 00:25:50.408777 kubelet[2959]: I0712 00:25:50.408313 2959 setters.go:600] "Node became not ready" node="ip-172-31-23-9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:25:50Z","lastTransitionTime":"2025-07-12T00:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 12 00:25:50.544479 env[1927]: time="2025-07-12T00:25:50.544377747Z" level=info msg="StartContainer for \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\" returns successfully" Jul 12 00:25:51.904193 kubelet[2959]: E0712 00:25:51.904126 2959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-dlx6d" podUID="f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e" Jul 12 00:25:51.905684 kubelet[2959]: E0712 00:25:51.905576 2959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-tdr7v" podUID="57495fba-0238-455e-b7f8-e2a643397e8c" Jul 12 00:25:52.004291 systemd[1]: 
run-containerd-runc-k8s.io-745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f-runc.r0S6f3.mount: Deactivated successfully. Jul 12 00:25:53.904809 kubelet[2959]: E0712 00:25:53.904634 2959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-dlx6d" podUID="f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e" Jul 12 00:25:53.905525 kubelet[2959]: E0712 00:25:53.905431 2959 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-tdr7v" podUID="57495fba-0238-455e-b7f8-e2a643397e8c" Jul 12 00:25:55.465887 (udev-worker)[5118]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:25:55.473653 systemd-networkd[1596]: lxc_health: Link UP Jul 12 00:25:55.485805 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 12 00:25:55.483620 systemd-networkd[1596]: lxc_health: Gained carrier Jul 12 00:25:55.487485 (udev-worker)[5120]: Network interface NamePolicy= disabled on kernel command line. 
Jul 12 00:25:55.908203 env[1927]: time="2025-07-12T00:25:55.905824053Z" level=info msg="StopPodSandbox for \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\"" Jul 12 00:25:55.908203 env[1927]: time="2025-07-12T00:25:55.905830414Z" level=info msg="StopPodSandbox for \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\"" Jul 12 00:25:55.968058 env[1927]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Jul 12 00:25:55.973636 env[1927]: time="2025-07-12T00:25:55.971819905Z" level=info msg="TearDown network for sandbox \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\" successfully" Jul 12 00:25:55.973636 env[1927]: time="2025-07-12T00:25:55.971882649Z" level=info msg="StopPodSandbox for \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\" returns successfully" Jul 12 00:25:55.972431 systemd[1]: run-netns-cni\x2d50ea8409\x2d75ef\x2dae19\x2d0c39\x2da3e6da804a44.mount: Deactivated successfully. Jul 12 00:25:55.984412 env[1927]: time="2025-07-12T00:25:55.984325088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tdr7v,Uid:57495fba-0238-455e-b7f8-e2a643397e8c,Namespace:kube-system,Attempt:1,}" Jul 12 00:25:55.997411 env[1927]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Jul 12 00:25:56.001633 systemd[1]: run-netns-cni\x2dd44963e6\x2db05f\x2dbe94\x2dd3bc\x2d90a7bf9ce00c.mount: Deactivated successfully. 
Jul 12 00:25:56.003539 env[1927]: time="2025-07-12T00:25:56.003444416Z" level=info msg="TearDown network for sandbox \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\" successfully" Jul 12 00:25:56.003847 env[1927]: time="2025-07-12T00:25:56.003785139Z" level=info msg="StopPodSandbox for \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\" returns successfully" Jul 12 00:25:56.004933 env[1927]: time="2025-07-12T00:25:56.004870329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dlx6d,Uid:f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e,Namespace:kube-system,Attempt:1,}" Jul 12 00:25:56.146839 (udev-worker)[5130]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:25:56.174811 kernel: eth0: renamed from tmpe4c0b Jul 12 00:25:56.177041 systemd-networkd[1596]: lxce3c4b140f66e: Link UP Jul 12 00:25:56.196123 systemd-networkd[1596]: lxcecb026e4c8ff: Link UP Jul 12 00:25:56.208829 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce3c4b140f66e: link becomes ready Jul 12 00:25:56.207297 systemd-networkd[1596]: lxce3c4b140f66e: Gained carrier Jul 12 00:25:56.223699 kernel: eth0: renamed from tmp72b42 Jul 12 00:25:56.240887 (udev-worker)[5131]: Network interface NamePolicy= disabled on kernel command line. 
Jul 12 00:25:56.244012 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcecb026e4c8ff: link becomes ready Jul 12 00:25:56.244920 systemd-networkd[1596]: lxcecb026e4c8ff: Gained carrier Jul 12 00:25:56.293442 kubelet[2959]: I0712 00:25:56.293330 2959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qns7n" podStartSLOduration=11.293304149 podStartE2EDuration="11.293304149s" podCreationTimestamp="2025-07-12 00:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:25:51.343622178 +0000 UTC m=+42.692155570" watchObservedRunningTime="2025-07-12 00:25:56.293304149 +0000 UTC m=+47.641837517" Jul 12 00:25:57.472868 systemd-networkd[1596]: lxc_health: Gained IPv6LL Jul 12 00:25:57.473338 systemd-networkd[1596]: lxce3c4b140f66e: Gained IPv6LL Jul 12 00:25:58.303882 systemd-networkd[1596]: lxcecb026e4c8ff: Gained IPv6LL Jul 12 00:25:58.923120 systemd[1]: run-containerd-runc-k8s.io-745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f-runc.Oj29kZ.mount: Deactivated successfully. Jul 12 00:26:01.207084 systemd[1]: run-containerd-runc-k8s.io-745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f-runc.LTfffU.mount: Deactivated successfully. Jul 12 00:26:01.610206 sudo[2198]: pam_unix(sudo:session): session closed for user root Jul 12 00:26:01.633640 sshd[2194]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:01.639587 systemd-logind[1914]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:26:01.643543 systemd[1]: sshd@4-172.31.23.9:22-147.75.109.163:49970.service: Deactivated successfully. Jul 12 00:26:01.645249 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:26:01.648854 systemd-logind[1914]: Removed session 5. Jul 12 00:26:05.137168 env[1927]: time="2025-07-12T00:26:05.137066029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:05.138022 env[1927]: time="2025-07-12T00:26:05.137890258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:05.138213 env[1927]: time="2025-07-12T00:26:05.138165625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:05.139851 env[1927]: time="2025-07-12T00:26:05.138630215Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4c0b0c637e20d1d0d50cbd5d6a3aa6bc5920c6b154f7bf69d4ee0949537bc43 pid=5297 runtime=io.containerd.runc.v2 Jul 12 00:26:05.190781 env[1927]: time="2025-07-12T00:26:05.181616971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:05.190781 env[1927]: time="2025-07-12T00:26:05.181775183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:05.190781 env[1927]: time="2025-07-12T00:26:05.181802666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:05.190781 env[1927]: time="2025-07-12T00:26:05.182106056Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72b428466006ac4f56831a91b7323ff3909293ab33013e65a5781a77bca9e800 pid=5316 runtime=io.containerd.runc.v2 Jul 12 00:26:05.253298 systemd[1]: run-containerd-runc-k8s.io-e4c0b0c637e20d1d0d50cbd5d6a3aa6bc5920c6b154f7bf69d4ee0949537bc43-runc.YP2Wt8.mount: Deactivated successfully. 
Jul 12 00:26:05.446908 env[1927]: time="2025-07-12T00:26:05.445629406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tdr7v,Uid:57495fba-0238-455e-b7f8-e2a643397e8c,Namespace:kube-system,Attempt:1,} returns sandbox id \"e4c0b0c637e20d1d0d50cbd5d6a3aa6bc5920c6b154f7bf69d4ee0949537bc43\"" Jul 12 00:26:05.454141 env[1927]: time="2025-07-12T00:26:05.453187728Z" level=info msg="CreateContainer within sandbox \"e4c0b0c637e20d1d0d50cbd5d6a3aa6bc5920c6b154f7bf69d4ee0949537bc43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:26:05.468357 env[1927]: time="2025-07-12T00:26:05.468293655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dlx6d,Uid:f25c9a6b-c0f8-4d6c-a0e5-df2cd3b0756e,Namespace:kube-system,Attempt:1,} returns sandbox id \"72b428466006ac4f56831a91b7323ff3909293ab33013e65a5781a77bca9e800\"" Jul 12 00:26:05.478443 env[1927]: time="2025-07-12T00:26:05.478237693Z" level=info msg="CreateContainer within sandbox \"72b428466006ac4f56831a91b7323ff3909293ab33013e65a5781a77bca9e800\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:26:05.487147 env[1927]: time="2025-07-12T00:26:05.487085711Z" level=info msg="CreateContainer within sandbox \"e4c0b0c637e20d1d0d50cbd5d6a3aa6bc5920c6b154f7bf69d4ee0949537bc43\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6fcd137aa9e9df6be15971e5f680cb126753a23e04b0a1eddf8a2de8c20f2385\"" Jul 12 00:26:05.491601 env[1927]: time="2025-07-12T00:26:05.489886019Z" level=info msg="StartContainer for \"6fcd137aa9e9df6be15971e5f680cb126753a23e04b0a1eddf8a2de8c20f2385\"" Jul 12 00:26:05.511978 env[1927]: time="2025-07-12T00:26:05.511889784Z" level=info msg="CreateContainer within sandbox \"72b428466006ac4f56831a91b7323ff3909293ab33013e65a5781a77bca9e800\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e00e84c9d6fee85432e3d9e9c50814b1667c172e51fd71871e01f6d322ea6b0\"" Jul 12 00:26:05.515049 env[1927]: 
time="2025-07-12T00:26:05.514978300Z" level=info msg="StartContainer for \"8e00e84c9d6fee85432e3d9e9c50814b1667c172e51fd71871e01f6d322ea6b0\"" Jul 12 00:26:05.630618 env[1927]: time="2025-07-12T00:26:05.630508031Z" level=info msg="StartContainer for \"6fcd137aa9e9df6be15971e5f680cb126753a23e04b0a1eddf8a2de8c20f2385\" returns successfully" Jul 12 00:26:05.681435 env[1927]: time="2025-07-12T00:26:05.681337009Z" level=info msg="StartContainer for \"8e00e84c9d6fee85432e3d9e9c50814b1667c172e51fd71871e01f6d322ea6b0\" returns successfully" Jul 12 00:26:06.154100 systemd[1]: run-containerd-runc-k8s.io-72b428466006ac4f56831a91b7323ff3909293ab33013e65a5781a77bca9e800-runc.ja8TTA.mount: Deactivated successfully. Jul 12 00:26:06.430372 kubelet[2959]: I0712 00:26:06.430223 2959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tdr7v" podStartSLOduration=54.430199703 podStartE2EDuration="54.430199703s" podCreationTimestamp="2025-07-12 00:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:26:06.407897312 +0000 UTC m=+57.756430680" watchObservedRunningTime="2025-07-12 00:26:06.430199703 +0000 UTC m=+57.778733071" Jul 12 00:26:06.497764 kubelet[2959]: I0712 00:26:06.497691 2959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dlx6d" podStartSLOduration=54.497644685 podStartE2EDuration="54.497644685s" podCreationTimestamp="2025-07-12 00:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:26:06.461585084 +0000 UTC m=+57.810118488" watchObservedRunningTime="2025-07-12 00:26:06.497644685 +0000 UTC m=+57.846178053" Jul 12 00:26:08.959899 env[1927]: time="2025-07-12T00:26:08.959564606Z" level=info msg="StopPodSandbox for 
\"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\"" Jul 12 00:26:08.959899 env[1927]: time="2025-07-12T00:26:08.959740218Z" level=info msg="TearDown network for sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" successfully" Jul 12 00:26:08.959899 env[1927]: time="2025-07-12T00:26:08.959797475Z" level=info msg="StopPodSandbox for \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" returns successfully" Jul 12 00:26:08.963706 env[1927]: time="2025-07-12T00:26:08.961288803Z" level=info msg="RemovePodSandbox for \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\"" Jul 12 00:26:08.963706 env[1927]: time="2025-07-12T00:26:08.961345772Z" level=info msg="Forcibly stopping sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\"" Jul 12 00:26:08.963706 env[1927]: time="2025-07-12T00:26:08.961485573Z" level=info msg="TearDown network for sandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" successfully" Jul 12 00:26:08.970838 env[1927]: time="2025-07-12T00:26:08.970777588Z" level=info msg="RemovePodSandbox \"70d695bdafffde2a185f8cc751050772475069f1df860c3f877d3a9010d6b870\" returns successfully" Jul 12 00:26:08.971770 env[1927]: time="2025-07-12T00:26:08.971723574Z" level=info msg="StopPodSandbox for \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\"" Jul 12 00:26:08.997500 env[1927]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Jul 12 00:26:08.997500 env[1927]: level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni Jul 12 00:26:08.997911 env[1927]: time="2025-07-12T00:26:08.997860335Z" level=info msg="TearDown network for sandbox \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\" successfully" Jul 12 00:26:08.998032 
env[1927]: time="2025-07-12T00:26:08.997999800Z" level=info msg="StopPodSandbox for \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\" returns successfully" Jul 12 00:26:08.998806 env[1927]: time="2025-07-12T00:26:08.998758101Z" level=info msg="RemovePodSandbox for \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\"" Jul 12 00:26:08.999029 env[1927]: time="2025-07-12T00:26:08.998970112Z" level=info msg="Forcibly stopping sandbox \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\"" Jul 12 00:26:09.025020 env[1927]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Jul 12 00:26:09.025020 env[1927]: level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni Jul 12 00:26:09.025520 env[1927]: time="2025-07-12T00:26:09.025468419Z" level=info msg="TearDown network for sandbox \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\" successfully" Jul 12 00:26:09.031870 env[1927]: time="2025-07-12T00:26:09.031770011Z" level=info msg="RemovePodSandbox \"276ee22b70ca43e0791c46232cb1ac5289e2ab752635f6ee0a88e6737197b07f\" returns successfully" Jul 12 00:26:09.032860 env[1927]: time="2025-07-12T00:26:09.032815975Z" level=info msg="StopPodSandbox for \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\"" Jul 12 00:26:09.066924 env[1927]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Jul 12 00:26:09.066924 env[1927]: level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni Jul 12 00:26:09.068268 env[1927]: time="2025-07-12T00:26:09.068212012Z" level=info msg="TearDown network for sandbox 
\"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\" successfully" Jul 12 00:26:09.068400 env[1927]: time="2025-07-12T00:26:09.068365134Z" level=info msg="StopPodSandbox for \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\" returns successfully" Jul 12 00:26:09.069279 env[1927]: time="2025-07-12T00:26:09.069233783Z" level=info msg="RemovePodSandbox for \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\"" Jul 12 00:26:09.069707 env[1927]: time="2025-07-12T00:26:09.069627214Z" level=info msg="Forcibly stopping sandbox \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\"" Jul 12 00:26:09.095871 env[1927]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Jul 12 00:26:09.095871 env[1927]: level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni Jul 12 00:26:09.096201 env[1927]: time="2025-07-12T00:26:09.096151223Z" level=info msg="TearDown network for sandbox \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\" successfully" Jul 12 00:26:09.102321 env[1927]: time="2025-07-12T00:26:09.102221410Z" level=info msg="RemovePodSandbox \"f739eae31379a3cdf7cff5645258b15138c161b57523770311613cd9ec75dbcf\" returns successfully" Jul 12 00:26:43.068426 systemd[1]: Started sshd@5-172.31.23.9:22-147.75.109.163:46040.service. Jul 12 00:26:43.247350 sshd[5498]: Accepted publickey for core from 147.75.109.163 port 46040 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:43.250595 sshd[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:43.260277 systemd[1]: Started session-6.scope. Jul 12 00:26:43.261938 systemd-logind[1914]: New session 6 of user core. 
Jul 12 00:26:43.526325 sshd[5498]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:43.532172 systemd[1]: sshd@5-172.31.23.9:22-147.75.109.163:46040.service: Deactivated successfully. Jul 12 00:26:43.533701 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:26:43.533769 systemd-logind[1914]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:26:43.536446 systemd-logind[1914]: Removed session 6. Jul 12 00:26:48.553690 systemd[1]: Started sshd@6-172.31.23.9:22-147.75.109.163:39200.service. Jul 12 00:26:48.732275 sshd[5514]: Accepted publickey for core from 147.75.109.163 port 39200 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:48.735440 sshd[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:48.745167 systemd[1]: Started session-7.scope. Jul 12 00:26:48.745816 systemd-logind[1914]: New session 7 of user core. Jul 12 00:26:49.001364 sshd[5514]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:49.006497 systemd[1]: sshd@6-172.31.23.9:22-147.75.109.163:39200.service: Deactivated successfully. Jul 12 00:26:49.008835 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:26:49.009761 systemd-logind[1914]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:26:49.012251 systemd-logind[1914]: Removed session 7. Jul 12 00:26:54.025156 systemd[1]: Started sshd@7-172.31.23.9:22-147.75.109.163:39216.service. Jul 12 00:26:54.198251 sshd[5528]: Accepted publickey for core from 147.75.109.163 port 39216 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:54.201505 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:54.210851 systemd[1]: Started session-8.scope. Jul 12 00:26:54.211488 systemd-logind[1914]: New session 8 of user core. 
Jul 12 00:26:54.468495 sshd[5528]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:54.474267 systemd[1]: sshd@7-172.31.23.9:22-147.75.109.163:39216.service: Deactivated successfully. Jul 12 00:26:54.476590 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:26:54.478577 systemd-logind[1914]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:26:54.481432 systemd-logind[1914]: Removed session 8. Jul 12 00:26:59.496778 systemd[1]: Started sshd@8-172.31.23.9:22-147.75.109.163:56772.service. Jul 12 00:26:59.677859 sshd[5543]: Accepted publickey for core from 147.75.109.163 port 56772 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:59.681405 sshd[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:59.691010 systemd[1]: Started session-9.scope. Jul 12 00:26:59.691953 systemd-logind[1914]: New session 9 of user core. Jul 12 00:26:59.967908 sshd[5543]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:59.973975 systemd[1]: sshd@8-172.31.23.9:22-147.75.109.163:56772.service: Deactivated successfully. Jul 12 00:26:59.976769 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:26:59.978362 systemd-logind[1914]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:26:59.981560 systemd-logind[1914]: Removed session 9. Jul 12 00:27:04.992377 systemd[1]: Started sshd@9-172.31.23.9:22-147.75.109.163:56782.service. Jul 12 00:27:05.166331 sshd[5556]: Accepted publickey for core from 147.75.109.163 port 56782 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:05.169516 sshd[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:05.177800 systemd-logind[1914]: New session 10 of user core. Jul 12 00:27:05.179716 systemd[1]: Started session-10.scope. 
Jul 12 00:27:05.438015 sshd[5556]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:05.442919 systemd[1]: sshd@9-172.31.23.9:22-147.75.109.163:56782.service: Deactivated successfully. Jul 12 00:27:05.444962 systemd-logind[1914]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:27:05.445051 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:27:05.448360 systemd-logind[1914]: Removed session 10. Jul 12 00:27:05.462585 systemd[1]: Started sshd@10-172.31.23.9:22-147.75.109.163:56786.service. Jul 12 00:27:05.633496 sshd[5570]: Accepted publickey for core from 147.75.109.163 port 56786 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:05.636150 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:05.645892 systemd[1]: Started session-11.scope. Jul 12 00:27:05.646490 systemd-logind[1914]: New session 11 of user core. Jul 12 00:27:06.021347 sshd[5570]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:06.027257 systemd[1]: sshd@10-172.31.23.9:22-147.75.109.163:56786.service: Deactivated successfully. Jul 12 00:27:06.029266 systemd-logind[1914]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:27:06.030512 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:27:06.033106 systemd-logind[1914]: Removed session 11. Jul 12 00:27:06.044145 systemd[1]: Started sshd@11-172.31.23.9:22-147.75.109.163:60372.service. Jul 12 00:27:06.228497 sshd[5580]: Accepted publickey for core from 147.75.109.163 port 60372 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:06.231716 sshd[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:06.240774 systemd-logind[1914]: New session 12 of user core. Jul 12 00:27:06.241649 systemd[1]: Started session-12.scope. 
Jul 12 00:27:06.518028 sshd[5580]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:06.523437 systemd-logind[1914]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:27:06.523843 systemd[1]: sshd@11-172.31.23.9:22-147.75.109.163:60372.service: Deactivated successfully. Jul 12 00:27:06.526128 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:27:06.527816 systemd-logind[1914]: Removed session 12. Jul 12 00:27:11.544769 systemd[1]: Started sshd@12-172.31.23.9:22-147.75.109.163:60384.service. Jul 12 00:27:11.721513 sshd[5595]: Accepted publickey for core from 147.75.109.163 port 60384 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:11.728227 sshd[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:11.738164 systemd[1]: Started session-13.scope. Jul 12 00:27:11.739170 systemd-logind[1914]: New session 13 of user core. Jul 12 00:27:12.000541 sshd[5595]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:12.006048 systemd[1]: sshd@12-172.31.23.9:22-147.75.109.163:60384.service: Deactivated successfully. Jul 12 00:27:12.008445 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:27:12.009001 systemd-logind[1914]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:27:12.011645 systemd-logind[1914]: Removed session 13. 
Jul 12 00:27:14.931814 update_engine[1915]: I0712 00:27:14.931627 1915 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 12 00:27:14.931814 update_engine[1915]: I0712 00:27:14.931726 1915 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 12 00:27:14.932474 update_engine[1915]: I0712 00:27:14.932236 1915 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 12 00:27:14.933130 update_engine[1915]: I0712 00:27:14.933076 1915 omaha_request_params.cc:62] Current group set to lts Jul 12 00:27:14.933509 update_engine[1915]: I0712 00:27:14.933332 1915 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 12 00:27:14.933509 update_engine[1915]: I0712 00:27:14.933360 1915 update_attempter.cc:643] Scheduling an action processor start. Jul 12 00:27:14.933509 update_engine[1915]: I0712 00:27:14.933391 1915 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 12 00:27:14.933509 update_engine[1915]: I0712 00:27:14.933439 1915 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 12 00:27:14.934569 update_engine[1915]: I0712 00:27:14.934517 1915 omaha_request_action.cc:270] Posting an Omaha request to disabled Jul 12 00:27:14.934569 update_engine[1915]: I0712 00:27:14.934553 1915 omaha_request_action.cc:271] Request: Jul 12 00:27:14.934569 update_engine[1915]: Jul 12 00:27:14.934569 update_engine[1915]: Jul 12 00:27:14.934569 update_engine[1915]: Jul 12 00:27:14.934569 update_engine[1915]: Jul 12 00:27:14.934569 update_engine[1915]: Jul 12 00:27:14.934569 update_engine[1915]: Jul 12 00:27:14.934569 update_engine[1915]: Jul 12 00:27:14.934569 update_engine[1915]: Jul 12 00:27:14.934569 update_engine[1915]: I0712 00:27:14.934567 1915 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:27:14.935624 locksmithd[1979]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" 
NewVersion=0.0.0 NewSize=0 Jul 12 00:27:14.941444 update_engine[1915]: I0712 00:27:14.941385 1915 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:27:14.941808 update_engine[1915]: I0712 00:27:14.941772 1915 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:27:14.955950 update_engine[1915]: E0712 00:27:14.955895 1915 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:27:14.956100 update_engine[1915]: I0712 00:27:14.956045 1915 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 12 00:27:17.026940 systemd[1]: Started sshd@13-172.31.23.9:22-147.75.109.163:36084.service. Jul 12 00:27:17.201188 sshd[5610]: Accepted publickey for core from 147.75.109.163 port 36084 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:17.204336 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:17.212733 systemd-logind[1914]: New session 14 of user core. Jul 12 00:27:17.213401 systemd[1]: Started session-14.scope. Jul 12 00:27:17.464829 sshd[5610]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:17.469880 systemd-logind[1914]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:27:17.470270 systemd[1]: sshd@13-172.31.23.9:22-147.75.109.163:36084.service: Deactivated successfully. Jul 12 00:27:17.472546 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:27:17.475290 systemd-logind[1914]: Removed session 14. Jul 12 00:27:22.491300 systemd[1]: Started sshd@14-172.31.23.9:22-147.75.109.163:36096.service. Jul 12 00:27:22.665561 sshd[5623]: Accepted publickey for core from 147.75.109.163 port 36096 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:22.669527 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:22.678629 systemd[1]: Started session-15.scope. 
Jul 12 00:27:22.679087 systemd-logind[1914]: New session 15 of user core. Jul 12 00:27:22.925025 sshd[5623]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:22.930808 systemd[1]: sshd@14-172.31.23.9:22-147.75.109.163:36096.service: Deactivated successfully. Jul 12 00:27:22.932428 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:27:22.933606 systemd-logind[1914]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:27:22.936522 systemd-logind[1914]: Removed session 15. Jul 12 00:27:22.951019 systemd[1]: Started sshd@15-172.31.23.9:22-147.75.109.163:36104.service. Jul 12 00:27:23.123454 sshd[5636]: Accepted publickey for core from 147.75.109.163 port 36104 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:23.126309 sshd[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:23.134960 systemd-logind[1914]: New session 16 of user core. Jul 12 00:27:23.135952 systemd[1]: Started session-16.scope. Jul 12 00:27:23.465967 sshd[5636]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:23.471122 systemd-logind[1914]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:27:23.471713 systemd[1]: sshd@15-172.31.23.9:22-147.75.109.163:36104.service: Deactivated successfully. Jul 12 00:27:23.473822 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:27:23.475385 systemd-logind[1914]: Removed session 16. Jul 12 00:27:23.490795 systemd[1]: Started sshd@16-172.31.23.9:22-147.75.109.163:36108.service. Jul 12 00:27:23.663533 sshd[5646]: Accepted publickey for core from 147.75.109.163 port 36108 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:23.666415 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:23.676443 systemd-logind[1914]: New session 17 of user core. Jul 12 00:27:23.677064 systemd[1]: Started session-17.scope. 
Jul 12 00:27:24.929782 update_engine[1915]: I0712 00:27:24.929708 1915 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:27:24.930417 update_engine[1915]: I0712 00:27:24.930034 1915 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:27:24.930417 update_engine[1915]: I0712 00:27:24.930274 1915 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:27:24.931575 update_engine[1915]: E0712 00:27:24.931510 1915 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:27:24.931732 update_engine[1915]: I0712 00:27:24.931684 1915 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 12 00:27:26.283506 sshd[5646]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:26.290710 systemd[1]: sshd@16-172.31.23.9:22-147.75.109.163:36108.service: Deactivated successfully. Jul 12 00:27:26.292010 systemd-logind[1914]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:27:26.292972 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:27:26.301131 systemd-logind[1914]: Removed session 17. Jul 12 00:27:26.324341 systemd[1]: Started sshd@17-172.31.23.9:22-147.75.109.163:34282.service. Jul 12 00:27:26.510244 sshd[5663]: Accepted publickey for core from 147.75.109.163 port 34282 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:26.513328 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:26.528853 systemd[1]: Started session-18.scope. Jul 12 00:27:26.529569 systemd-logind[1914]: New session 18 of user core. Jul 12 00:27:27.021722 sshd[5663]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:27.027452 systemd[1]: sshd@17-172.31.23.9:22-147.75.109.163:34282.service: Deactivated successfully. Jul 12 00:27:27.030107 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:27:27.030564 systemd-logind[1914]: Session 18 logged out. 
Waiting for processes to exit. Jul 12 00:27:27.033442 systemd-logind[1914]: Removed session 18. Jul 12 00:27:27.047162 systemd[1]: Started sshd@18-172.31.23.9:22-147.75.109.163:34294.service. Jul 12 00:27:27.223892 sshd[5676]: Accepted publickey for core from 147.75.109.163 port 34294 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:27.228205 sshd[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:27.237420 systemd[1]: Started session-19.scope. Jul 12 00:27:27.238425 systemd-logind[1914]: New session 19 of user core. Jul 12 00:27:27.484845 sshd[5676]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:27.490031 systemd-logind[1914]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:27:27.490394 systemd[1]: sshd@18-172.31.23.9:22-147.75.109.163:34294.service: Deactivated successfully. Jul 12 00:27:27.493099 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:27:27.494137 systemd-logind[1914]: Removed session 19. Jul 12 00:27:32.511518 systemd[1]: Started sshd@19-172.31.23.9:22-147.75.109.163:34306.service. Jul 12 00:27:32.682436 sshd[5689]: Accepted publickey for core from 147.75.109.163 port 34306 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:32.685113 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:32.692763 systemd-logind[1914]: New session 20 of user core. Jul 12 00:27:32.694449 systemd[1]: Started session-20.scope. Jul 12 00:27:32.941007 sshd[5689]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:32.946086 systemd-logind[1914]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:27:32.946624 systemd[1]: sshd@19-172.31.23.9:22-147.75.109.163:34306.service: Deactivated successfully. Jul 12 00:27:32.948723 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:27:32.950168 systemd-logind[1914]: Removed session 20. 
Jul 12 00:27:34.931611 update_engine[1915]: I0712 00:27:34.931047 1915 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:27:34.931611 update_engine[1915]: I0712 00:27:34.931344 1915 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:27:34.931611 update_engine[1915]: I0712 00:27:34.931594 1915 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:27:34.932521 update_engine[1915]: E0712 00:27:34.932029 1915 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:27:34.932521 update_engine[1915]: I0712 00:27:34.932144 1915 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 12 00:27:37.968941 systemd[1]: Started sshd@20-172.31.23.9:22-147.75.109.163:59462.service. Jul 12 00:27:38.143759 sshd[5705]: Accepted publickey for core from 147.75.109.163 port 59462 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:38.146320 sshd[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:38.155545 systemd[1]: Started session-21.scope. Jul 12 00:27:38.156740 systemd-logind[1914]: New session 21 of user core. Jul 12 00:27:38.406874 sshd[5705]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:38.412046 systemd[1]: sshd@20-172.31.23.9:22-147.75.109.163:59462.service: Deactivated successfully. Jul 12 00:27:38.414858 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 00:27:38.415918 systemd-logind[1914]: Session 21 logged out. Waiting for processes to exit. Jul 12 00:27:38.419424 systemd-logind[1914]: Removed session 21. Jul 12 00:27:43.433176 systemd[1]: Started sshd@21-172.31.23.9:22-147.75.109.163:59468.service. 
Jul 12 00:27:43.607800 sshd[5718]: Accepted publickey for core from 147.75.109.163 port 59468 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:43.610915 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:43.618719 systemd-logind[1914]: New session 22 of user core. Jul 12 00:27:43.620163 systemd[1]: Started session-22.scope. Jul 12 00:27:43.875445 sshd[5718]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:43.882202 systemd[1]: sshd@21-172.31.23.9:22-147.75.109.163:59468.service: Deactivated successfully. Jul 12 00:27:43.884408 systemd-logind[1914]: Session 22 logged out. Waiting for processes to exit. Jul 12 00:27:43.884564 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 00:27:43.887404 systemd-logind[1914]: Removed session 22. Jul 12 00:27:44.930695 update_engine[1915]: I0712 00:27:44.930163 1915 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:27:44.930695 update_engine[1915]: I0712 00:27:44.930476 1915 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:27:44.931496 update_engine[1915]: I0712 00:27:44.931439 1915 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:27:44.932512 update_engine[1915]: E0712 00:27:44.931865 1915 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:27:44.932512 update_engine[1915]: I0712 00:27:44.931984 1915 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 12 00:27:44.932512 update_engine[1915]: I0712 00:27:44.931998 1915 omaha_request_action.cc:621] Omaha request response: Jul 12 00:27:44.932512 update_engine[1915]: E0712 00:27:44.932113 1915 omaha_request_action.cc:640] Omaha request network transfer failed. Jul 12 00:27:44.932512 update_engine[1915]: I0712 00:27:44.932137 1915 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. 
Aborting processing. Jul 12 00:27:44.932512 update_engine[1915]: I0712 00:27:44.932147 1915 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 12 00:27:44.932512 update_engine[1915]: I0712 00:27:44.932155 1915 update_attempter.cc:306] Processing Done. Jul 12 00:27:44.932512 update_engine[1915]: E0712 00:27:44.932174 1915 update_attempter.cc:619] Update failed. Jul 12 00:27:44.932512 update_engine[1915]: I0712 00:27:44.932184 1915 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 12 00:27:44.932512 update_engine[1915]: I0712 00:27:44.932193 1915 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 12 00:27:44.932512 update_engine[1915]: I0712 00:27:44.932204 1915 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 12 00:27:44.932512 update_engine[1915]: I0712 00:27:44.932309 1915 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 12 00:27:44.932512 update_engine[1915]: I0712 00:27:44.932344 1915 omaha_request_action.cc:270] Posting an Omaha request to disabled Jul 12 00:27:44.932512 update_engine[1915]: I0712 00:27:44.932354 1915 omaha_request_action.cc:271] Request: Jul 12 00:27:44.932512 update_engine[1915]: Jul 12 00:27:44.932512 update_engine[1915]: Jul 12 00:27:44.933709 update_engine[1915]: Jul 12 00:27:44.933709 update_engine[1915]: Jul 12 00:27:44.933709 update_engine[1915]: Jul 12 00:27:44.933709 update_engine[1915]: Jul 12 00:27:44.933709 update_engine[1915]: I0712 00:27:44.932364 1915 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:27:44.933709 update_engine[1915]: I0712 00:27:44.932618 1915 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:27:44.933709 update_engine[1915]: I0712 00:27:44.932875 1915 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 12 00:27:44.933709 update_engine[1915]: E0712 00:27:44.933151 1915 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:27:44.933709 update_engine[1915]: I0712 00:27:44.933259 1915 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 12 00:27:44.933709 update_engine[1915]: I0712 00:27:44.933276 1915 omaha_request_action.cc:621] Omaha request response: Jul 12 00:27:44.933709 update_engine[1915]: I0712 00:27:44.933286 1915 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 12 00:27:44.933709 update_engine[1915]: I0712 00:27:44.933295 1915 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 12 00:27:44.933709 update_engine[1915]: I0712 00:27:44.933303 1915 update_attempter.cc:306] Processing Done. Jul 12 00:27:44.933709 update_engine[1915]: I0712 00:27:44.933313 1915 update_attempter.cc:310] Error event sent. Jul 12 00:27:44.933709 update_engine[1915]: I0712 00:27:44.933326 1915 update_check_scheduler.cc:74] Next update check in 42m46s Jul 12 00:27:44.934470 locksmithd[1979]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 12 00:27:44.934470 locksmithd[1979]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 12 00:27:48.901341 systemd[1]: Started sshd@22-172.31.23.9:22-147.75.109.163:59078.service. Jul 12 00:27:49.071390 sshd[5733]: Accepted publickey for core from 147.75.109.163 port 59078 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:49.074385 sshd[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:49.083733 systemd[1]: Started session-23.scope. Jul 12 00:27:49.084592 systemd-logind[1914]: New session 23 of user core. 
Jul 12 00:27:49.326005 sshd[5733]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:49.332444 systemd[1]: sshd@22-172.31.23.9:22-147.75.109.163:59078.service: Deactivated successfully. Jul 12 00:27:49.333997 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 00:27:49.336520 systemd-logind[1914]: Session 23 logged out. Waiting for processes to exit. Jul 12 00:27:49.339683 systemd-logind[1914]: Removed session 23. Jul 12 00:27:49.353145 systemd[1]: Started sshd@23-172.31.23.9:22-147.75.109.163:59086.service. Jul 12 00:27:49.533348 sshd[5746]: Accepted publickey for core from 147.75.109.163 port 59086 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:49.536434 sshd[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:49.546524 systemd-logind[1914]: New session 24 of user core. Jul 12 00:27:49.548014 systemd[1]: Started session-24.scope. Jul 12 00:27:51.612261 env[1927]: time="2025-07-12T00:27:51.612203081Z" level=info msg="StopContainer for \"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\" with timeout 30 (s)" Jul 12 00:27:51.613683 env[1927]: time="2025-07-12T00:27:51.613593186Z" level=info msg="Stop container \"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\" with signal terminated" Jul 12 00:27:51.648934 systemd[1]: run-containerd-runc-k8s.io-745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f-runc.00uChh.mount: Deactivated successfully. Jul 12 00:27:51.698673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa-rootfs.mount: Deactivated successfully. 
Jul 12 00:27:51.724958 env[1927]: time="2025-07-12T00:27:51.724895631Z" level=info msg="shim disconnected" id=cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa Jul 12 00:27:51.725428 env[1927]: time="2025-07-12T00:27:51.725382036Z" level=warning msg="cleaning up after shim disconnected" id=cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa namespace=k8s.io Jul 12 00:27:51.725578 env[1927]: time="2025-07-12T00:27:51.725547543Z" level=info msg="cleaning up dead shim" Jul 12 00:27:51.731179 env[1927]: time="2025-07-12T00:27:51.731098614Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:27:51.745523 env[1927]: time="2025-07-12T00:27:51.745473321Z" level=info msg="StopContainer for \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\" with timeout 2 (s)" Jul 12 00:27:51.746236 env[1927]: time="2025-07-12T00:27:51.746197582Z" level=info msg="Stop container \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\" with signal terminated" Jul 12 00:27:51.749187 env[1927]: time="2025-07-12T00:27:51.749124830Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:27:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5791 runtime=io.containerd.runc.v2\n" Jul 12 00:27:51.754015 env[1927]: time="2025-07-12T00:27:51.753949084Z" level=info msg="StopContainer for \"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\" returns successfully" Jul 12 00:27:51.756856 env[1927]: time="2025-07-12T00:27:51.756800562Z" level=info msg="StopPodSandbox for \"f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5\"" Jul 12 00:27:51.757593 env[1927]: time="2025-07-12T00:27:51.757108260Z" level=info msg="Container to stop 
\"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:27:51.763072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5-shm.mount: Deactivated successfully. Jul 12 00:27:51.766499 systemd-networkd[1596]: lxc_health: Link DOWN Jul 12 00:27:51.766511 systemd-networkd[1596]: lxc_health: Lost carrier Jul 12 00:27:51.869007 env[1927]: time="2025-07-12T00:27:51.868829057Z" level=info msg="shim disconnected" id=f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5 Jul 12 00:27:51.869007 env[1927]: time="2025-07-12T00:27:51.868901874Z" level=warning msg="cleaning up after shim disconnected" id=f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5 namespace=k8s.io Jul 12 00:27:51.869007 env[1927]: time="2025-07-12T00:27:51.868925046Z" level=info msg="cleaning up dead shim" Jul 12 00:27:51.870102 env[1927]: time="2025-07-12T00:27:51.870036302Z" level=info msg="shim disconnected" id=745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f Jul 12 00:27:51.870252 env[1927]: time="2025-07-12T00:27:51.870104931Z" level=warning msg="cleaning up after shim disconnected" id=745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f namespace=k8s.io Jul 12 00:27:51.870252 env[1927]: time="2025-07-12T00:27:51.870126856Z" level=info msg="cleaning up dead shim" Jul 12 00:27:51.888001 env[1927]: time="2025-07-12T00:27:51.887925920Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:27:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5850 runtime=io.containerd.runc.v2\n" Jul 12 00:27:51.892034 env[1927]: time="2025-07-12T00:27:51.891902347Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:27:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5851 runtime=io.containerd.runc.v2\n" Jul 12 00:27:51.892578 env[1927]: 
time="2025-07-12T00:27:51.892530090Z" level=info msg="TearDown network for sandbox \"f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5\" successfully" Jul 12 00:27:51.892737 env[1927]: time="2025-07-12T00:27:51.892578799Z" level=info msg="StopPodSandbox for \"f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5\" returns successfully" Jul 12 00:27:51.893813 env[1927]: time="2025-07-12T00:27:51.893001410Z" level=info msg="StopContainer for \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\" returns successfully" Jul 12 00:27:51.896481 env[1927]: time="2025-07-12T00:27:51.896432103Z" level=info msg="StopPodSandbox for \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\"" Jul 12 00:27:51.897077 env[1927]: time="2025-07-12T00:27:51.897031214Z" level=info msg="Container to stop \"1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:27:51.897560 env[1927]: time="2025-07-12T00:27:51.897514690Z" level=info msg="Container to stop \"40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:27:51.897753 env[1927]: time="2025-07-12T00:27:51.897713630Z" level=info msg="Container to stop \"590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:27:51.898215 env[1927]: time="2025-07-12T00:27:51.898167286Z" level=info msg="Container to stop \"2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:27:51.898325 env[1927]: time="2025-07-12T00:27:51.898227551Z" level=info msg="Container to stop \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 
00:27:51.958001 env[1927]: time="2025-07-12T00:27:51.957926020Z" level=info msg="shim disconnected" id=26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5 Jul 12 00:27:51.958001 env[1927]: time="2025-07-12T00:27:51.957995945Z" level=warning msg="cleaning up after shim disconnected" id=26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5 namespace=k8s.io Jul 12 00:27:51.958324 env[1927]: time="2025-07-12T00:27:51.958018241Z" level=info msg="cleaning up dead shim" Jul 12 00:27:51.972731 env[1927]: time="2025-07-12T00:27:51.972639769Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:27:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5898 runtime=io.containerd.runc.v2\n" Jul 12 00:27:51.973267 env[1927]: time="2025-07-12T00:27:51.973202003Z" level=info msg="TearDown network for sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" successfully" Jul 12 00:27:51.973267 env[1927]: time="2025-07-12T00:27:51.973255248Z" level=info msg="StopPodSandbox for \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" returns successfully" Jul 12 00:27:52.000920 kubelet[2959]: I0712 00:27:52.000868 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9gh4\" (UniqueName: \"kubernetes.io/projected/f41feb0e-c3f7-4bba-b15c-be07edfd5efd-kube-api-access-r9gh4\") pod \"f41feb0e-c3f7-4bba-b15c-be07edfd5efd\" (UID: \"f41feb0e-c3f7-4bba-b15c-be07edfd5efd\") " Jul 12 00:27:52.001646 kubelet[2959]: I0712 00:27:52.001614 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f41feb0e-c3f7-4bba-b15c-be07edfd5efd-cilium-config-path\") pod \"f41feb0e-c3f7-4bba-b15c-be07edfd5efd\" (UID: \"f41feb0e-c3f7-4bba-b15c-be07edfd5efd\") " Jul 12 00:27:52.008772 kubelet[2959]: I0712 00:27:52.008701 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f41feb0e-c3f7-4bba-b15c-be07edfd5efd-kube-api-access-r9gh4" (OuterVolumeSpecName: "kube-api-access-r9gh4") pod "f41feb0e-c3f7-4bba-b15c-be07edfd5efd" (UID: "f41feb0e-c3f7-4bba-b15c-be07edfd5efd"). InnerVolumeSpecName "kube-api-access-r9gh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:27:52.016215 kubelet[2959]: I0712 00:27:52.016158 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f41feb0e-c3f7-4bba-b15c-be07edfd5efd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f41feb0e-c3f7-4bba-b15c-be07edfd5efd" (UID: "f41feb0e-c3f7-4bba-b15c-be07edfd5efd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:27:52.103119 kubelet[2959]: I0712 00:27:52.103076 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-xtables-lock\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.103378 kubelet[2959]: I0712 00:27:52.103350 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cilium-config-path\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.103526 kubelet[2959]: I0712 00:27:52.103500 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrq9l\" (UniqueName: \"kubernetes.io/projected/5d722c65-7198-46d5-95b8-72cf1cf6bceb-kube-api-access-zrq9l\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.103707 kubelet[2959]: I0712 00:27:52.103655 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-host-proc-sys-net\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.103908 kubelet[2959]: I0712 00:27:52.103882 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-host-proc-sys-kernel\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.104061 kubelet[2959]: I0712 00:27:52.104035 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d722c65-7198-46d5-95b8-72cf1cf6bceb-clustermesh-secrets\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.104200 kubelet[2959]: I0712 00:27:52.104176 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cni-path\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.104333 kubelet[2959]: I0712 00:27:52.104309 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cilium-cgroup\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.104476 kubelet[2959]: I0712 00:27:52.104452 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-lib-modules\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.104609 
kubelet[2959]: I0712 00:27:52.104586 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cilium-run\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.104770 kubelet[2959]: I0712 00:27:52.104745 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-hostproc\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.104917 kubelet[2959]: I0712 00:27:52.104890 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-etc-cni-netd\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.105053 kubelet[2959]: I0712 00:27:52.105029 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-bpf-maps\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.105202 kubelet[2959]: I0712 00:27:52.105178 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d722c65-7198-46d5-95b8-72cf1cf6bceb-hubble-tls\") pod \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\" (UID: \"5d722c65-7198-46d5-95b8-72cf1cf6bceb\") " Jul 12 00:27:52.105366 kubelet[2959]: I0712 00:27:52.105342 2959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9gh4\" (UniqueName: \"kubernetes.io/projected/f41feb0e-c3f7-4bba-b15c-be07edfd5efd-kube-api-access-r9gh4\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 
00:27:52.105485 kubelet[2959]: I0712 00:27:52.105463 2959 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f41feb0e-c3f7-4bba-b15c-be07edfd5efd-cilium-config-path\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.106472 kubelet[2959]: I0712 00:27:52.103201 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:52.106472 kubelet[2959]: I0712 00:27:52.106369 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:52.110475 kubelet[2959]: I0712 00:27:52.110408 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:52.111147 kubelet[2959]: I0712 00:27:52.111082 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:27:52.113217 kubelet[2959]: I0712 00:27:52.111492 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:52.113399 kubelet[2959]: I0712 00:27:52.111528 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cni-path" (OuterVolumeSpecName: "cni-path") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:52.113927 kubelet[2959]: I0712 00:27:52.111577 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:52.114593 kubelet[2959]: I0712 00:27:52.111605 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:52.114878 kubelet[2959]: I0712 00:27:52.111634 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:52.115021 kubelet[2959]: I0712 00:27:52.111702 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-hostproc" (OuterVolumeSpecName: "hostproc") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:52.115152 kubelet[2959]: I0712 00:27:52.111729 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:52.118639 kubelet[2959]: I0712 00:27:52.118583 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d722c65-7198-46d5-95b8-72cf1cf6bceb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:27:52.120154 kubelet[2959]: I0712 00:27:52.120026 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d722c65-7198-46d5-95b8-72cf1cf6bceb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:27:52.120642 kubelet[2959]: I0712 00:27:52.120600 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d722c65-7198-46d5-95b8-72cf1cf6bceb-kube-api-access-zrq9l" (OuterVolumeSpecName: "kube-api-access-zrq9l") pod "5d722c65-7198-46d5-95b8-72cf1cf6bceb" (UID: "5d722c65-7198-46d5-95b8-72cf1cf6bceb"). InnerVolumeSpecName "kube-api-access-zrq9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:27:52.206730 kubelet[2959]: I0712 00:27:52.206688 2959 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-bpf-maps\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.206978 kubelet[2959]: I0712 00:27:52.206956 2959 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d722c65-7198-46d5-95b8-72cf1cf6bceb-hubble-tls\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.207173 kubelet[2959]: I0712 00:27:52.207117 2959 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-xtables-lock\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.207314 kubelet[2959]: I0712 00:27:52.207289 2959 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cilium-config-path\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.207460 kubelet[2959]: I0712 00:27:52.207438 2959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrq9l\" (UniqueName: \"kubernetes.io/projected/5d722c65-7198-46d5-95b8-72cf1cf6bceb-kube-api-access-zrq9l\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.207593 kubelet[2959]: I0712 00:27:52.207573 2959 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-host-proc-sys-net\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.207769 kubelet[2959]: I0712 00:27:52.207746 2959 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-host-proc-sys-kernel\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.207950 kubelet[2959]: I0712 00:27:52.207899 2959 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cni-path\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.208071 kubelet[2959]: I0712 00:27:52.208050 2959 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d722c65-7198-46d5-95b8-72cf1cf6bceb-clustermesh-secrets\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.208214 kubelet[2959]: I0712 00:27:52.208193 2959 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-lib-modules\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.208442 kubelet[2959]: I0712 00:27:52.208421 2959 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cilium-run\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.208586 kubelet[2959]: I0712 00:27:52.208565 2959 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-hostproc\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.208740 kubelet[2959]: I0712 00:27:52.208720 2959 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-etc-cni-netd\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.208872 kubelet[2959]: I0712 00:27:52.208851 2959 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d722c65-7198-46d5-95b8-72cf1cf6bceb-cilium-cgroup\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:52.630054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f-rootfs.mount: Deactivated successfully. Jul 12 00:27:52.630329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5-rootfs.mount: Deactivated successfully. Jul 12 00:27:52.630561 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5-shm.mount: Deactivated successfully. Jul 12 00:27:52.630812 systemd[1]: var-lib-kubelet-pods-5d722c65\x2d7198\x2d46d5\x2d95b8\x2d72cf1cf6bceb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzrq9l.mount: Deactivated successfully. Jul 12 00:27:52.631047 systemd[1]: var-lib-kubelet-pods-5d722c65\x2d7198\x2d46d5\x2d95b8\x2d72cf1cf6bceb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 12 00:27:52.631281 systemd[1]: var-lib-kubelet-pods-5d722c65\x2d7198\x2d46d5\x2d95b8\x2d72cf1cf6bceb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:27:52.631507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5-rootfs.mount: Deactivated successfully. Jul 12 00:27:52.631780 systemd[1]: var-lib-kubelet-pods-f41feb0e\x2dc3f7\x2d4bba\x2db15c\x2dbe07edfd5efd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr9gh4.mount: Deactivated successfully. Jul 12 00:27:52.723844 kubelet[2959]: I0712 00:27:52.723788 2959 scope.go:117] "RemoveContainer" containerID="745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f" Jul 12 00:27:52.737337 env[1927]: time="2025-07-12T00:27:52.737119622Z" level=info msg="RemoveContainer for \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\"" Jul 12 00:27:52.745626 env[1927]: time="2025-07-12T00:27:52.745569847Z" level=info msg="RemoveContainer for \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\" returns successfully" Jul 12 00:27:52.746258 kubelet[2959]: I0712 00:27:52.746227 2959 scope.go:117] "RemoveContainer" containerID="2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83" Jul 12 00:27:52.751262 env[1927]: time="2025-07-12T00:27:52.751172050Z" level=info msg="RemoveContainer for \"2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83\"" Jul 12 00:27:52.760859 env[1927]: time="2025-07-12T00:27:52.760803565Z" level=info msg="RemoveContainer for \"2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83\" returns successfully" Jul 12 00:27:52.761529 kubelet[2959]: I0712 00:27:52.761479 2959 scope.go:117] "RemoveContainer" containerID="40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937" Jul 12 00:27:52.763505 env[1927]: time="2025-07-12T00:27:52.763454736Z" level=info msg="RemoveContainer for 
\"40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937\"" Jul 12 00:27:52.788713 env[1927]: time="2025-07-12T00:27:52.784716819Z" level=info msg="RemoveContainer for \"40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937\" returns successfully" Jul 12 00:27:52.788901 kubelet[2959]: I0712 00:27:52.785207 2959 scope.go:117] "RemoveContainer" containerID="590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd" Jul 12 00:27:52.821140 env[1927]: time="2025-07-12T00:27:52.821051698Z" level=info msg="RemoveContainer for \"590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd\"" Jul 12 00:27:52.827995 env[1927]: time="2025-07-12T00:27:52.827896223Z" level=info msg="RemoveContainer for \"590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd\" returns successfully" Jul 12 00:27:52.828626 kubelet[2959]: I0712 00:27:52.828593 2959 scope.go:117] "RemoveContainer" containerID="1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106" Jul 12 00:27:52.831316 env[1927]: time="2025-07-12T00:27:52.830880700Z" level=info msg="RemoveContainer for \"1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106\"" Jul 12 00:27:52.836834 env[1927]: time="2025-07-12T00:27:52.836781024Z" level=info msg="RemoveContainer for \"1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106\" returns successfully" Jul 12 00:27:52.837325 kubelet[2959]: I0712 00:27:52.837264 2959 scope.go:117] "RemoveContainer" containerID="745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f" Jul 12 00:27:52.837888 env[1927]: time="2025-07-12T00:27:52.837768857Z" level=error msg="ContainerStatus for \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\": not found" Jul 12 00:27:52.838197 kubelet[2959]: E0712 00:27:52.838134 2959 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\": not found" containerID="745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f" Jul 12 00:27:52.838316 kubelet[2959]: I0712 00:27:52.838213 2959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f"} err="failed to get container status \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\": rpc error: code = NotFound desc = an error occurred when try to find container \"745d635c2b19793e53fdf534100bd62c3aec1eb7b4a880f937bb5c4170bd631f\": not found" Jul 12 00:27:52.838316 kubelet[2959]: I0712 00:27:52.838280 2959 scope.go:117] "RemoveContainer" containerID="2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83" Jul 12 00:27:52.838760 env[1927]: time="2025-07-12T00:27:52.838635177Z" level=error msg="ContainerStatus for \"2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83\": not found" Jul 12 00:27:52.839007 kubelet[2959]: E0712 00:27:52.838973 2959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83\": not found" containerID="2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83" Jul 12 00:27:52.839190 kubelet[2959]: I0712 00:27:52.839152 2959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83"} err="failed to get container status 
\"2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b8035ba560443fe8ed4a519a11622f63b13519842a5992b8792acc9e4663b83\": not found" Jul 12 00:27:52.839319 kubelet[2959]: I0712 00:27:52.839296 2959 scope.go:117] "RemoveContainer" containerID="40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937" Jul 12 00:27:52.839812 env[1927]: time="2025-07-12T00:27:52.839733580Z" level=error msg="ContainerStatus for \"40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937\": not found" Jul 12 00:27:52.840245 kubelet[2959]: E0712 00:27:52.840179 2959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937\": not found" containerID="40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937" Jul 12 00:27:52.840342 kubelet[2959]: I0712 00:27:52.840255 2959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937"} err="failed to get container status \"40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937\": rpc error: code = NotFound desc = an error occurred when try to find container \"40f96b0197176ff04a6cc402dc90b92a76c5270931e80b175518fee34a999937\": not found" Jul 12 00:27:52.840342 kubelet[2959]: I0712 00:27:52.840296 2959 scope.go:117] "RemoveContainer" containerID="590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd" Jul 12 00:27:52.840907 env[1927]: time="2025-07-12T00:27:52.840795599Z" level=error msg="ContainerStatus for \"590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd\": not found" Jul 12 00:27:52.841178 kubelet[2959]: E0712 00:27:52.841144 2959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd\": not found" containerID="590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd" Jul 12 00:27:52.841346 kubelet[2959]: I0712 00:27:52.841311 2959 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd"} err="failed to get container status \"590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"590fa56dcb4d0a8734a67d899e640eb53cc722bd9ce32c7e7c8faf980b6b39bd\": not found" Jul 12 00:27:52.841470 kubelet[2959]: I0712 00:27:52.841448 2959 scope.go:117] "RemoveContainer" containerID="1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106" Jul 12 00:27:52.841999 env[1927]: time="2025-07-12T00:27:52.841918627Z" level=error msg="ContainerStatus for \"1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106\": not found" Jul 12 00:27:52.842632 kubelet[2959]: E0712 00:27:52.842566 2959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106\": not found" containerID="1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106" Jul 12 00:27:52.842811 kubelet[2959]: I0712 00:27:52.842645 2959 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106"} err="failed to get container status \"1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fae93d65416a32cdfecf220408af60fa382d7cbe918b8baa4cf1130ea6a3106\": not found" Jul 12 00:27:52.842811 kubelet[2959]: I0712 00:27:52.842709 2959 scope.go:117] "RemoveContainer" containerID="cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa" Jul 12 00:27:52.844912 env[1927]: time="2025-07-12T00:27:52.844833070Z" level=info msg="RemoveContainer for \"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\"" Jul 12 00:27:52.851191 env[1927]: time="2025-07-12T00:27:52.851113741Z" level=info msg="RemoveContainer for \"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\" returns successfully" Jul 12 00:27:52.851583 kubelet[2959]: I0712 00:27:52.851552 2959 scope.go:117] "RemoveContainer" containerID="cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa" Jul 12 00:27:52.852278 env[1927]: time="2025-07-12T00:27:52.852190544Z" level=error msg="ContainerStatus for \"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\": not found" Jul 12 00:27:52.852614 kubelet[2959]: E0712 00:27:52.852580 2959 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\": not found" containerID="cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa" Jul 12 00:27:52.852791 kubelet[2959]: I0712 00:27:52.852753 2959 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa"} err="failed to get container status \"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd8ffa47cbc8bbfdefe62eafb93e7d9ed5517edfd4cfdc74d6ef67976e2950fa\": not found" Jul 12 00:27:52.909336 kubelet[2959]: I0712 00:27:52.909293 2959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d722c65-7198-46d5-95b8-72cf1cf6bceb" path="/var/lib/kubelet/pods/5d722c65-7198-46d5-95b8-72cf1cf6bceb/volumes" Jul 12 00:27:52.911585 kubelet[2959]: I0712 00:27:52.911546 2959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f41feb0e-c3f7-4bba-b15c-be07edfd5efd" path="/var/lib/kubelet/pods/f41feb0e-c3f7-4bba-b15c-be07edfd5efd/volumes" Jul 12 00:27:53.544033 sshd[5746]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:53.549167 systemd[1]: sshd@23-172.31.23.9:22-147.75.109.163:59086.service: Deactivated successfully. Jul 12 00:27:53.550617 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 00:27:53.552808 systemd-logind[1914]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:27:53.554759 systemd-logind[1914]: Removed session 24. Jul 12 00:27:53.569007 systemd[1]: Started sshd@24-172.31.23.9:22-147.75.109.163:59092.service. Jul 12 00:27:53.744093 sshd[5917]: Accepted publickey for core from 147.75.109.163 port 59092 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:53.747314 sshd[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:53.754754 systemd-logind[1914]: New session 25 of user core. Jul 12 00:27:53.756048 systemd[1]: Started session-25.scope. 
Jul 12 00:27:54.180738 kubelet[2959]: E0712 00:27:54.180679 2959 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:27:55.368486 sshd[5917]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:55.375340 systemd-logind[1914]: Session 25 logged out. Waiting for processes to exit. Jul 12 00:27:55.375816 systemd[1]: sshd@24-172.31.23.9:22-147.75.109.163:59092.service: Deactivated successfully. Jul 12 00:27:55.378713 systemd[1]: session-25.scope: Deactivated successfully. Jul 12 00:27:55.379858 systemd-logind[1914]: Removed session 25. Jul 12 00:27:55.385178 kubelet[2959]: E0712 00:27:55.385118 2959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5d722c65-7198-46d5-95b8-72cf1cf6bceb" containerName="mount-cgroup" Jul 12 00:27:55.385178 kubelet[2959]: E0712 00:27:55.385170 2959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5d722c65-7198-46d5-95b8-72cf1cf6bceb" containerName="apply-sysctl-overwrites" Jul 12 00:27:55.385178 kubelet[2959]: E0712 00:27:55.385189 2959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5d722c65-7198-46d5-95b8-72cf1cf6bceb" containerName="mount-bpf-fs" Jul 12 00:27:55.385949 kubelet[2959]: E0712 00:27:55.385204 2959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f41feb0e-c3f7-4bba-b15c-be07edfd5efd" containerName="cilium-operator" Jul 12 00:27:55.385949 kubelet[2959]: E0712 00:27:55.385219 2959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5d722c65-7198-46d5-95b8-72cf1cf6bceb" containerName="clean-cilium-state" Jul 12 00:27:55.385949 kubelet[2959]: E0712 00:27:55.385234 2959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5d722c65-7198-46d5-95b8-72cf1cf6bceb" containerName="cilium-agent" Jul 12 00:27:55.385949 kubelet[2959]: I0712 00:27:55.385307 2959 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5d722c65-7198-46d5-95b8-72cf1cf6bceb" containerName="cilium-agent" Jul 12 00:27:55.385949 kubelet[2959]: I0712 00:27:55.385324 2959 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41feb0e-c3f7-4bba-b15c-be07edfd5efd" containerName="cilium-operator" Jul 12 00:27:55.402700 systemd[1]: Started sshd@25-172.31.23.9:22-147.75.109.163:59102.service. Jul 12 00:27:55.444984 kubelet[2959]: I0712 00:27:55.444900 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2c309846-9388-41d2-abeb-15d473ba2525-cilium-ipsec-secrets\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.445276 kubelet[2959]: I0712 00:27:55.444983 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c309846-9388-41d2-abeb-15d473ba2525-cilium-config-path\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.445276 kubelet[2959]: I0712 00:27:55.445031 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-cilium-run\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.445276 kubelet[2959]: I0712 00:27:55.445071 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-lib-modules\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.445276 kubelet[2959]: I0712 00:27:55.445105 2959 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-xtables-lock\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.445276 kubelet[2959]: I0712 00:27:55.445140 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-host-proc-sys-kernel\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.445276 kubelet[2959]: I0712 00:27:55.445184 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-hostproc\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.445917 kubelet[2959]: I0712 00:27:55.445219 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-etc-cni-netd\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.445917 kubelet[2959]: I0712 00:27:55.445256 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c309846-9388-41d2-abeb-15d473ba2525-hubble-tls\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.445917 kubelet[2959]: I0712 00:27:55.445290 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-bpf-maps\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.445917 kubelet[2959]: I0712 00:27:55.445328 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c309846-9388-41d2-abeb-15d473ba2525-clustermesh-secrets\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.445917 kubelet[2959]: I0712 00:27:55.445365 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-cilium-cgroup\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.445917 kubelet[2959]: I0712 00:27:55.445403 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdzlr\" (UniqueName: \"kubernetes.io/projected/2c309846-9388-41d2-abeb-15d473ba2525-kube-api-access-wdzlr\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.446283 kubelet[2959]: I0712 00:27:55.445438 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-cni-path\") pod \"cilium-pk4z8\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.446283 kubelet[2959]: I0712 00:27:55.445476 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-host-proc-sys-net\") pod \"cilium-pk4z8\" (UID: 
\"2c309846-9388-41d2-abeb-15d473ba2525\") " pod="kube-system/cilium-pk4z8" Jul 12 00:27:55.628170 sshd[5928]: Accepted publickey for core from 147.75.109.163 port 59102 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:55.632029 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:55.675842 systemd[1]: Started session-26.scope. Jul 12 00:27:55.676813 systemd-logind[1914]: New session 26 of user core. Jul 12 00:27:55.723637 env[1927]: time="2025-07-12T00:27:55.723544278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pk4z8,Uid:2c309846-9388-41d2-abeb-15d473ba2525,Namespace:kube-system,Attempt:0,}" Jul 12 00:27:55.759942 env[1927]: time="2025-07-12T00:27:55.759413059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:55.759942 env[1927]: time="2025-07-12T00:27:55.759510273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:55.759942 env[1927]: time="2025-07-12T00:27:55.759536698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:55.760261 env[1927]: time="2025-07-12T00:27:55.760045074Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23 pid=5944 runtime=io.containerd.runc.v2 Jul 12 00:27:55.854770 env[1927]: time="2025-07-12T00:27:55.854632078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pk4z8,Uid:2c309846-9388-41d2-abeb-15d473ba2525,Namespace:kube-system,Attempt:0,} returns sandbox id \"eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23\"" Jul 12 00:27:55.865794 env[1927]: time="2025-07-12T00:27:55.865722420Z" level=info msg="CreateContainer within sandbox \"eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:27:55.896601 env[1927]: time="2025-07-12T00:27:55.896536856Z" level=info msg="CreateContainer within sandbox \"eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"50b59796af8c9009a393493ec001a723070087e2383e9580b8f9eeca3e317539\"" Jul 12 00:27:55.901788 env[1927]: time="2025-07-12T00:27:55.901733079Z" level=info msg="StartContainer for \"50b59796af8c9009a393493ec001a723070087e2383e9580b8f9eeca3e317539\"" Jul 12 00:27:56.013732 sshd[5928]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:56.021787 systemd[1]: sshd@25-172.31.23.9:22-147.75.109.163:59102.service: Deactivated successfully. Jul 12 00:27:56.023234 systemd[1]: session-26.scope: Deactivated successfully. Jul 12 00:27:56.031568 systemd-logind[1914]: Session 26 logged out. Waiting for processes to exit. Jul 12 00:27:56.041717 systemd-logind[1914]: Removed session 26. Jul 12 00:27:56.046641 systemd[1]: Started sshd@26-172.31.23.9:22-147.75.109.163:51752.service. 
Jul 12 00:27:56.086309 env[1927]: time="2025-07-12T00:27:56.086193842Z" level=info msg="StartContainer for \"50b59796af8c9009a393493ec001a723070087e2383e9580b8f9eeca3e317539\" returns successfully" Jul 12 00:27:56.167064 env[1927]: time="2025-07-12T00:27:56.166886217Z" level=info msg="shim disconnected" id=50b59796af8c9009a393493ec001a723070087e2383e9580b8f9eeca3e317539 Jul 12 00:27:56.167064 env[1927]: time="2025-07-12T00:27:56.166956227Z" level=warning msg="cleaning up after shim disconnected" id=50b59796af8c9009a393493ec001a723070087e2383e9580b8f9eeca3e317539 namespace=k8s.io Jul 12 00:27:56.167064 env[1927]: time="2025-07-12T00:27:56.166978523Z" level=info msg="cleaning up dead shim" Jul 12 00:27:56.181206 env[1927]: time="2025-07-12T00:27:56.181143341Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:27:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6037 runtime=io.containerd.runc.v2\n" Jul 12 00:27:56.246110 sshd[6011]: Accepted publickey for core from 147.75.109.163 port 51752 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:56.249353 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:56.258475 systemd[1]: Started session-27.scope. Jul 12 00:27:56.259018 systemd-logind[1914]: New session 27 of user core. Jul 12 00:27:56.743833 env[1927]: time="2025-07-12T00:27:56.743722373Z" level=info msg="StopPodSandbox for \"eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23\"" Jul 12 00:27:56.746115 env[1927]: time="2025-07-12T00:27:56.745858458Z" level=info msg="Container to stop \"50b59796af8c9009a393493ec001a723070087e2383e9580b8f9eeca3e317539\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:27:56.753467 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23-shm.mount: Deactivated successfully. 
Jul 12 00:27:56.827418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23-rootfs.mount: Deactivated successfully. Jul 12 00:27:56.841738 env[1927]: time="2025-07-12T00:27:56.841634303Z" level=info msg="shim disconnected" id=eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23 Jul 12 00:27:56.842339 env[1927]: time="2025-07-12T00:27:56.842303159Z" level=warning msg="cleaning up after shim disconnected" id=eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23 namespace=k8s.io Jul 12 00:27:56.842496 env[1927]: time="2025-07-12T00:27:56.842468522Z" level=info msg="cleaning up dead shim" Jul 12 00:27:56.857988 env[1927]: time="2025-07-12T00:27:56.857928102Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:27:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6080 runtime=io.containerd.runc.v2\n" Jul 12 00:27:56.858776 env[1927]: time="2025-07-12T00:27:56.858733928Z" level=info msg="TearDown network for sandbox \"eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23\" successfully" Jul 12 00:27:56.858953 env[1927]: time="2025-07-12T00:27:56.858920399Z" level=info msg="StopPodSandbox for \"eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23\" returns successfully" Jul 12 00:27:56.973116 kubelet[2959]: I0712 00:27:56.972997 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2c309846-9388-41d2-abeb-15d473ba2525-cilium-ipsec-secrets\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.973116 kubelet[2959]: I0712 00:27:56.973097 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-cilium-cgroup\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: 
\"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.973879 kubelet[2959]: I0712 00:27:56.973138 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-cni-path\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.973879 kubelet[2959]: I0712 00:27:56.973201 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-hostproc\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.973879 kubelet[2959]: I0712 00:27:56.973261 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-etc-cni-netd\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.973879 kubelet[2959]: I0712 00:27:56.973304 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-host-proc-sys-kernel\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.973879 kubelet[2959]: I0712 00:27:56.973366 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-cilium-run\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.973879 kubelet[2959]: I0712 00:27:56.973407 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-xtables-lock\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.974361 kubelet[2959]: I0712 00:27:56.973470 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-host-proc-sys-net\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.974361 kubelet[2959]: I0712 00:27:56.973535 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c309846-9388-41d2-abeb-15d473ba2525-cilium-config-path\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.974361 kubelet[2959]: I0712 00:27:56.973577 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-lib-modules\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.974361 kubelet[2959]: I0712 00:27:56.973649 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c309846-9388-41d2-abeb-15d473ba2525-hubble-tls\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.974361 kubelet[2959]: I0712 00:27:56.973732 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-bpf-maps\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.974361 kubelet[2959]: I0712 00:27:56.973798 2959 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c309846-9388-41d2-abeb-15d473ba2525-clustermesh-secrets\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.974735 kubelet[2959]: I0712 00:27:56.973865 2959 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdzlr\" (UniqueName: \"kubernetes.io/projected/2c309846-9388-41d2-abeb-15d473ba2525-kube-api-access-wdzlr\") pod \"2c309846-9388-41d2-abeb-15d473ba2525\" (UID: \"2c309846-9388-41d2-abeb-15d473ba2525\") " Jul 12 00:27:56.974920 kubelet[2959]: I0712 00:27:56.974872 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:56.975070 kubelet[2959]: I0712 00:27:56.975010 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:56.975070 kubelet[2959]: I0712 00:27:56.974980 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:56.976927 kubelet[2959]: I0712 00:27:56.976881 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:56.977266 kubelet[2959]: I0712 00:27:56.977134 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-cni-path" (OuterVolumeSpecName: "cni-path") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:56.985194 systemd[1]: var-lib-kubelet-pods-2c309846\x2d9388\x2d41d2\x2dabeb\x2d15d473ba2525-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 12 00:27:56.988763 kubelet[2959]: I0712 00:27:56.977166 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-hostproc" (OuterVolumeSpecName: "hostproc") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:56.988963 kubelet[2959]: I0712 00:27:56.977191 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:56.989093 kubelet[2959]: I0712 00:27:56.977215 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:56.989201 kubelet[2959]: I0712 00:27:56.980488 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:56.989308 kubelet[2959]: I0712 00:27:56.988624 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:27:56.990923 kubelet[2959]: I0712 00:27:56.990877 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c309846-9388-41d2-abeb-15d473ba2525-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:27:56.991216 kubelet[2959]: I0712 00:27:56.991186 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c309846-9388-41d2-abeb-15d473ba2525-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:27:56.995500 systemd[1]: var-lib-kubelet-pods-2c309846\x2d9388\x2d41d2\x2dabeb\x2d15d473ba2525-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwdzlr.mount: Deactivated successfully. Jul 12 00:27:56.999018 kubelet[2959]: I0712 00:27:56.998911 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c309846-9388-41d2-abeb-15d473ba2525-kube-api-access-wdzlr" (OuterVolumeSpecName: "kube-api-access-wdzlr") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "kube-api-access-wdzlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:27:57.002708 kubelet[2959]: I0712 00:27:57.002631 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c309846-9388-41d2-abeb-15d473ba2525-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:27:57.005032 kubelet[2959]: I0712 00:27:57.004979 2959 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c309846-9388-41d2-abeb-15d473ba2525-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2c309846-9388-41d2-abeb-15d473ba2525" (UID: "2c309846-9388-41d2-abeb-15d473ba2525"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:27:57.074149 kubelet[2959]: I0712 00:27:57.074093 2959 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-host-proc-sys-net\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.074149 kubelet[2959]: I0712 00:27:57.074148 2959 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c309846-9388-41d2-abeb-15d473ba2525-cilium-config-path\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.074383 kubelet[2959]: I0712 00:27:57.074174 2959 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-lib-modules\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.074383 kubelet[2959]: I0712 00:27:57.074197 2959 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c309846-9388-41d2-abeb-15d473ba2525-hubble-tls\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.074383 kubelet[2959]: I0712 00:27:57.074221 2959 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdzlr\" (UniqueName: \"kubernetes.io/projected/2c309846-9388-41d2-abeb-15d473ba2525-kube-api-access-wdzlr\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.074383 kubelet[2959]: I0712 00:27:57.074246 2959 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-bpf-maps\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.074383 kubelet[2959]: I0712 00:27:57.074270 2959 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c309846-9388-41d2-abeb-15d473ba2525-clustermesh-secrets\") on node \"ip-172-31-23-9\" DevicePath \"\"" 
Jul 12 00:27:57.074383 kubelet[2959]: I0712 00:27:57.074292 2959 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2c309846-9388-41d2-abeb-15d473ba2525-cilium-ipsec-secrets\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.074383 kubelet[2959]: I0712 00:27:57.074312 2959 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-cilium-cgroup\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.074383 kubelet[2959]: I0712 00:27:57.074333 2959 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-cni-path\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.074907 kubelet[2959]: I0712 00:27:57.074353 2959 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-etc-cni-netd\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.074907 kubelet[2959]: I0712 00:27:57.074373 2959 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-hostproc\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.074907 kubelet[2959]: I0712 00:27:57.074392 2959 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-host-proc-sys-kernel\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.074907 kubelet[2959]: I0712 00:27:57.074413 2959 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-cilium-run\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.074907 kubelet[2959]: I0712 00:27:57.074433 2959 
reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c309846-9388-41d2-abeb-15d473ba2525-xtables-lock\") on node \"ip-172-31-23-9\" DevicePath \"\"" Jul 12 00:27:57.561331 systemd[1]: var-lib-kubelet-pods-2c309846\x2d9388\x2d41d2\x2dabeb\x2d15d473ba2525-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:27:57.561913 systemd[1]: var-lib-kubelet-pods-2c309846\x2d9388\x2d41d2\x2dabeb\x2d15d473ba2525-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:27:57.743416 kubelet[2959]: I0712 00:27:57.743378 2959 scope.go:117] "RemoveContainer" containerID="50b59796af8c9009a393493ec001a723070087e2383e9580b8f9eeca3e317539" Jul 12 00:27:57.745599 env[1927]: time="2025-07-12T00:27:57.745545986Z" level=info msg="RemoveContainer for \"50b59796af8c9009a393493ec001a723070087e2383e9580b8f9eeca3e317539\"" Jul 12 00:27:57.754059 env[1927]: time="2025-07-12T00:27:57.753984268Z" level=info msg="RemoveContainer for \"50b59796af8c9009a393493ec001a723070087e2383e9580b8f9eeca3e317539\" returns successfully" Jul 12 00:27:57.829756 kubelet[2959]: E0712 00:27:57.829528 2959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c309846-9388-41d2-abeb-15d473ba2525" containerName="mount-cgroup" Jul 12 00:27:57.829756 kubelet[2959]: I0712 00:27:57.829706 2959 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c309846-9388-41d2-abeb-15d473ba2525" containerName="mount-cgroup" Jul 12 00:27:57.865786 kubelet[2959]: W0712 00:27:57.863557 2959 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-23-9" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-9' and this object Jul 12 00:27:57.865786 kubelet[2959]: E0712 00:27:57.863648 2959 reflector.go:158] 
"Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-23-9\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-9' and this object" logger="UnhandledError" Jul 12 00:27:57.865786 kubelet[2959]: W0712 00:27:57.863787 2959 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-23-9" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-9' and this object Jul 12 00:27:57.865786 kubelet[2959]: E0712 00:27:57.863820 2959 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-23-9\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-9' and this object" logger="UnhandledError" Jul 12 00:27:57.879219 kubelet[2959]: I0712 00:27:57.879162 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d53558e-c24e-4305-8af2-c8127c6cd3ad-cilium-run\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.879481 kubelet[2959]: I0712 00:27:57.879451 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d53558e-c24e-4305-8af2-c8127c6cd3ad-cilium-cgroup\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.879627 
kubelet[2959]: I0712 00:27:57.879601 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d53558e-c24e-4305-8af2-c8127c6cd3ad-cni-path\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.879835 kubelet[2959]: I0712 00:27:57.879806 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6d53558e-c24e-4305-8af2-c8127c6cd3ad-cilium-ipsec-secrets\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.880000 kubelet[2959]: I0712 00:27:57.879974 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d53558e-c24e-4305-8af2-c8127c6cd3ad-host-proc-sys-kernel\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.880146 kubelet[2959]: I0712 00:27:57.880121 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d53558e-c24e-4305-8af2-c8127c6cd3ad-etc-cni-netd\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.880286 kubelet[2959]: I0712 00:27:57.880261 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d53558e-c24e-4305-8af2-c8127c6cd3ad-clustermesh-secrets\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.880417 kubelet[2959]: I0712 00:27:57.880392 2959 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d53558e-c24e-4305-8af2-c8127c6cd3ad-hubble-tls\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.880560 kubelet[2959]: I0712 00:27:57.880534 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4d9x\" (UniqueName: \"kubernetes.io/projected/6d53558e-c24e-4305-8af2-c8127c6cd3ad-kube-api-access-w4d9x\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.880749 kubelet[2959]: I0712 00:27:57.880715 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d53558e-c24e-4305-8af2-c8127c6cd3ad-host-proc-sys-net\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.880908 kubelet[2959]: I0712 00:27:57.880881 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d53558e-c24e-4305-8af2-c8127c6cd3ad-lib-modules\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.881058 kubelet[2959]: I0712 00:27:57.881033 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d53558e-c24e-4305-8af2-c8127c6cd3ad-cilium-config-path\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.881200 kubelet[2959]: I0712 00:27:57.881174 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/6d53558e-c24e-4305-8af2-c8127c6cd3ad-bpf-maps\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.881330 kubelet[2959]: I0712 00:27:57.881305 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d53558e-c24e-4305-8af2-c8127c6cd3ad-hostproc\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:57.881480 kubelet[2959]: I0712 00:27:57.881454 2959 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d53558e-c24e-4305-8af2-c8127c6cd3ad-xtables-lock\") pod \"cilium-bs4vj\" (UID: \"6d53558e-c24e-4305-8af2-c8127c6cd3ad\") " pod="kube-system/cilium-bs4vj" Jul 12 00:27:58.908030 kubelet[2959]: I0712 00:27:58.907959 2959 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c309846-9388-41d2-abeb-15d473ba2525" path="/var/lib/kubelet/pods/2c309846-9388-41d2-abeb-15d473ba2525/volumes" Jul 12 00:27:58.984346 kubelet[2959]: E0712 00:27:58.984282 2959 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jul 12 00:27:58.984501 kubelet[2959]: E0712 00:27:58.984402 2959 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d53558e-c24e-4305-8af2-c8127c6cd3ad-cilium-ipsec-secrets podName:6d53558e-c24e-4305-8af2-c8127c6cd3ad nodeName:}" failed. No retries permitted until 2025-07-12 00:27:59.484370682 +0000 UTC m=+170.832904062 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/6d53558e-c24e-4305-8af2-c8127c6cd3ad-cilium-ipsec-secrets") pod "cilium-bs4vj" (UID: "6d53558e-c24e-4305-8af2-c8127c6cd3ad") : failed to sync secret cache: timed out waiting for the condition Jul 12 00:27:59.181941 kubelet[2959]: E0712 00:27:59.181798 2959 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:27:59.669583 env[1927]: time="2025-07-12T00:27:59.669494451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bs4vj,Uid:6d53558e-c24e-4305-8af2-c8127c6cd3ad,Namespace:kube-system,Attempt:0,}" Jul 12 00:27:59.705489 env[1927]: time="2025-07-12T00:27:59.705150697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:27:59.705489 env[1927]: time="2025-07-12T00:27:59.705219135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:27:59.705489 env[1927]: time="2025-07-12T00:27:59.705254439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:27:59.706811 env[1927]: time="2025-07-12T00:27:59.706619655Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e705648b8adbdf375aea0b5579d12224a36f9aa56d9ea01e217f46abcb42810 pid=6109 runtime=io.containerd.runc.v2 Jul 12 00:27:59.802395 env[1927]: time="2025-07-12T00:27:59.802335501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bs4vj,Uid:6d53558e-c24e-4305-8af2-c8127c6cd3ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e705648b8adbdf375aea0b5579d12224a36f9aa56d9ea01e217f46abcb42810\"" Jul 12 00:27:59.809534 env[1927]: time="2025-07-12T00:27:59.809447531Z" level=info msg="CreateContainer within sandbox \"0e705648b8adbdf375aea0b5579d12224a36f9aa56d9ea01e217f46abcb42810\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:27:59.833025 env[1927]: time="2025-07-12T00:27:59.832937525Z" level=info msg="CreateContainer within sandbox \"0e705648b8adbdf375aea0b5579d12224a36f9aa56d9ea01e217f46abcb42810\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9d4301c9add404ba99f88f8a9d78771ebae1e94ea32962130a63cfc8162e75e3\"" Jul 12 00:27:59.835430 env[1927]: time="2025-07-12T00:27:59.834370674Z" level=info msg="StartContainer for \"9d4301c9add404ba99f88f8a9d78771ebae1e94ea32962130a63cfc8162e75e3\"" Jul 12 00:27:59.852747 systemd[1]: run-containerd-runc-k8s.io-0e705648b8adbdf375aea0b5579d12224a36f9aa56d9ea01e217f46abcb42810-runc.KrQR4K.mount: Deactivated successfully. Jul 12 00:27:59.953119 env[1927]: time="2025-07-12T00:27:59.952366893Z" level=info msg="StartContainer for \"9d4301c9add404ba99f88f8a9d78771ebae1e94ea32962130a63cfc8162e75e3\" returns successfully" Jul 12 00:28:00.004585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d4301c9add404ba99f88f8a9d78771ebae1e94ea32962130a63cfc8162e75e3-rootfs.mount: Deactivated successfully. 
Jul 12 00:28:00.023340 env[1927]: time="2025-07-12T00:28:00.023272962Z" level=info msg="shim disconnected" id=9d4301c9add404ba99f88f8a9d78771ebae1e94ea32962130a63cfc8162e75e3 Jul 12 00:28:00.023619 env[1927]: time="2025-07-12T00:28:00.023341435Z" level=warning msg="cleaning up after shim disconnected" id=9d4301c9add404ba99f88f8a9d78771ebae1e94ea32962130a63cfc8162e75e3 namespace=k8s.io Jul 12 00:28:00.023619 env[1927]: time="2025-07-12T00:28:00.023364079Z" level=info msg="cleaning up dead shim" Jul 12 00:28:00.039193 env[1927]: time="2025-07-12T00:28:00.039113352Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6190 runtime=io.containerd.runc.v2\n" Jul 12 00:28:00.765610 env[1927]: time="2025-07-12T00:28:00.765543540Z" level=info msg="CreateContainer within sandbox \"0e705648b8adbdf375aea0b5579d12224a36f9aa56d9ea01e217f46abcb42810\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:28:00.794284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount180483758.mount: Deactivated successfully. 
Jul 12 00:28:00.809003 env[1927]: time="2025-07-12T00:28:00.808902775Z" level=info msg="CreateContainer within sandbox \"0e705648b8adbdf375aea0b5579d12224a36f9aa56d9ea01e217f46abcb42810\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a4547d7f41ee16db20e7dd2972e4cf258a03b4ee109414500d756a82bd510c4c\"" Jul 12 00:28:00.812239 env[1927]: time="2025-07-12T00:28:00.812186895Z" level=info msg="StartContainer for \"a4547d7f41ee16db20e7dd2972e4cf258a03b4ee109414500d756a82bd510c4c\"" Jul 12 00:28:00.925905 env[1927]: time="2025-07-12T00:28:00.925823348Z" level=info msg="StartContainer for \"a4547d7f41ee16db20e7dd2972e4cf258a03b4ee109414500d756a82bd510c4c\" returns successfully" Jul 12 00:28:00.976921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4547d7f41ee16db20e7dd2972e4cf258a03b4ee109414500d756a82bd510c4c-rootfs.mount: Deactivated successfully. Jul 12 00:28:00.984407 env[1927]: time="2025-07-12T00:28:00.984323386Z" level=info msg="shim disconnected" id=a4547d7f41ee16db20e7dd2972e4cf258a03b4ee109414500d756a82bd510c4c Jul 12 00:28:00.984726 env[1927]: time="2025-07-12T00:28:00.984405323Z" level=warning msg="cleaning up after shim disconnected" id=a4547d7f41ee16db20e7dd2972e4cf258a03b4ee109414500d756a82bd510c4c namespace=k8s.io Jul 12 00:28:00.984726 env[1927]: time="2025-07-12T00:28:00.984428663Z" level=info msg="cleaning up dead shim" Jul 12 00:28:00.999500 env[1927]: time="2025-07-12T00:28:00.999426699Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6252 runtime=io.containerd.runc.v2\n" Jul 12 00:28:01.766719 env[1927]: time="2025-07-12T00:28:01.764225137Z" level=info msg="CreateContainer within sandbox \"0e705648b8adbdf375aea0b5579d12224a36f9aa56d9ea01e217f46abcb42810\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:28:01.802448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2484563332.mount: 
Deactivated successfully. Jul 12 00:28:01.816362 env[1927]: time="2025-07-12T00:28:01.816298162Z" level=info msg="CreateContainer within sandbox \"0e705648b8adbdf375aea0b5579d12224a36f9aa56d9ea01e217f46abcb42810\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"74b6b1ff0433fc4e06a55d1b11056a86ee2f236c5799eb7d5325b161b4983623\"" Jul 12 00:28:01.817748 env[1927]: time="2025-07-12T00:28:01.817655145Z" level=info msg="StartContainer for \"74b6b1ff0433fc4e06a55d1b11056a86ee2f236c5799eb7d5325b161b4983623\"" Jul 12 00:28:01.866592 systemd[1]: run-containerd-runc-k8s.io-74b6b1ff0433fc4e06a55d1b11056a86ee2f236c5799eb7d5325b161b4983623-runc.uvbtfc.mount: Deactivated successfully. Jul 12 00:28:01.961349 env[1927]: time="2025-07-12T00:28:01.961282763Z" level=info msg="StartContainer for \"74b6b1ff0433fc4e06a55d1b11056a86ee2f236c5799eb7d5325b161b4983623\" returns successfully" Jul 12 00:28:01.993141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74b6b1ff0433fc4e06a55d1b11056a86ee2f236c5799eb7d5325b161b4983623-rootfs.mount: Deactivated successfully. 
Jul 12 00:28:02.006929 env[1927]: time="2025-07-12T00:28:02.006827940Z" level=info msg="shim disconnected" id=74b6b1ff0433fc4e06a55d1b11056a86ee2f236c5799eb7d5325b161b4983623 Jul 12 00:28:02.007266 env[1927]: time="2025-07-12T00:28:02.007233511Z" level=warning msg="cleaning up after shim disconnected" id=74b6b1ff0433fc4e06a55d1b11056a86ee2f236c5799eb7d5325b161b4983623 namespace=k8s.io Jul 12 00:28:02.007382 env[1927]: time="2025-07-12T00:28:02.007355541Z" level=info msg="cleaning up dead shim" Jul 12 00:28:02.021729 env[1927]: time="2025-07-12T00:28:02.021476876Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6313 runtime=io.containerd.runc.v2\n" Jul 12 00:28:02.775142 env[1927]: time="2025-07-12T00:28:02.775082003Z" level=info msg="CreateContainer within sandbox \"0e705648b8adbdf375aea0b5579d12224a36f9aa56d9ea01e217f46abcb42810\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:28:02.804728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824833314.mount: Deactivated successfully. 
Jul 12 00:28:02.829760 env[1927]: time="2025-07-12T00:28:02.829639689Z" level=info msg="CreateContainer within sandbox \"0e705648b8adbdf375aea0b5579d12224a36f9aa56d9ea01e217f46abcb42810\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"21c91baafd1daff36058c5d7f610554db833741e84283a764b1f763a8f2feb81\"" Jul 12 00:28:02.836970 env[1927]: time="2025-07-12T00:28:02.836915220Z" level=info msg="StartContainer for \"21c91baafd1daff36058c5d7f610554db833741e84283a764b1f763a8f2feb81\"" Jul 12 00:28:02.968167 env[1927]: time="2025-07-12T00:28:02.968104091Z" level=info msg="StartContainer for \"21c91baafd1daff36058c5d7f610554db833741e84283a764b1f763a8f2feb81\" returns successfully" Jul 12 00:28:03.000818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21c91baafd1daff36058c5d7f610554db833741e84283a764b1f763a8f2feb81-rootfs.mount: Deactivated successfully. Jul 12 00:28:03.003887 env[1927]: time="2025-07-12T00:28:03.003812615Z" level=info msg="shim disconnected" id=21c91baafd1daff36058c5d7f610554db833741e84283a764b1f763a8f2feb81 Jul 12 00:28:03.003887 env[1927]: time="2025-07-12T00:28:03.003880524Z" level=warning msg="cleaning up after shim disconnected" id=21c91baafd1daff36058c5d7f610554db833741e84283a764b1f763a8f2feb81 namespace=k8s.io Jul 12 00:28:03.004147 env[1927]: time="2025-07-12T00:28:03.003902641Z" level=info msg="cleaning up dead shim" Jul 12 00:28:03.017312 env[1927]: time="2025-07-12T00:28:03.017239401Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6371 runtime=io.containerd.runc.v2\n" Jul 12 00:28:03.507142 kubelet[2959]: I0712 00:28:03.507064 2959 setters.go:600] "Node became not ready" node="ip-172-31-23-9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:28:03Z","lastTransitionTime":"2025-07-12T00:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 12 00:28:03.783487 env[1927]: time="2025-07-12T00:28:03.783266549Z" level=info msg="CreateContainer within sandbox \"0e705648b8adbdf375aea0b5579d12224a36f9aa56d9ea01e217f46abcb42810\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:28:03.822512 env[1927]: time="2025-07-12T00:28:03.822409589Z" level=info msg="CreateContainer within sandbox \"0e705648b8adbdf375aea0b5579d12224a36f9aa56d9ea01e217f46abcb42810\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e308c4d00003083e592b571d262302e2345c3028a133fe1736ec9ad80192e913\"" Jul 12 00:28:03.825284 env[1927]: time="2025-07-12T00:28:03.824070585Z" level=info msg="StartContainer for \"e308c4d00003083e592b571d262302e2345c3028a133fe1736ec9ad80192e913\"" Jul 12 00:28:03.862381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2614982925.mount: Deactivated successfully. Jul 12 00:28:03.957945 env[1927]: time="2025-07-12T00:28:03.957837463Z" level=info msg="StartContainer for \"e308c4d00003083e592b571d262302e2345c3028a133fe1736ec9ad80192e913\" returns successfully" Jul 12 00:28:04.036539 systemd[1]: run-containerd-runc-k8s.io-e308c4d00003083e592b571d262302e2345c3028a133fe1736ec9ad80192e913-runc.jR1rMM.mount: Deactivated successfully. Jul 12 00:28:04.759732 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 12 00:28:07.027000 systemd[1]: run-containerd-runc-k8s.io-e308c4d00003083e592b571d262302e2345c3028a133fe1736ec9ad80192e913-runc.tPH1aA.mount: Deactivated successfully. Jul 12 00:28:08.893811 systemd-networkd[1596]: lxc_health: Link UP Jul 12 00:28:08.903159 (udev-worker)[6913]: Network interface NamePolicy= disabled on kernel command line. 
Jul 12 00:28:08.952577 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 12 00:28:08.949405 systemd-networkd[1596]: lxc_health: Gained carrier Jul 12 00:28:09.110508 env[1927]: time="2025-07-12T00:28:09.110168485Z" level=info msg="StopPodSandbox for \"f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5\"" Jul 12 00:28:09.110508 env[1927]: time="2025-07-12T00:28:09.110326659Z" level=info msg="TearDown network for sandbox \"f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5\" successfully" Jul 12 00:28:09.110508 env[1927]: time="2025-07-12T00:28:09.110386276Z" level=info msg="StopPodSandbox for \"f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5\" returns successfully" Jul 12 00:28:09.112712 env[1927]: time="2025-07-12T00:28:09.111847468Z" level=info msg="RemovePodSandbox for \"f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5\"" Jul 12 00:28:09.112712 env[1927]: time="2025-07-12T00:28:09.111912605Z" level=info msg="Forcibly stopping sandbox \"f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5\"" Jul 12 00:28:09.112712 env[1927]: time="2025-07-12T00:28:09.112042531Z" level=info msg="TearDown network for sandbox \"f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5\" successfully" Jul 12 00:28:09.132060 env[1927]: time="2025-07-12T00:28:09.131940348Z" level=info msg="RemovePodSandbox \"f202438fc0f2cc4965ea0cf61df50160b8c470fc3786f4429d8df25ab5c04ad5\" returns successfully" Jul 12 00:28:09.136181 env[1927]: time="2025-07-12T00:28:09.136008451Z" level=info msg="StopPodSandbox for \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\"" Jul 12 00:28:09.136373 env[1927]: time="2025-07-12T00:28:09.136217231Z" level=info msg="TearDown network for sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" successfully" Jul 12 00:28:09.136373 env[1927]: time="2025-07-12T00:28:09.136305876Z" level=info msg="StopPodSandbox for 
\"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" returns successfully" Jul 12 00:28:09.159623 env[1927]: time="2025-07-12T00:28:09.148886488Z" level=info msg="RemovePodSandbox for \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\"" Jul 12 00:28:09.159623 env[1927]: time="2025-07-12T00:28:09.148974269Z" level=info msg="Forcibly stopping sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\"" Jul 12 00:28:09.159623 env[1927]: time="2025-07-12T00:28:09.149160212Z" level=info msg="TearDown network for sandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" successfully" Jul 12 00:28:09.173150 env[1927]: time="2025-07-12T00:28:09.172890868Z" level=info msg="RemovePodSandbox \"26c169043b244caeba0f8270bc1f37b83d033874ea24940ee42ea531939a68c5\" returns successfully" Jul 12 00:28:09.181460 env[1927]: time="2025-07-12T00:28:09.181242562Z" level=info msg="StopPodSandbox for \"eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23\"" Jul 12 00:28:09.181646 env[1927]: time="2025-07-12T00:28:09.181433797Z" level=info msg="TearDown network for sandbox \"eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23\" successfully" Jul 12 00:28:09.181646 env[1927]: time="2025-07-12T00:28:09.181520366Z" level=info msg="StopPodSandbox for \"eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23\" returns successfully" Jul 12 00:28:09.191577 env[1927]: time="2025-07-12T00:28:09.189814867Z" level=info msg="RemovePodSandbox for \"eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23\"" Jul 12 00:28:09.191577 env[1927]: time="2025-07-12T00:28:09.189878432Z" level=info msg="Forcibly stopping sandbox \"eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23\"" Jul 12 00:28:09.191577 env[1927]: time="2025-07-12T00:28:09.190037855Z" level=info msg="TearDown network for sandbox \"eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23\" successfully" Jul 12 
00:28:09.208932 env[1927]: time="2025-07-12T00:28:09.208868014Z" level=info msg="RemovePodSandbox \"eec98696338bdbfc3a86cd3c61ae4cae370d383cded14e1fb3fc522cc63f0e23\" returns successfully" Jul 12 00:28:09.357411 systemd[1]: run-containerd-runc-k8s.io-e308c4d00003083e592b571d262302e2345c3028a133fe1736ec9ad80192e913-runc.R5Ux4j.mount: Deactivated successfully. Jul 12 00:28:09.707294 kubelet[2959]: I0712 00:28:09.707183 2959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bs4vj" podStartSLOduration=12.707161071 podStartE2EDuration="12.707161071s" podCreationTimestamp="2025-07-12 00:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:28:04.813712198 +0000 UTC m=+176.162245578" watchObservedRunningTime="2025-07-12 00:28:09.707161071 +0000 UTC m=+181.055694451" Jul 12 00:28:10.463853 systemd-networkd[1596]: lxc_health: Gained IPv6LL Jul 12 00:28:11.776287 systemd[1]: run-containerd-runc-k8s.io-e308c4d00003083e592b571d262302e2345c3028a133fe1736ec9ad80192e913-runc.ybxnpd.mount: Deactivated successfully. Jul 12 00:28:14.091088 systemd[1]: run-containerd-runc-k8s.io-e308c4d00003083e592b571d262302e2345c3028a133fe1736ec9ad80192e913-runc.WidDjr.mount: Deactivated successfully. Jul 12 00:28:14.258500 sshd[6011]: pam_unix(sshd:session): session closed for user core Jul 12 00:28:14.264613 systemd-logind[1914]: Session 27 logged out. Waiting for processes to exit. Jul 12 00:28:14.265040 systemd[1]: sshd@26-172.31.23.9:22-147.75.109.163:51752.service: Deactivated successfully. Jul 12 00:28:14.266593 systemd[1]: session-27.scope: Deactivated successfully. Jul 12 00:28:14.273301 systemd-logind[1914]: Removed session 27. Jul 12 00:28:35.226321 amazon-ssm-agent[1898]: 2025-07-12 00:28:35 INFO [HealthCheck] HealthCheck reporting agent health. 
Jul 12 00:28:49.748380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f206fd67bd867e6fdaa3622c4f689911be503dc5f0ac3f445807535dcab39d58-rootfs.mount: Deactivated successfully. Jul 12 00:28:49.759379 env[1927]: time="2025-07-12T00:28:49.759314289Z" level=info msg="shim disconnected" id=f206fd67bd867e6fdaa3622c4f689911be503dc5f0ac3f445807535dcab39d58 Jul 12 00:28:49.760267 env[1927]: time="2025-07-12T00:28:49.760212035Z" level=warning msg="cleaning up after shim disconnected" id=f206fd67bd867e6fdaa3622c4f689911be503dc5f0ac3f445807535dcab39d58 namespace=k8s.io Jul 12 00:28:49.760407 env[1927]: time="2025-07-12T00:28:49.760378430Z" level=info msg="cleaning up dead shim" Jul 12 00:28:49.773869 env[1927]: time="2025-07-12T00:28:49.773815528Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7037 runtime=io.containerd.runc.v2\n" Jul 12 00:28:49.902726 kubelet[2959]: I0712 00:28:49.902590 2959 scope.go:117] "RemoveContainer" containerID="f206fd67bd867e6fdaa3622c4f689911be503dc5f0ac3f445807535dcab39d58" Jul 12 00:28:49.906964 env[1927]: time="2025-07-12T00:28:49.906898732Z" level=info msg="CreateContainer within sandbox \"4575fdb1fb3755a3344abb225313424572693c13bbb2c2f36d174f1cec610d28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 12 00:28:49.930620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180714629.mount: Deactivated successfully. 
Jul 12 00:28:49.948057 env[1927]: time="2025-07-12T00:28:49.947995774Z" level=info msg="CreateContainer within sandbox \"4575fdb1fb3755a3344abb225313424572693c13bbb2c2f36d174f1cec610d28\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3239a86b9251cdbeb1dd7c22a4bc60c312247fb87ebe752ae6895dcf61dcf713\"" Jul 12 00:28:49.948968 env[1927]: time="2025-07-12T00:28:49.948925512Z" level=info msg="StartContainer for \"3239a86b9251cdbeb1dd7c22a4bc60c312247fb87ebe752ae6895dcf61dcf713\"" Jul 12 00:28:50.081856 env[1927]: time="2025-07-12T00:28:50.081011608Z" level=info msg="StartContainer for \"3239a86b9251cdbeb1dd7c22a4bc60c312247fb87ebe752ae6895dcf61dcf713\" returns successfully" Jul 12 00:28:53.528481 kubelet[2959]: E0712 00:28:53.528426 2959 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-9?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 12 00:28:55.843287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9935c1c28016a07006d25ef29202970c08ffcc8685cd3e16b1868c0e6d4f373b-rootfs.mount: Deactivated successfully. 
Jul 12 00:28:55.859444 env[1927]: time="2025-07-12T00:28:55.859367133Z" level=info msg="shim disconnected" id=9935c1c28016a07006d25ef29202970c08ffcc8685cd3e16b1868c0e6d4f373b Jul 12 00:28:55.859444 env[1927]: time="2025-07-12T00:28:55.859437058Z" level=warning msg="cleaning up after shim disconnected" id=9935c1c28016a07006d25ef29202970c08ffcc8685cd3e16b1868c0e6d4f373b namespace=k8s.io Jul 12 00:28:55.860235 env[1927]: time="2025-07-12T00:28:55.859459078Z" level=info msg="cleaning up dead shim" Jul 12 00:28:55.873396 env[1927]: time="2025-07-12T00:28:55.873340470Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7093 runtime=io.containerd.runc.v2\n" Jul 12 00:28:55.925412 kubelet[2959]: I0712 00:28:55.925359 2959 scope.go:117] "RemoveContainer" containerID="9935c1c28016a07006d25ef29202970c08ffcc8685cd3e16b1868c0e6d4f373b" Jul 12 00:28:55.929477 env[1927]: time="2025-07-12T00:28:55.929423733Z" level=info msg="CreateContainer within sandbox \"aa5bcb477d76b886a5b186f8c3d9a0bf846c2a481bf68420c60902a79184dabf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 12 00:28:55.964803 env[1927]: time="2025-07-12T00:28:55.964712119Z" level=info msg="CreateContainer within sandbox \"aa5bcb477d76b886a5b186f8c3d9a0bf846c2a481bf68420c60902a79184dabf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"12ceb7043e1ef1465c512800e00f97d60361e3d05fbe126ed28a58f1952d1bc3\"" Jul 12 00:28:55.965465 env[1927]: time="2025-07-12T00:28:55.965401758Z" level=info msg="StartContainer for \"12ceb7043e1ef1465c512800e00f97d60361e3d05fbe126ed28a58f1952d1bc3\"" Jul 12 00:28:56.107827 env[1927]: time="2025-07-12T00:28:56.106486720Z" level=info msg="StartContainer for \"12ceb7043e1ef1465c512800e00f97d60361e3d05fbe126ed28a58f1952d1bc3\" returns successfully"