Mar 17 18:20:16.947015 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 17 18:20:16.947053 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025
Mar 17 18:20:16.947076 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:20:16.947091 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7171cf98
Mar 17 18:20:16.947105 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:20:16.947119 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 17 18:20:16.947148 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 17 18:20:16.947166 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 17 18:20:16.947181 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Mar 17 18:20:16.947195 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 17 18:20:16.947214 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 17 18:20:16.947228 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 17 18:20:16.947242 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 17 18:20:16.947256 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 17 18:20:16.947272 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 17 18:20:16.947291 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 17 18:20:16.947305 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 17 18:20:16.947320 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 17 18:20:16.947334 kernel: printk: bootconsole [uart0] enabled
Mar 17 18:20:16.947348 kernel: NUMA: Failed to initialise from firmware
Mar 17 18:20:16.947363 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 18:20:16.947378 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Mar 17 18:20:16.947393 kernel: Zone ranges:
Mar 17 18:20:16.947408 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 18:20:16.947422 kernel: DMA32 empty
Mar 17 18:20:16.947436 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 17 18:20:16.947454 kernel: Movable zone start for each node
Mar 17 18:20:16.947469 kernel: Early memory node ranges
Mar 17 18:20:16.947483 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 17 18:20:16.947498 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 17 18:20:16.947513 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 17 18:20:16.947527 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 17 18:20:16.947542 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 17 18:20:16.947557 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 17 18:20:16.947571 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 17 18:20:16.947586 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 17 18:20:16.947600 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 18:20:16.947614 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 17 18:20:16.947633 kernel: psci: probing for conduit method from ACPI.
Mar 17 18:20:16.947648 kernel: psci: PSCIv1.0 detected in firmware.
Mar 17 18:20:16.947694 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 18:20:16.947712 kernel: psci: Trusted OS migration not required
Mar 17 18:20:16.947727 kernel: psci: SMC Calling Convention v1.1
Mar 17 18:20:16.947746 kernel: ACPI: SRAT not present
Mar 17 18:20:16.947762 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Mar 17 18:20:16.947777 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Mar 17 18:20:16.947793 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 18:20:16.947808 kernel: Detected PIPT I-cache on CPU0
Mar 17 18:20:16.947824 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 18:20:16.947839 kernel: CPU features: detected: Spectre-v2
Mar 17 18:20:16.947869 kernel: CPU features: detected: Spectre-v3a
Mar 17 18:20:16.947890 kernel: CPU features: detected: Spectre-BHB
Mar 17 18:20:16.947906 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 18:20:16.947921 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 18:20:16.947941 kernel: CPU features: detected: ARM erratum 1742098
Mar 17 18:20:16.947957 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 17 18:20:16.947972 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 17 18:20:16.947987 kernel: Policy zone: Normal
Mar 17 18:20:16.948005 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:20:16.948022 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:20:16.948037 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:20:16.948053 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:20:16.948068 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:20:16.948083 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 17 18:20:16.948103 kernel: Memory: 3824524K/4030464K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 205940K reserved, 0K cma-reserved)
Mar 17 18:20:16.948120 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:20:16.948135 kernel: trace event string verifier disabled
Mar 17 18:20:16.948150 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 18:20:16.948167 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:20:16.948183 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:20:16.948198 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 18:20:16.948225 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:20:16.948244 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:20:16.948272 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:20:16.948288 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 18:20:16.948304 kernel: GICv3: 96 SPIs implemented
Mar 17 18:20:16.948324 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 18:20:16.948339 kernel: GICv3: Distributor has no Range Selector support
Mar 17 18:20:16.948354 kernel: Root IRQ handler: gic_handle_irq
Mar 17 18:20:16.948369 kernel: GICv3: 16 PPIs implemented
Mar 17 18:20:16.948385 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 17 18:20:16.948400 kernel: ACPI: SRAT not present
Mar 17 18:20:16.948415 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 17 18:20:16.948431 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 18:20:16.948446 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 18:20:16.948462 kernel: GICv3: using LPI property table @0x00000004000b0000
Mar 17 18:20:16.948477 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 17 18:20:16.948496 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Mar 17 18:20:16.948511 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 17 18:20:16.948527 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 17 18:20:16.948542 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 17 18:20:16.948558 kernel: Console: colour dummy device 80x25
Mar 17 18:20:16.948573 kernel: printk: console [tty1] enabled
Mar 17 18:20:16.948589 kernel: ACPI: Core revision 20210730
Mar 17 18:20:16.948605 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 17 18:20:16.948621 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:20:16.948636 kernel: LSM: Security Framework initializing
Mar 17 18:20:16.956686 kernel: SELinux: Initializing.
Mar 17 18:20:16.956716 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:20:16.956733 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:20:16.956750 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:20:16.956767 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 17 18:20:16.956782 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 17 18:20:16.956798 kernel: Remapping and enabling EFI services.
Mar 17 18:20:16.956814 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:20:16.956830 kernel: Detected PIPT I-cache on CPU1
Mar 17 18:20:16.956846 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 17 18:20:16.956869 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Mar 17 18:20:16.956885 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 17 18:20:16.956901 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:20:16.956917 kernel: SMP: Total of 2 processors activated.
Mar 17 18:20:16.956933 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 18:20:16.956948 kernel: CPU features: detected: 32-bit EL1 Support
Mar 17 18:20:16.956964 kernel: CPU features: detected: CRC32 instructions
Mar 17 18:20:16.956980 kernel: CPU: All CPU(s) started at EL1
Mar 17 18:20:16.956995 kernel: alternatives: patching kernel code
Mar 17 18:20:16.957014 kernel: devtmpfs: initialized
Mar 17 18:20:16.957031 kernel: KASLR disabled due to lack of seed
Mar 17 18:20:16.957057 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:20:16.957077 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:20:16.957094 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:20:16.957110 kernel: SMBIOS 3.0.0 present.
Mar 17 18:20:16.957126 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 17 18:20:16.957142 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:20:16.957159 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 18:20:16.957175 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 18:20:16.957192 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 18:20:16.957212 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:20:16.957229 kernel: audit: type=2000 audit(0.248:1): state=initialized audit_enabled=0 res=1
Mar 17 18:20:16.957245 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:20:16.957262 kernel: cpuidle: using governor menu
Mar 17 18:20:16.957278 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 18:20:16.957298 kernel: ASID allocator initialised with 32768 entries
Mar 17 18:20:16.957315 kernel: ACPI: bus type PCI registered
Mar 17 18:20:16.957331 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:20:16.957348 kernel: Serial: AMBA PL011 UART driver
Mar 17 18:20:16.957364 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:20:16.957381 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 18:20:16.957397 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:20:16.957413 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 18:20:16.957430 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:20:16.957451 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 18:20:16.957468 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:20:16.957484 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:20:16.957501 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:20:16.957517 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:20:16.957533 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:20:16.957550 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:20:16.957567 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:20:16.957583 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:20:16.957603 kernel: ACPI: Interpreter enabled
Mar 17 18:20:16.957620 kernel: ACPI: Using GIC for interrupt routing
Mar 17 18:20:16.957636 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 18:20:16.957669 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Mar 17 18:20:16.957986 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:20:16.958182 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 18:20:16.958370 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 18:20:16.958557 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Mar 17 18:20:16.958791 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Mar 17 18:20:16.958816 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 17 18:20:16.958833 kernel: acpiphp: Slot [1] registered
Mar 17 18:20:16.958850 kernel: acpiphp: Slot [2] registered
Mar 17 18:20:16.958866 kernel: acpiphp: Slot [3] registered
Mar 17 18:20:16.958882 kernel: acpiphp: Slot [4] registered
Mar 17 18:20:16.958899 kernel: acpiphp: Slot [5] registered
Mar 17 18:20:16.958915 kernel: acpiphp: Slot [6] registered
Mar 17 18:20:16.958931 kernel: acpiphp: Slot [7] registered
Mar 17 18:20:16.958952 kernel: acpiphp: Slot [8] registered
Mar 17 18:20:16.958968 kernel: acpiphp: Slot [9] registered
Mar 17 18:20:16.958984 kernel: acpiphp: Slot [10] registered
Mar 17 18:20:16.959000 kernel: acpiphp: Slot [11] registered
Mar 17 18:20:16.959016 kernel: acpiphp: Slot [12] registered
Mar 17 18:20:16.959033 kernel: acpiphp: Slot [13] registered
Mar 17 18:20:16.959049 kernel: acpiphp: Slot [14] registered
Mar 17 18:20:16.959065 kernel: acpiphp: Slot [15] registered
Mar 17 18:20:16.959081 kernel: acpiphp: Slot [16] registered
Mar 17 18:20:16.959100 kernel: acpiphp: Slot [17] registered
Mar 17 18:20:16.959117 kernel: acpiphp: Slot [18] registered
Mar 17 18:20:16.959133 kernel: acpiphp: Slot [19] registered
Mar 17 18:20:16.959149 kernel: acpiphp: Slot [20] registered
Mar 17 18:20:16.959165 kernel: acpiphp: Slot [21] registered
Mar 17 18:20:16.959181 kernel: acpiphp: Slot [22] registered
Mar 17 18:20:16.959197 kernel: acpiphp: Slot [23] registered
Mar 17 18:20:16.959214 kernel: acpiphp: Slot [24] registered
Mar 17 18:20:16.959230 kernel: acpiphp: Slot [25] registered
Mar 17 18:20:16.959246 kernel: acpiphp: Slot [26] registered
Mar 17 18:20:16.959266 kernel: acpiphp: Slot [27] registered
Mar 17 18:20:16.959282 kernel: acpiphp: Slot [28] registered
Mar 17 18:20:16.959298 kernel: acpiphp: Slot [29] registered
Mar 17 18:20:16.959315 kernel: acpiphp: Slot [30] registered
Mar 17 18:20:16.959331 kernel: acpiphp: Slot [31] registered
Mar 17 18:20:16.959347 kernel: PCI host bridge to bus 0000:00
Mar 17 18:20:16.959548 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 17 18:20:16.961267 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 18:20:16.961455 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 17 18:20:16.961634 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Mar 17 18:20:16.961922 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 17 18:20:16.962138 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 17 18:20:16.962337 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 17 18:20:16.962557 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 17 18:20:16.967277 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 17 18:20:16.967515 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 18:20:16.967778 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 17 18:20:16.968039 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 17 18:20:16.968265 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 17 18:20:16.968486 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 17 18:20:16.968743 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 18:20:16.968977 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Mar 17 18:20:16.969193 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Mar 17 18:20:16.969414 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Mar 17 18:20:16.973251 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Mar 17 18:20:16.973521 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Mar 17 18:20:16.973763 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 17 18:20:16.973972 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 18:20:16.974179 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 17 18:20:16.974203 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 18:20:16.974220 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 18:20:16.974237 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 18:20:16.974254 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 18:20:16.974271 kernel: iommu: Default domain type: Translated
Mar 17 18:20:16.974287 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 18:20:16.974304 kernel: vgaarb: loaded
Mar 17 18:20:16.974320 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:20:16.974342 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:20:16.974358 kernel: PTP clock support registered
Mar 17 18:20:16.974375 kernel: Registered efivars operations
Mar 17 18:20:16.974391 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 18:20:16.974407 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:20:16.974424 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:20:16.974451 kernel: pnp: PnP ACPI init
Mar 17 18:20:16.974693 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 17 18:20:16.974724 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 18:20:16.974742 kernel: NET: Registered PF_INET protocol family
Mar 17 18:20:16.974758 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:20:16.974775 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:20:16.974792 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:20:16.974809 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:20:16.974826 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:20:16.974842 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:20:16.974858 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:20:16.974879 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:20:16.974896 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:20:16.974912 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:20:16.974929 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 17 18:20:16.974945 kernel: kvm [1]: HYP mode not available
Mar 17 18:20:16.974962 kernel: Initialise system trusted keyrings
Mar 17 18:20:16.974978 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:20:16.974995 kernel: Key type asymmetric registered
Mar 17 18:20:16.975011 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:20:16.975031 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:20:16.975048 kernel: io scheduler mq-deadline registered
Mar 17 18:20:16.975064 kernel: io scheduler kyber registered
Mar 17 18:20:16.975080 kernel: io scheduler bfq registered
Mar 17 18:20:16.975313 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 17 18:20:16.975340 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 18:20:16.980691 kernel: ACPI: button: Power Button [PWRB]
Mar 17 18:20:16.980735 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 17 18:20:16.980761 kernel: ACPI: button: Sleep Button [SLPB]
Mar 17 18:20:16.980779 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:20:16.980797 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 18:20:16.981038 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 17 18:20:16.981063 kernel: printk: console [ttyS0] disabled
Mar 17 18:20:16.981081 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 17 18:20:16.981098 kernel: printk: console [ttyS0] enabled
Mar 17 18:20:16.981115 kernel: printk: bootconsole [uart0] disabled
Mar 17 18:20:16.981131 kernel: thunder_xcv, ver 1.0
Mar 17 18:20:16.981147 kernel: thunder_bgx, ver 1.0
Mar 17 18:20:16.981169 kernel: nicpf, ver 1.0
Mar 17 18:20:16.981185 kernel: nicvf, ver 1.0
Mar 17 18:20:16.981398 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 18:20:16.981583 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:20:16 UTC (1742235616)
Mar 17 18:20:16.981606 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 18:20:16.981624 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:20:16.981640 kernel: Segment Routing with IPv6
Mar 17 18:20:16.981676 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:20:16.981701 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:20:16.981718 kernel: Key type dns_resolver registered
Mar 17 18:20:16.981735 kernel: registered taskstats version 1
Mar 17 18:20:16.981752 kernel: Loading compiled-in X.509 certificates
Mar 17 18:20:16.981769 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4'
Mar 17 18:20:16.981785 kernel: Key type .fscrypt registered
Mar 17 18:20:16.981801 kernel: Key type fscrypt-provisioning registered
Mar 17 18:20:16.981818 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:20:16.981834 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:20:16.981855 kernel: ima: No architecture policies found
Mar 17 18:20:16.981871 kernel: clk: Disabling unused clocks
Mar 17 18:20:16.981888 kernel: Freeing unused kernel memory: 36416K
Mar 17 18:20:16.981904 kernel: Run /init as init process
Mar 17 18:20:16.981920 kernel: with arguments:
Mar 17 18:20:16.981936 kernel: /init
Mar 17 18:20:16.981953 kernel: with environment:
Mar 17 18:20:16.981968 kernel: HOME=/
Mar 17 18:20:16.981985 kernel: TERM=linux
Mar 17 18:20:16.982004 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:20:16.982027 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:20:16.982048 systemd[1]: Detected virtualization amazon.
Mar 17 18:20:16.982067 systemd[1]: Detected architecture arm64.
Mar 17 18:20:16.982085 systemd[1]: Running in initrd.
Mar 17 18:20:16.982103 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:20:16.982120 systemd[1]: Hostname set to .
Mar 17 18:20:16.982142 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:20:16.982160 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:20:16.982178 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:20:16.982195 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:20:16.982212 systemd[1]: Reached target paths.target.
Mar 17 18:20:16.982230 systemd[1]: Reached target slices.target.
Mar 17 18:20:16.982247 systemd[1]: Reached target swap.target.
Mar 17 18:20:16.982265 systemd[1]: Reached target timers.target.
Mar 17 18:20:16.982287 systemd[1]: Listening on iscsid.socket.
Mar 17 18:20:16.982322 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:20:16.982341 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:20:16.982359 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:20:16.982377 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:20:16.982395 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:20:16.982413 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:20:16.982431 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:20:16.982453 systemd[1]: Reached target sockets.target.
Mar 17 18:20:16.982471 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:20:16.982489 systemd[1]: Finished network-cleanup.service.
Mar 17 18:20:16.982506 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:20:16.982524 systemd[1]: Starting systemd-journald.service...
Mar 17 18:20:16.982552 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:20:16.982575 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:20:16.982593 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:20:16.982611 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:20:16.982635 kernel: audit: type=1130 audit(1742235616.949:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:16.982672 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:20:16.982695 kernel: audit: type=1130 audit(1742235616.961:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:16.982713 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:20:16.982735 systemd-journald[309]: Journal started
Mar 17 18:20:16.982830 systemd-journald[309]: Runtime Journal (/run/log/journal/ec2c3ea83c47698eaedd97982b56a23e) is 8.0M, max 75.4M, 67.4M free.
Mar 17 18:20:16.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:16.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:16.939530 systemd-modules-load[310]: Inserted module 'overlay'
Mar 17 18:20:17.009767 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:20:17.009818 kernel: audit: type=1130 audit(1742235616.980:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:16.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.020683 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:20:17.020755 systemd[1]: Started systemd-journald.service.
Mar 17 18:20:17.013018 systemd-resolved[311]: Positive Trust Anchors:
Mar 17 18:20:17.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.013044 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:20:17.013098 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:20:17.060288 kernel: audit: type=1130 audit(1742235617.023:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.068243 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:20:17.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.082283 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:20:17.086079 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:20:17.086110 kernel: audit: type=1130 audit(1742235617.072:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.086909 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:20:17.101262 kernel: audit: type=1130 audit(1742235617.082:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.101309 kernel: Bridge firewalling registered
Mar 17 18:20:17.096797 systemd-modules-load[310]: Inserted module 'br_netfilter'
Mar 17 18:20:17.122680 kernel: SCSI subsystem initialized
Mar 17 18:20:17.129519 dracut-cmdline[326]: dracut-dracut-053
Mar 17 18:20:17.143186 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:20:17.143254 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:20:17.143498 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:20:17.158688 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:20:17.164630 systemd-modules-load[310]: Inserted module 'dm_multipath'
Mar 17 18:20:17.167806 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:20:17.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.178122 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:20:17.185693 kernel: audit: type=1130 audit(1742235617.166:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.198413 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:20:17.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.208702 kernel: audit: type=1130 audit(1742235617.197:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.291689 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:20:17.312839 kernel: iscsi: registered transport (tcp)
Mar 17 18:20:17.339019 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:20:17.339099 kernel: QLogic iSCSI HBA Driver
Mar 17 18:20:17.531195 systemd-resolved[311]: Defaulting to hostname 'linux'.
Mar 17 18:20:17.533608 kernel: random: crng init done
Mar 17 18:20:17.535475 systemd[1]: Started systemd-resolved.service.
Mar 17 18:20:17.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.546604 kernel: audit: type=1130 audit(1742235617.534:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.537502 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:20:17.564486 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:20:17.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:17.568514 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:20:17.633697 kernel: raid6: neonx8 gen() 6391 MB/s
Mar 17 18:20:17.651683 kernel: raid6: neonx8 xor() 4761 MB/s
Mar 17 18:20:17.669683 kernel: raid6: neonx4 gen() 6454 MB/s
Mar 17 18:20:17.687683 kernel: raid6: neonx4 xor() 4973 MB/s
Mar 17 18:20:17.705683 kernel: raid6: neonx2 gen() 5740 MB/s
Mar 17 18:20:17.723683 kernel: raid6: neonx2 xor() 4549 MB/s
Mar 17 18:20:17.741682 kernel: raid6: neonx1 gen() 4410 MB/s
Mar 17 18:20:17.759684 kernel: raid6: neonx1 xor() 3687 MB/s
Mar 17 18:20:17.777683 kernel: raid6: int64x8 gen() 3394 MB/s
Mar 17 18:20:17.795683 kernel: raid6: int64x8 xor() 2081 MB/s
Mar 17 18:20:17.813683 kernel: raid6: int64x4 gen() 3760 MB/s
Mar 17 18:20:17.831684 kernel: raid6: int64x4 xor() 2189 MB/s
Mar 17 18:20:17.849683 kernel: raid6: int64x2 gen() 3553 MB/s
Mar 17 18:20:17.867684 kernel: raid6: int64x2 xor() 1941 MB/s
Mar 17 18:20:17.885683 kernel: raid6: int64x1 gen() 2740 MB/s
Mar 17 18:20:17.904928 kernel: raid6: int64x1 xor() 1445 MB/s
Mar 17 18:20:17.904958 kernel: raid6: using algorithm neonx4 gen() 6454 MB/s
Mar 17 18:20:17.904981 kernel: raid6: .... xor() 4973 MB/s, rmw enabled
Mar 17 18:20:17.906614 kernel: raid6: using neon recovery algorithm
Mar 17 18:20:17.924691 kernel: xor: measuring software checksum speed
Mar 17 18:20:17.928047 kernel: 8regs : 8803 MB/sec
Mar 17 18:20:17.928085 kernel: 32regs : 11079 MB/sec
Mar 17 18:20:17.929886 kernel: arm64_neon : 9003 MB/sec
Mar 17 18:20:17.929915 kernel: xor: using function: 32regs (11079 MB/sec)
Mar 17 18:20:18.022702 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Mar 17 18:20:18.039638 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:20:18.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:18.041000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:20:18.041000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:20:18.044256 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:20:18.073287 systemd-udevd[509]: Using default interface naming scheme 'v252'.
Mar 17 18:20:18.082993 systemd[1]: Started systemd-udevd.service.
Mar 17 18:20:18.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:18.092409 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:20:18.122337 dracut-pre-trigger[522]: rd.md=0: removing MD RAID activation
Mar 17 18:20:18.180335 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:20:18.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:18.184820 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:20:18.295137 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:20:18.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:18.420699 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 18:20:18.420777 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 17 18:20:18.435565 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 17 18:20:18.435825 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 17 18:20:18.436070 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:8d:b3:02:aa:6b
Mar 17 18:20:18.437960 (udev-worker)[565]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:20:18.441377 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 17 18:20:18.443693 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 17 18:20:18.452688 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 17 18:20:18.460001 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:20:18.460057 kernel: GPT:9289727 != 16777215
Mar 17 18:20:18.460081 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 18:20:18.463196 kernel: GPT:9289727 != 16777215
Mar 17 18:20:18.465411 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 18:20:18.465459 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 18:20:18.533703 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (563)
Mar 17 18:20:18.585338 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 18:20:18.614564 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Mar 17 18:20:18.614784 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Mar 17 18:20:18.620735 systemd[1]: Starting disk-uuid.service...
Mar 17 18:20:18.642737 disk-uuid[667]: Primary Header is updated.
Mar 17 18:20:18.642737 disk-uuid[667]: Secondary Entries is updated.
Mar 17 18:20:18.642737 disk-uuid[667]: Secondary Header is updated.
Mar 17 18:20:18.665322 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:20:18.693734 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 18:20:19.653695 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 18:20:19.654007 disk-uuid[671]: The operation has completed successfully.
Mar 17 18:20:19.833177 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 18:20:19.835359 systemd[1]: Finished disk-uuid.service.
Mar 17 18:20:19.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:19.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:19.867819 systemd[1]: Starting verity-setup.service...
Mar 17 18:20:19.903681 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 18:20:20.001915 systemd[1]: Found device dev-mapper-usr.device.
Mar 17 18:20:20.006598 systemd[1]: Mounting sysusr-usr.mount...
Mar 17 18:20:20.009942 systemd[1]: Finished verity-setup.service.
Mar 17 18:20:20.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:20.101700 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Mar 17 18:20:20.102557 systemd[1]: Mounted sysusr-usr.mount.
Mar 17 18:20:20.105475 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Mar 17 18:20:20.109206 systemd[1]: Starting ignition-setup.service...
Mar 17 18:20:20.112188 systemd[1]: Starting parse-ip-for-networkd.service...
Mar 17 18:20:20.148473 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 18:20:20.148542 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 18:20:20.150519 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Mar 17 18:20:20.159691 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 18:20:20.178756 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 18:20:20.196748 systemd[1]: Finished ignition-setup.service.
Mar 17 18:20:20.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:20.201494 systemd[1]: Starting ignition-fetch-offline.service...
Mar 17 18:20:20.256475 systemd[1]: Finished parse-ip-for-networkd.service.
Mar 17 18:20:20.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:20.258000 audit: BPF prog-id=9 op=LOAD
Mar 17 18:20:20.261225 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:20:20.305393 systemd-networkd[1100]: lo: Link UP
Mar 17 18:20:20.305416 systemd-networkd[1100]: lo: Gained carrier
Mar 17 18:20:20.308709 systemd-networkd[1100]: Enumeration completed
Mar 17 18:20:20.309156 systemd-networkd[1100]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:20:20.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:20.309359 systemd[1]: Started systemd-networkd.service.
Mar 17 18:20:20.316003 systemd[1]: Reached target network.target.
Mar 17 18:20:20.333104 systemd-networkd[1100]: eth0: Link UP
Mar 17 18:20:20.333113 systemd-networkd[1100]: eth0: Gained carrier
Mar 17 18:20:20.333401 systemd[1]: Starting iscsiuio.service...
Mar 17 18:20:20.350061 systemd[1]: Started iscsiuio.service.
Mar 17 18:20:20.351784 systemd-networkd[1100]: eth0: DHCPv4 address 172.31.30.28/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 18:20:20.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:20.358913 systemd[1]: Starting iscsid.service...
Mar 17 18:20:20.366384 iscsid[1105]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:20:20.366384 iscsid[1105]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Mar 17 18:20:20.366384 iscsid[1105]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Mar 17 18:20:20.366384 iscsid[1105]: If using hardware iscsi like qla4xxx this message can be ignored.
Mar 17 18:20:20.366384 iscsid[1105]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:20:20.399909 iscsid[1105]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Mar 17 18:20:20.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:20.381807 systemd[1]: Started iscsid.service.
Mar 17 18:20:20.400621 systemd[1]: Starting dracut-initqueue.service...
Mar 17 18:20:20.424789 systemd[1]: Finished dracut-initqueue.service.
Mar 17 18:20:20.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:20.427935 systemd[1]: Reached target remote-fs-pre.target.
Mar 17 18:20:20.429848 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:20:20.435398 systemd[1]: Reached target remote-fs.target.
Mar 17 18:20:20.441129 systemd[1]: Starting dracut-pre-mount.service...
Mar 17 18:20:20.459612 systemd[1]: Finished dracut-pre-mount.service.
Mar 17 18:20:20.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:20.883176 ignition[1055]: Ignition 2.14.0
Mar 17 18:20:20.884064 ignition[1055]: Stage: fetch-offline
Mar 17 18:20:20.885447 ignition[1055]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:20:20.885512 ignition[1055]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:20:20.913812 ignition[1055]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:20:20.916428 ignition[1055]: Ignition finished successfully
Mar 17 18:20:20.919599 systemd[1]: Finished ignition-fetch-offline.service.
Mar 17 18:20:20.931736 kernel: kauditd_printk_skb: 18 callbacks suppressed
Mar 17 18:20:20.931778 kernel: audit: type=1130 audit(1742235620.920:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:20.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:20.924197 systemd[1]: Starting ignition-fetch.service...
Mar 17 18:20:20.942309 ignition[1124]: Ignition 2.14.0
Mar 17 18:20:20.943985 ignition[1124]: Stage: fetch
Mar 17 18:20:20.945356 ignition[1124]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:20:20.945457 ignition[1124]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:20:20.958739 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:20:20.961940 ignition[1124]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:20:20.967552 ignition[1124]: INFO : PUT result: OK
Mar 17 18:20:20.970769 ignition[1124]: DEBUG : parsed url from cmdline: ""
Mar 17 18:20:20.970769 ignition[1124]: INFO : no config URL provided
Mar 17 18:20:20.970769 ignition[1124]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:20:20.976446 ignition[1124]: INFO : no config at "/usr/lib/ignition/user.ign"
Mar 17 18:20:20.976446 ignition[1124]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:20:20.976446 ignition[1124]: INFO : PUT result: OK
Mar 17 18:20:20.976446 ignition[1124]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 17 18:20:20.986038 ignition[1124]: INFO : GET result: OK
Mar 17 18:20:20.986038 ignition[1124]: DEBUG : parsing config with SHA512: be95fd5ae63b3910749b7a62d8dd98f1ba8207082a31e6d36e1b4e198738f24647cf6bdef993696d255b38d49a2c8d5ca86c9cec833780643b48bee9720f5442
Mar 17 18:20:20.987540 unknown[1124]: fetched base config from "system"
Mar 17 18:20:20.988491 ignition[1124]: fetch: fetch complete
Mar 17 18:20:20.987557 unknown[1124]: fetched base config from "system"
Mar 17 18:20:20.988504 ignition[1124]: fetch: fetch passed
Mar 17 18:20:20.987572 unknown[1124]: fetched user config from "aws"
Mar 17 18:20:20.988586 ignition[1124]: Ignition finished successfully
Mar 17 18:20:21.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:21.001925 systemd[1]: Finished ignition-fetch.service.
Mar 17 18:20:21.013110 kernel: audit: type=1130 audit(1742235621.001:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:21.004978 systemd[1]: Starting ignition-kargs.service...
Mar 17 18:20:21.026842 ignition[1130]: Ignition 2.14.0
Mar 17 18:20:21.026871 ignition[1130]: Stage: kargs
Mar 17 18:20:21.027164 ignition[1130]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:20:21.027220 ignition[1130]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:20:21.040606 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:20:21.043056 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:20:21.046198 ignition[1130]: INFO : PUT result: OK
Mar 17 18:20:21.051108 ignition[1130]: kargs: kargs passed
Mar 17 18:20:21.051359 ignition[1130]: Ignition finished successfully
Mar 17 18:20:21.055836 systemd[1]: Finished ignition-kargs.service.
Mar 17 18:20:21.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:21.060143 systemd[1]: Starting ignition-disks.service...
Mar 17 18:20:21.070688 kernel: audit: type=1130 audit(1742235621.057:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:21.075075 ignition[1136]: Ignition 2.14.0
Mar 17 18:20:21.075571 ignition[1136]: Stage: disks
Mar 17 18:20:21.075925 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:20:21.075981 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:20:21.089230 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:20:21.091476 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:20:21.094527 ignition[1136]: INFO : PUT result: OK
Mar 17 18:20:21.099016 ignition[1136]: disks: disks passed
Mar 17 18:20:21.099118 ignition[1136]: Ignition finished successfully
Mar 17 18:20:21.103250 systemd[1]: Finished ignition-disks.service.
Mar 17 18:20:21.106272 systemd[1]: Reached target initrd-root-device.target.
Mar 17 18:20:21.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:21.116031 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:20:21.136486 kernel: audit: type=1130 audit(1742235621.105:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:21.117735 systemd[1]: Reached target local-fs.target.
Mar 17 18:20:21.118472 systemd[1]: Reached target sysinit.target.
Mar 17 18:20:21.119089 systemd[1]: Reached target basic.target.
Mar 17 18:20:21.121770 systemd[1]: Starting systemd-fsck-root.service...
Mar 17 18:20:21.167919 systemd-fsck[1144]: ROOT: clean, 623/553520 files, 56021/553472 blocks
Mar 17 18:20:21.173933 systemd[1]: Finished systemd-fsck-root.service.
Mar 17 18:20:21.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:21.178250 systemd[1]: Mounting sysroot.mount...
Mar 17 18:20:21.187038 kernel: audit: type=1130 audit(1742235621.174:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:21.205707 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 17 18:20:21.206524 systemd[1]: Mounted sysroot.mount.
Mar 17 18:20:21.208238 systemd[1]: Reached target initrd-root-fs.target.
Mar 17 18:20:21.224489 systemd[1]: Mounting sysroot-usr.mount...
Mar 17 18:20:21.226828 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Mar 17 18:20:21.231146 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 18:20:21.234546 systemd[1]: Reached target ignition-diskful.target.
Mar 17 18:20:21.241425 systemd[1]: Mounted sysroot-usr.mount.
Mar 17 18:20:21.251791 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:20:21.255057 systemd[1]: Starting initrd-setup-root.service...
Mar 17 18:20:21.275642 initrd-setup-root[1166]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 18:20:21.286581 initrd-setup-root[1174]: cut: /sysroot/etc/group: No such file or directory
Mar 17 18:20:21.288979 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1161)
Mar 17 18:20:21.298324 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 18:20:21.298390 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 18:20:21.300377 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Mar 17 18:20:21.302545 initrd-setup-root[1186]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 18:20:21.312873 initrd-setup-root[1206]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 18:20:21.345702 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 18:20:21.356771 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:20:21.520864 systemd[1]: Finished initrd-setup-root.service.
Mar 17 18:20:21.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:21.525140 systemd[1]: Starting ignition-mount.service...
Mar 17 18:20:21.534899 systemd[1]: Starting sysroot-boot.service...
Mar 17 18:20:21.540865 kernel: audit: type=1130 audit(1742235621.522:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:21.546557 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Mar 17 18:20:21.546780 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Mar 17 18:20:21.576428 systemd[1]: Finished sysroot-boot.service.
Mar 17 18:20:21.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:21.586735 kernel: audit: type=1130 audit(1742235621.578:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
res=success' Mar 17 18:20:21.618846 ignition[1229]: INFO : Ignition 2.14.0 Mar 17 18:20:21.620675 ignition[1229]: INFO : Stage: mount Mar 17 18:20:21.622264 ignition[1229]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:20:21.624703 ignition[1229]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Mar 17 18:20:21.639316 ignition[1229]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 18:20:21.642042 ignition[1229]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 18:20:21.644716 ignition[1229]: INFO : PUT result: OK Mar 17 18:20:21.649718 ignition[1229]: INFO : mount: mount passed Mar 17 18:20:21.651441 ignition[1229]: INFO : Ignition finished successfully Mar 17 18:20:21.654296 systemd[1]: Finished ignition-mount.service. Mar 17 18:20:21.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:21.658531 systemd[1]: Starting ignition-files.service... Mar 17 18:20:21.666744 kernel: audit: type=1130 audit(1742235621.655:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:21.674930 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:20:21.697703 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1236) Mar 17 18:20:21.702961 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:20:21.703003 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 17 18:20:21.703027 kernel: BTRFS info (device nvme0n1p6): has skinny extents Mar 17 18:20:21.718686 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 17 18:20:21.723741 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
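[editor's note] Every Ignition stage above logs the SHA512 of the base config it parses from /usr/lib/ignition/base.d/base.ign. A small sketch of reproducing that digest and checking it against the logged value; the comparison is an assumed verification step, not something Ignition itself performs beyond printing the hash.

    import hashlib

    # Digest string copied from the "parsing config with SHA512: ..." lines above.
    logged_sha512 = (
        "6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8"
        "b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b"
    )

    # Recompute the digest of the base config file named in the log and compare.
    with open("/usr/lib/ignition/base.d/base.ign", "rb") as f:
        computed = hashlib.sha512(f.read()).hexdigest()

    print("match" if computed == logged_sha512 else "mismatch")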
Mar 17 18:20:21.742802 ignition[1255]: INFO : Ignition 2.14.0 Mar 17 18:20:21.742802 ignition[1255]: INFO : Stage: files Mar 17 18:20:21.746042 ignition[1255]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:20:21.746042 ignition[1255]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Mar 17 18:20:21.760815 ignition[1255]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 18:20:21.763208 ignition[1255]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 18:20:21.765999 ignition[1255]: INFO : PUT result: OK Mar 17 18:20:21.770564 ignition[1255]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:20:21.775862 ignition[1255]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:20:21.778493 ignition[1255]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:20:21.812909 ignition[1255]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:20:21.815587 ignition[1255]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:20:21.819793 unknown[1255]: wrote ssh authorized keys file for user: core Mar 17 18:20:21.821944 ignition[1255]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:20:21.825515 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Mar 17 18:20:21.828881 ignition[1255]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:20:21.840883 ignition[1255]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2082483500" Mar 17 18:20:21.840883 ignition[1255]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2082483500": device or resource busy Mar 17 18:20:21.840883 ignition[1255]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2082483500", trying btrfs: device or resource busy Mar 17 18:20:21.840883 ignition[1255]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2082483500" Mar 17 18:20:21.840883 ignition[1255]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2082483500" Mar 17 18:20:21.855615 ignition[1255]: INFO : op(3): [started] unmounting "/mnt/oem2082483500" Mar 17 18:20:21.855615 ignition[1255]: INFO : op(3): [finished] unmounting "/mnt/oem2082483500" Mar 17 18:20:21.855615 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Mar 17 18:20:21.863316 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:20:21.863316 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:20:21.869937 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:20:21.869937 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:20:21.869937 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:20:21.869937 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:20:21.869937 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Mar 17 18:20:21.869937 ignition[1255]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:20:21.902303 ignition[1255]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2073038270" Mar 17 18:20:21.902303 ignition[1255]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2073038270": device or resource busy Mar 17 18:20:21.902303 ignition[1255]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2073038270", trying btrfs: device or resource busy Mar 17 18:20:21.902303 ignition[1255]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2073038270" Mar 17 18:20:21.902303 ignition[1255]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2073038270" Mar 17 18:20:21.902303 ignition[1255]: INFO : op(6): [started] unmounting "/mnt/oem2073038270" Mar 17 18:20:21.902303 ignition[1255]: INFO : op(6): [finished] unmounting "/mnt/oem2073038270" Mar 17 18:20:21.902303 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Mar 17 18:20:21.902303 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Mar 17 18:20:21.902303 ignition[1255]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:20:21.937411 ignition[1255]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem529321295" Mar 17 18:20:21.937411 ignition[1255]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem529321295": device or resource busy Mar 17 18:20:21.937411 ignition[1255]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem529321295", trying btrfs: device or resource busy Mar 17 18:20:21.937411 ignition[1255]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem529321295" Mar 17 18:20:21.937411 ignition[1255]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem529321295" Mar 17 18:20:21.937411 ignition[1255]: INFO : op(9): [started] unmounting "/mnt/oem529321295" Mar 17 18:20:21.937411 ignition[1255]: INFO : op(9): [finished] unmounting "/mnt/oem529321295" Mar 17 18:20:21.937411 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Mar 17 18:20:21.959541 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Mar 17 18:20:21.959541 ignition[1255]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:20:21.977626 ignition[1255]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3692976855" Mar 17 18:20:21.981568 ignition[1255]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3692976855": device or resource busy Mar 17 18:20:21.981568 ignition[1255]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at 
"/mnt/oem3692976855", trying btrfs: device or resource busy Mar 17 18:20:21.981568 ignition[1255]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3692976855" Mar 17 18:20:21.990568 ignition[1255]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3692976855" Mar 17 18:20:21.993111 ignition[1255]: INFO : op(c): [started] unmounting "/mnt/oem3692976855" Mar 17 18:20:21.995284 ignition[1255]: INFO : op(c): [finished] unmounting "/mnt/oem3692976855" Mar 17 18:20:21.997358 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Mar 17 18:20:21.997358 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:20:21.997358 ignition[1255]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Mar 17 18:20:22.087944 systemd-networkd[1100]: eth0: Gained IPv6LL Mar 17 18:20:22.477045 ignition[1255]: INFO : GET result: OK Mar 17 18:20:22.965314 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(b): [started] processing unit "coreos-metadata-sshkeys@.service" Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service" Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(c): [started] processing unit "amazon-ssm-agent.service" Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(c): op(d): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(c): op(d): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(c): [finished] processing unit "amazon-ssm-agent.service" Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(e): [started] processing unit "nvidia.service" Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(e): [finished] processing unit "nvidia.service" Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(f): [started] setting preset to enabled for "amazon-ssm-agent.service" Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(f): [finished] setting preset to enabled for "amazon-ssm-agent.service" Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(10): [started] setting preset to enabled for "nvidia.service" Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(10): [finished] setting preset to enabled for "nvidia.service" Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 18:20:22.970475 ignition[1255]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 18:20:23.015708 ignition[1255]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:20:23.015708 ignition[1255]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:20:23.015708 ignition[1255]: INFO : files: files passed Mar 17 18:20:23.015708 ignition[1255]: INFO : Ignition finished 
successfully Mar 17 18:20:23.043502 kernel: audit: type=1130 audit(1742235623.029:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.022114 systemd[1]: Finished ignition-files.service. Mar 17 18:20:23.050501 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:20:23.054589 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:20:23.058543 systemd[1]: Starting ignition-quench.service... Mar 17 18:20:23.064590 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:20:23.064956 systemd[1]: Finished ignition-quench.service. Mar 17 18:20:23.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.077696 kernel: audit: type=1130 audit(1742235623.067:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.083600 initrd-setup-root-after-ignition[1280]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:20:23.087810 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:20:23.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.090264 systemd[1]: Reached target ignition-complete.target. Mar 17 18:20:23.096112 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:20:23.124520 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:20:23.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.125717 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:20:23.126247 systemd[1]: Reached target initrd-fs.target. Mar 17 18:20:23.126323 systemd[1]: Reached target initrd.target. Mar 17 18:20:23.127721 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:20:23.129170 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:20:23.157509 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:20:23.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:23.162025 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:20:23.182961 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:20:23.186170 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:20:23.189528 systemd[1]: Stopped target timers.target. Mar 17 18:20:23.192335 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:20:23.194353 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:20:23.197596 systemd[1]: Stopped target initrd.target. Mar 17 18:20:23.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.200365 systemd[1]: Stopped target basic.target. Mar 17 18:20:23.203249 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:20:23.206507 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:20:23.209857 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:20:23.213300 systemd[1]: Stopped target remote-fs.target. Mar 17 18:20:23.216289 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:20:23.219423 systemd[1]: Stopped target sysinit.target. Mar 17 18:20:23.223507 systemd[1]: Stopped target local-fs.target. Mar 17 18:20:23.228303 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:20:23.231464 systemd[1]: Stopped target swap.target. Mar 17 18:20:23.236105 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:20:23.238015 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:20:23.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.241215 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:20:23.244256 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:20:23.246240 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:20:23.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.249325 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:20:23.251685 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:20:23.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.255338 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:20:23.257303 systemd[1]: Stopped ignition-files.service. Mar 17 18:20:23.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.261705 systemd[1]: Stopping ignition-mount.service... Mar 17 18:20:23.263475 systemd[1]: Stopping iscsiuio.service... Mar 17 18:20:23.271180 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:20:23.282278 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:20:23.284417 systemd[1]: Stopped systemd-udev-trigger.service. 
Mar 17 18:20:23.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.287730 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:20:23.292752 ignition[1293]: INFO : Ignition 2.14.0 Mar 17 18:20:23.294620 ignition[1293]: INFO : Stage: umount Mar 17 18:20:23.296205 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 18:20:23.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.299760 ignition[1293]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:20:23.302640 ignition[1293]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Mar 17 18:20:23.317461 ignition[1293]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 18:20:23.319860 ignition[1293]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 18:20:23.322894 ignition[1293]: INFO : PUT result: OK Mar 17 18:20:23.325967 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 18:20:23.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.326207 systemd[1]: Stopped iscsiuio.service. Mar 17 18:20:23.333562 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:20:23.335902 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:20:23.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.344264 ignition[1293]: INFO : umount: umount passed Mar 17 18:20:23.346173 ignition[1293]: INFO : Ignition finished successfully Mar 17 18:20:23.349251 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:20:23.351120 systemd[1]: Stopped ignition-mount.service. Mar 17 18:20:23.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.354220 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:20:23.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.354322 systemd[1]: Stopped ignition-disks.service. Mar 17 18:20:23.357625 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:20:23.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.357745 systemd[1]: Stopped ignition-kargs.service. 
Mar 17 18:20:23.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.365177 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 18:20:23.365273 systemd[1]: Stopped ignition-fetch.service. Mar 17 18:20:23.366889 systemd[1]: Stopped target network.target. Mar 17 18:20:23.371826 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:20:23.371948 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:20:23.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.381741 systemd[1]: Stopped target paths.target. Mar 17 18:20:23.384552 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:20:23.388713 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:20:23.391916 systemd[1]: Stopped target slices.target. Mar 17 18:20:23.394588 systemd[1]: Stopped target sockets.target. Mar 17 18:20:23.401208 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:20:23.401264 systemd[1]: Closed iscsid.socket. Mar 17 18:20:23.405238 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:20:23.405323 systemd[1]: Closed iscsiuio.socket. Mar 17 18:20:23.413177 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:20:23.413269 systemd[1]: Stopped ignition-setup.service. Mar 17 18:20:23.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.418014 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:20:23.421204 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:20:23.424727 systemd-networkd[1100]: eth0: DHCPv6 lease lost Mar 17 18:20:23.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.426121 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:20:23.427251 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:20:23.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.427491 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:20:23.430844 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:20:23.432648 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:20:23.441146 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:20:23.443021 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:20:23.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.444000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:20:23.446519 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:20:23.446606 systemd[1]: Closed systemd-networkd.socket. 
Mar 17 18:20:23.448000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:20:23.449899 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:20:23.453250 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:20:23.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.457505 systemd[1]: Stopping network-cleanup.service... Mar 17 18:20:23.459044 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:20:23.459170 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:20:23.461020 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:20:23.461109 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:20:23.462939 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:20:23.463027 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:20:23.465016 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:20:23.469414 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:20:23.493362 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:20:23.495950 systemd[1]: Stopped network-cleanup.service. Mar 17 18:20:23.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.500358 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:20:23.500639 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:20:23.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.503340 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:20:23.503426 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:20:23.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.509705 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:20:23.509778 systemd[1]: Closed systemd-udevd-kernel.socket. 
Mar 17 18:20:23.514119 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:20:23.514203 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:20:23.521519 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:20:23.521604 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:20:23.523250 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:20:23.523331 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:20:23.530381 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:20:23.547485 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 18:20:23.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.547605 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Mar 17 18:20:23.552505 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:20:23.552599 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:20:23.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.559762 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:20:23.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.559871 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:20:23.563911 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 18:20:23.577487 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:20:23.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:23.578096 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:20:23.578583 systemd[1]: Reached target initrd-switch-root.target. Mar 17 18:20:23.582165 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:20:23.601299 systemd[1]: Switching root. Mar 17 18:20:23.624769 iscsid[1105]: iscsid shutting down. Mar 17 18:20:23.626433 systemd-journald[309]: Received SIGTERM from PID 1 (systemd). Mar 17 18:20:23.626509 systemd-journald[309]: Journal stopped Mar 17 18:20:28.679272 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:20:28.680792 kernel: SELinux: Class anon_inode not defined in policy. 
Mar 17 18:20:28.680830 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:20:28.680862 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:20:28.680893 kernel: SELinux: policy capability open_perms=1 Mar 17 18:20:28.680922 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:20:28.680953 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:20:28.680983 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:20:28.681017 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:20:28.681052 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:20:28.681082 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:20:28.681112 systemd[1]: Successfully loaded SELinux policy in 111.045ms. Mar 17 18:20:28.681176 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.699ms. Mar 17 18:20:28.681211 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:20:28.681243 systemd[1]: Detected virtualization amazon. Mar 17 18:20:28.681275 systemd[1]: Detected architecture arm64. Mar 17 18:20:28.681306 systemd[1]: Detected first boot. Mar 17 18:20:28.681340 systemd[1]: Initializing machine ID from VM UUID. Mar 17 18:20:28.681371 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 18:20:28.681403 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:20:28.681439 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:20:28.681474 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:20:28.681508 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:20:28.681541 kernel: kauditd_printk_skb: 56 callbacks suppressed Mar 17 18:20:28.681571 kernel: audit: type=1334 audit(1742235628.279:88): prog-id=12 op=LOAD Mar 17 18:20:28.681603 kernel: audit: type=1334 audit(1742235628.279:89): prog-id=3 op=UNLOAD Mar 17 18:20:28.681634 kernel: audit: type=1334 audit(1742235628.281:90): prog-id=13 op=LOAD Mar 17 18:20:28.681683 kernel: audit: type=1334 audit(1742235628.283:91): prog-id=14 op=LOAD Mar 17 18:20:28.681719 kernel: audit: type=1334 audit(1742235628.283:92): prog-id=4 op=UNLOAD Mar 17 18:20:28.681756 kernel: audit: type=1334 audit(1742235628.283:93): prog-id=5 op=UNLOAD Mar 17 18:20:28.681786 kernel: audit: type=1334 audit(1742235628.288:94): prog-id=15 op=LOAD Mar 17 18:20:28.681817 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 18:20:28.681854 kernel: audit: type=1334 audit(1742235628.288:95): prog-id=12 op=UNLOAD Mar 17 18:20:28.681882 kernel: audit: type=1334 audit(1742235628.290:96): prog-id=16 op=LOAD Mar 17 18:20:28.681912 systemd[1]: Stopped iscsid.service. 
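[editor's note] The "SELinux: policy capability ...=1" lines above are the kernel echoing what the freshly loaded policy enables; once the system is up, the same flags can be read back from selinuxfs. A small sketch, assuming an SELinux-enabled host with /sys/fs/selinux mounted:

    import pathlib

    # Each file under policy_capabilities holds "0" or "1", mirroring the
    # "SELinux: policy capability <name>=<value>" boot messages above.
    capdir = pathlib.Path("/sys/fs/selinux/policy_capabilities")
    for cap in sorted(capdir.iterdir()):
        print(cap.name, "=", cap.read_text().strip())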
Mar 17 18:20:28.681940 kernel: audit: type=1334 audit(1742235628.292:97): prog-id=17 op=LOAD Mar 17 18:20:28.681971 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 18:20:28.682001 systemd[1]: Stopped initrd-switch-root.service. Mar 17 18:20:28.682041 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 18:20:28.682071 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 18:20:28.682104 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 18:20:28.682142 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Mar 17 18:20:28.682188 systemd[1]: Created slice system-getty.slice. Mar 17 18:20:28.682233 systemd[1]: Created slice system-modprobe.slice. Mar 17 18:20:28.682265 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:20:28.682301 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:20:28.682332 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:20:28.682372 systemd[1]: Created slice user.slice. Mar 17 18:20:28.682402 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:20:28.682435 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:20:28.682470 systemd[1]: Set up automount boot.automount. Mar 17 18:20:28.682501 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:20:28.682540 systemd[1]: Stopped target initrd-switch-root.target. Mar 17 18:20:28.682569 systemd[1]: Stopped target initrd-fs.target. Mar 17 18:20:28.682600 systemd[1]: Stopped target initrd-root-fs.target. Mar 17 18:20:28.682631 systemd[1]: Reached target integritysetup.target. Mar 17 18:20:28.682682 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:20:28.682730 systemd[1]: Reached target remote-fs.target. Mar 17 18:20:28.682766 systemd[1]: Reached target slices.target. Mar 17 18:20:28.682796 systemd[1]: Reached target swap.target. Mar 17 18:20:28.682825 systemd[1]: Reached target torcx.target. Mar 17 18:20:28.682855 systemd[1]: Reached target veritysetup.target. Mar 17 18:20:28.682884 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:20:28.682925 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:20:28.682958 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:20:28.682988 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:20:28.683017 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:20:28.683046 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:20:28.683080 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:20:28.683111 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:20:28.683142 systemd[1]: Mounting media.mount... Mar 17 18:20:28.683174 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:20:28.683205 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:20:28.683248 systemd[1]: Mounting tmp.mount... Mar 17 18:20:28.683281 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:20:28.683311 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:20:28.683345 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:20:28.683380 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:20:28.683412 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:20:28.683443 systemd[1]: Starting modprobe@drm.service... Mar 17 18:20:28.683475 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:20:28.683507 systemd[1]: Starting modprobe@fuse.service... 
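[editor's note] The slice names above (system-addon\x2dconfig.slice, system-coreos\x2dmetadata\x2dsshkeys.slice, ...) use systemd's unit-name escaping, where bytes outside the allowed set become a \x hex escape, so "-" turns into "\x2d". A rough sketch of that rule in Python; this is a simplification, not the full systemd-escape algorithm.

    def unit_escape(name: str) -> str:
        # Simplified systemd unit-name escaping: ASCII alphanumerics, ':', '_' and '.'
        # pass through, everything else becomes \x<hex>.
        # (The real systemd-escape additionally maps '/' to '-' and escapes a leading '.')
        allowed = b"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_."
        return "".join(chr(b) if b in allowed else "\\x%02x" % b for b in name.encode())

    print(unit_escape("coreos-metadata-sshkeys"))  # -> coreos\x2dmetadata\x2dsshkeys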
Mar 17 18:20:28.683540 systemd[1]: Starting modprobe@loop.service... Mar 17 18:20:28.683573 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:20:28.683602 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:20:28.683631 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 18:20:28.683680 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:20:28.683714 kernel: fuse: init (API version 7.34) Mar 17 18:20:28.683759 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 18:20:28.683793 kernel: loop: module loaded Mar 17 18:20:28.683821 systemd[1]: Stopped systemd-journald.service. Mar 17 18:20:28.683850 systemd[1]: Starting systemd-journald.service... Mar 17 18:20:28.683896 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:20:28.683930 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:20:28.683960 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:20:28.683993 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:20:28.684024 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:20:28.684053 systemd[1]: Stopped verity-setup.service. Mar 17 18:20:28.684082 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:20:28.684113 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:20:28.684142 systemd[1]: Mounted media.mount. Mar 17 18:20:28.684170 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:20:28.684200 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:20:28.684231 systemd[1]: Mounted tmp.mount. Mar 17 18:20:28.684265 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:20:28.684294 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:20:28.684323 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:20:28.684354 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:20:28.684384 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:20:28.684416 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:20:28.684445 systemd[1]: Finished modprobe@drm.service. Mar 17 18:20:28.684474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:20:28.684507 systemd-journald[1408]: Journal started Mar 17 18:20:28.684604 systemd-journald[1408]: Runtime Journal (/run/log/journal/ec2c3ea83c47698eaedd97982b56a23e) is 8.0M, max 75.4M, 67.4M free. Mar 17 18:20:28.687184 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:20:24.310000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:20:24.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:20:24.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:20:24.437000 audit: BPF prog-id=10 op=LOAD Mar 17 18:20:24.437000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:20:24.437000 audit: BPF prog-id=11 op=LOAD Mar 17 18:20:28.696939 systemd[1]: Started systemd-journald.service. 
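[editor's note] With systemd-journald.service started above, the entries in this transcript live in the runtime journal under /run/log/journal/<machine-id>. A small sketch of pulling one unit's messages back out with the python-systemd bindings; the bindings being installed and the unit chosen are assumptions, not part of this image's initramfs.

    from systemd import journal  # assumes the python-systemd package is available

    j = journal.Reader()
    j.this_boot()                                        # restrict to the current boot
    j.add_match(_SYSTEMD_UNIT="ignition-files.service")  # unit name taken from the log above
    for entry in j:
        print(entry["__REALTIME_TIMESTAMP"], entry["MESSAGE"])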
Mar 17 18:20:24.437000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:20:24.572000 audit[1326]: AVC avc: denied { associate } for pid=1326 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:20:24.572000 audit[1326]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458b2 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1309 pid=1326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:20:24.572000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:20:24.576000 audit[1326]: AVC avc: denied { associate } for pid=1326 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:20:24.576000 audit[1326]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145989 a2=1ed a3=0 items=2 ppid=1309 pid=1326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:20:24.576000 audit: CWD cwd="/" Mar 17 18:20:24.576000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:20:24.576000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:20:24.576000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:20:28.279000 audit: BPF prog-id=12 op=LOAD Mar 17 18:20:28.279000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:20:28.281000 audit: BPF prog-id=13 op=LOAD Mar 17 18:20:28.283000 audit: BPF prog-id=14 op=LOAD Mar 17 18:20:28.283000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:20:28.283000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:20:28.288000 audit: BPF prog-id=15 op=LOAD Mar 17 18:20:28.288000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:20:28.290000 audit: BPF prog-id=16 op=LOAD Mar 17 18:20:28.292000 audit: BPF prog-id=17 op=LOAD Mar 17 18:20:28.292000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:20:28.292000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:20:28.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:28.313000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:20:28.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.570000 audit: BPF prog-id=18 op=LOAD Mar 17 18:20:28.570000 audit: BPF prog-id=19 op=LOAD Mar 17 18:20:28.570000 audit: BPF prog-id=20 op=LOAD Mar 17 18:20:28.570000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:20:28.570000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:20:28.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:28.674000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:20:28.674000 audit[1408]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe92df6a0 a2=4000 a3=1 items=0 ppid=1 pid=1408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:20:28.674000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:20:28.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:24.569835 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:20:28.278163 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:20:28.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:28.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:24.571021 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:20:28.278184 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Mar 17 18:20:24.571071 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:20:28.296082 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:20:24.571141 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:20:28.694214 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:20:24.571167 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:20:28.694543 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:20:24.571237 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:20:28.696950 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:20:24.571267 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:20:28.697243 systemd[1]: Finished modprobe@loop.service. Mar 17 18:20:24.571686 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:20:28.703893 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:20:24.571767 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:20:28.707007 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:20:24.571820 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:20:28.709320 systemd[1]: Reached target network-pre.target. Mar 17 18:20:24.573164 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:20:28.713788 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:20:24.573242 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:20:28.722850 systemd[1]: Mounting sys-kernel-config.mount... 
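[editor's note] The torcx-generator lines above walk the store paths, select the vendor profile at /usr/share/torcx/profiles/vendor.json, and end up unpacking the docker archive with reference com.coreos.cl. For orientation, a vendor profile of that era is a small JSON manifest roughly shaped as below; the structure is reconstructed from the image name and reference in the log and from torcx documentation, not copied from this machine, and may differ in detail.

    import json

    # Rough shape of /usr/share/torcx/profiles/vendor.json suggested by the log above.
    vendor_profile = {
        "kind": "profile-manifest-v0",
        "value": {
            "images": [
                {"name": "docker", "reference": "com.coreos.cl"},
            ]
        },
    }
    print(json.dumps(vendor_profile, indent=2))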
Mar 17 18:20:24.573286 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:20:28.724378 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:20:24.573327 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:20:24.573372 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:20:24.573410 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:24Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:20:27.493313 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:20:27.493907 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:20:27.494145 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:20:28.729524 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:20:27.494587 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:20:27.494716 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:20:28.733508 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:20:27.494853 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2025-03-17T18:20:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:20:28.735779 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:20:28.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:28.740996 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:20:28.742834 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:20:28.746543 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:20:28.749431 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:20:28.753015 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:20:28.758450 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:20:28.770484 systemd-journald[1408]: Time spent on flushing to /var/log/journal/ec2c3ea83c47698eaedd97982b56a23e is 66.215ms for 1117 entries. Mar 17 18:20:28.770484 systemd-journald[1408]: System Journal (/var/log/journal/ec2c3ea83c47698eaedd97982b56a23e) is 8.0M, max 195.6M, 187.6M free. Mar 17 18:20:28.872811 systemd-journald[1408]: Received client request to flush runtime journal. Mar 17 18:20:28.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.797483 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:20:28.799489 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:20:28.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.816048 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:20:28.823951 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:20:28.832383 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:20:28.874386 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:20:28.917853 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:20:28.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.921946 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:20:28.932015 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:20:28.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:28.936646 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:20:28.952120 udevadm[1446]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:20:29.010680 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:20:29.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 18:20:29.620747 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:20:29.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:29.623000 audit: BPF prog-id=21 op=LOAD Mar 17 18:20:29.623000 audit: BPF prog-id=22 op=LOAD Mar 17 18:20:29.623000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:20:29.623000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:20:29.626235 systemd[1]: Starting systemd-udevd.service... Mar 17 18:20:29.663263 systemd-udevd[1447]: Using default interface naming scheme 'v252'. Mar 17 18:20:29.710898 systemd[1]: Started systemd-udevd.service. Mar 17 18:20:29.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:29.712000 audit: BPF prog-id=23 op=LOAD Mar 17 18:20:29.717877 systemd[1]: Starting systemd-networkd.service... Mar 17 18:20:29.725000 audit: BPF prog-id=24 op=LOAD Mar 17 18:20:29.725000 audit: BPF prog-id=25 op=LOAD Mar 17 18:20:29.726000 audit: BPF prog-id=26 op=LOAD Mar 17 18:20:29.728929 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:20:29.799398 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Mar 17 18:20:29.815588 systemd[1]: Started systemd-userdbd.service. Mar 17 18:20:29.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:29.862569 (udev-worker)[1448]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:20:29.963901 systemd-networkd[1457]: lo: Link UP Mar 17 18:20:29.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:29.963927 systemd-networkd[1457]: lo: Gained carrier Mar 17 18:20:29.964873 systemd-networkd[1457]: Enumeration completed Mar 17 18:20:29.965044 systemd[1]: Started systemd-networkd.service. Mar 17 18:20:29.969158 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:20:29.972553 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:20:29.977724 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:20:29.978822 systemd-networkd[1457]: eth0: Link UP Mar 17 18:20:29.979254 systemd-networkd[1457]: eth0: Gained carrier Mar 17 18:20:29.993886 systemd-networkd[1457]: eth0: DHCPv4 address 172.31.30.28/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 17 18:20:30.170037 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:20:30.172825 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:20:30.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:30.177469 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:20:30.235986 lvm[1566]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 17 18:20:30.271256 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:20:30.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:30.273408 systemd[1]: Reached target cryptsetup.target. Mar 17 18:20:30.277224 systemd[1]: Starting lvm2-activation.service... Mar 17 18:20:30.285686 lvm[1567]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:20:30.325388 systemd[1]: Finished lvm2-activation.service. Mar 17 18:20:30.327278 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:20:30.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:30.328962 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:20:30.329019 systemd[1]: Reached target local-fs.target. Mar 17 18:20:30.330604 systemd[1]: Reached target machines.target. Mar 17 18:20:30.334471 systemd[1]: Starting ldconfig.service... Mar 17 18:20:30.336901 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:20:30.337018 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:20:30.339218 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:20:30.343343 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:20:30.348374 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:20:30.353640 systemd[1]: Starting systemd-sysext.service... Mar 17 18:20:30.373029 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1569 (bootctl) Mar 17 18:20:30.375452 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:20:30.388287 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:20:30.398947 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:20:30.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:30.401680 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:20:30.402041 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:20:30.424713 kernel: loop0: detected capacity change from 0 to 189592 Mar 17 18:20:30.516068 systemd-fsck[1579]: fsck.fat 4.2 (2021-01-31) Mar 17 18:20:30.516068 systemd-fsck[1579]: /dev/nvme0n1p1: 236 files, 117179/258078 clusters Mar 17 18:20:30.518647 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:20:30.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:30.523434 systemd[1]: Mounting boot.mount... Mar 17 18:20:30.570074 systemd[1]: Mounted boot.mount. 
Mar 17 18:20:30.587694 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:20:30.607522 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:20:30.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:30.616718 kernel: loop1: detected capacity change from 0 to 189592 Mar 17 18:20:30.639889 (sd-sysext)[1594]: Using extensions 'kubernetes'. Mar 17 18:20:30.642006 (sd-sysext)[1594]: Merged extensions into '/usr'. Mar 17 18:20:30.669859 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:20:30.670964 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 18:20:30.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:30.681804 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:20:30.683834 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:20:30.686523 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:20:30.690342 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:20:30.696290 systemd[1]: Starting modprobe@loop.service... Mar 17 18:20:30.697946 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:20:30.698219 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:20:30.702454 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:20:30.702813 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:20:30.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:30.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:30.705508 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:20:30.708648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:20:30.708959 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:20:30.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:30.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:30.715506 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:20:30.720765 systemd[1]: Finished systemd-sysext.service. Mar 17 18:20:30.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 18:20:30.723117 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:20:30.723412 systemd[1]: Finished modprobe@loop.service. Mar 17 18:20:30.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:30.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:30.727951 systemd[1]: Starting ensure-sysext.service... Mar 17 18:20:30.729564 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:20:30.731795 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:20:30.746610 systemd[1]: Reloading. Mar 17 18:20:30.769381 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:20:30.781435 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:20:30.789446 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:20:30.879340 /usr/lib/systemd/system-generators/torcx-generator[1622]: time="2025-03-17T18:20:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:20:30.879409 /usr/lib/systemd/system-generators/torcx-generator[1622]: time="2025-03-17T18:20:30Z" level=info msg="torcx already run" Mar 17 18:20:31.144001 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:20:31.144039 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:20:31.185317 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:20:31.254293 ldconfig[1568]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Mar 17 18:20:31.325000 audit: BPF prog-id=27 op=LOAD Mar 17 18:20:31.326000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:20:31.326000 audit: BPF prog-id=28 op=LOAD Mar 17 18:20:31.326000 audit: BPF prog-id=29 op=LOAD Mar 17 18:20:31.326000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:20:31.326000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:20:31.328000 audit: BPF prog-id=30 op=LOAD Mar 17 18:20:31.328000 audit: BPF prog-id=24 op=UNLOAD Mar 17 18:20:31.328000 audit: BPF prog-id=31 op=LOAD Mar 17 18:20:31.328000 audit: BPF prog-id=32 op=LOAD Mar 17 18:20:31.328000 audit: BPF prog-id=25 op=UNLOAD Mar 17 18:20:31.328000 audit: BPF prog-id=26 op=UNLOAD Mar 17 18:20:31.329000 audit: BPF prog-id=33 op=LOAD Mar 17 18:20:31.329000 audit: BPF prog-id=34 op=LOAD Mar 17 18:20:31.329000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:20:31.329000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:20:31.332000 audit: BPF prog-id=35 op=LOAD Mar 17 18:20:31.332000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:20:31.344813 systemd[1]: Finished ldconfig.service. Mar 17 18:20:31.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.348644 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:20:31.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.358056 systemd[1]: Starting audit-rules.service... Mar 17 18:20:31.361899 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:20:31.370014 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:20:31.372000 audit: BPF prog-id=36 op=LOAD Mar 17 18:20:31.375685 systemd[1]: Starting systemd-resolved.service... Mar 17 18:20:31.377000 audit: BPF prog-id=37 op=LOAD Mar 17 18:20:31.381549 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:20:31.386761 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:20:31.402616 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:20:31.407299 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:20:31.411282 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:20:31.415330 systemd[1]: Starting modprobe@loop.service... Mar 17 18:20:31.417244 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:20:31.417558 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:20:31.419740 systemd[1]: Finished clean-ca-certificates.service. Mar 17 18:20:31.420000 audit[1684]: SYSTEM_BOOT pid=1684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:31.428158 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:20:31.432356 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:20:31.432772 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:20:31.433007 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:20:31.433214 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:20:31.435631 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:20:31.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.446080 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:20:31.452820 systemd[1]: Starting modprobe@drm.service... Mar 17 18:20:31.454481 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:20:31.454875 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:20:31.455252 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:20:31.459613 systemd[1]: Finished ensure-sysext.service. Mar 17 18:20:31.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.462650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:20:31.462992 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:20:31.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.465926 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:20:31.466241 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:20:31.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:31.468221 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:20:31.477377 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:20:31.477733 systemd[1]: Finished modprobe@loop.service. Mar 17 18:20:31.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.480104 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:20:31.500333 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:20:31.500705 systemd[1]: Finished modprobe@drm.service. Mar 17 18:20:31.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.503976 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 18:20:31.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.508399 systemd[1]: Starting systemd-update-done.service... Mar 17 18:20:31.531329 systemd[1]: Finished systemd-update-done.service. Mar 17 18:20:31.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:31.577000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:20:31.577000 audit[1703]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff3467dc0 a2=420 a3=0 items=0 ppid=1678 pid=1703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:20:31.577000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:20:31.580308 augenrules[1703]: No rules Mar 17 18:20:31.581868 systemd[1]: Finished audit-rules.service. Mar 17 18:20:31.595775 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:20:31.597718 systemd[1]: Reached target time-set.target. Mar 17 18:20:31.602739 systemd-resolved[1682]: Positive Trust Anchors: Mar 17 18:20:31.602763 systemd-resolved[1682]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:20:31.602815 systemd-resolved[1682]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:20:31.624915 systemd-resolved[1682]: Defaulting to hostname 'linux'. Mar 17 18:20:31.627856 systemd[1]: Started systemd-resolved.service. Mar 17 18:20:31.629630 systemd[1]: Reached target network.target. Mar 17 18:20:31.631182 systemd[1]: Reached target nss-lookup.target. Mar 17 18:20:31.632853 systemd[1]: Reached target sysinit.target. Mar 17 18:20:31.634519 systemd[1]: Started motdgen.path. Mar 17 18:20:31.635971 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:20:31.638290 systemd[1]: Started logrotate.timer. Mar 17 18:20:31.639925 systemd[1]: Started mdadm.timer. Mar 17 18:20:31.641263 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:20:31.642903 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:20:31.642951 systemd[1]: Reached target paths.target. Mar 17 18:20:31.644400 systemd[1]: Reached target timers.target. Mar 17 18:20:31.646332 systemd[1]: Listening on dbus.socket. Mar 17 18:20:31.649882 systemd[1]: Starting docker.socket... Mar 17 18:20:31.656695 systemd[1]: Listening on sshd.socket. Mar 17 18:20:31.658919 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:20:31.660009 systemd[1]: Listening on docker.socket. Mar 17 18:20:31.661912 systemd[1]: Reached target sockets.target. Mar 17 18:20:31.663690 systemd[1]: Reached target basic.target. Mar 17 18:20:31.665398 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:20:31.665568 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:20:31.667874 systemd[1]: Starting containerd.service... Mar 17 18:20:31.671993 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Mar 17 18:20:31.677249 systemd[1]: Starting dbus.service... Mar 17 18:20:31.680900 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:20:31.684901 systemd[1]: Starting extend-filesystems.service... Mar 17 18:20:31.686695 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:20:31.691997 systemd[1]: Starting motdgen.service... Mar 17 18:20:31.695895 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:20:31.699949 systemd[1]: Starting sshd-keygen.service... Mar 17 18:20:31.708990 systemd[1]: Starting systemd-logind.service... Mar 17 18:20:31.710499 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 18:20:31.743194 jq[1714]: false Mar 17 18:20:31.743520 jq[1721]: true Mar 17 18:20:31.710678 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:20:31.711564 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 18:20:31.713222 systemd[1]: Starting update-engine.service... Mar 17 18:20:31.717075 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:20:31.745844 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:20:31.746238 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:20:31.759154 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:20:31.759491 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 18:20:31.783417 systemd-timesyncd[1683]: Contacted time server 137.110.222.27:123 (0.flatcar.pool.ntp.org). Mar 17 18:20:31.783581 systemd-timesyncd[1683]: Initial clock synchronization to Mon 2025-03-17 18:20:31.618826 UTC. Mar 17 18:20:31.793952 jq[1724]: true Mar 17 18:20:31.829360 dbus-daemon[1713]: [system] SELinux support is enabled Mar 17 18:20:31.831846 systemd[1]: Started dbus.service. Mar 17 18:20:31.837006 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:20:31.837061 systemd[1]: Reached target system-config.target. Mar 17 18:20:31.838876 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:20:31.838939 systemd[1]: Reached target user-config.target. Mar 17 18:20:31.845406 extend-filesystems[1715]: Found loop1 Mar 17 18:20:31.847293 extend-filesystems[1715]: Found nvme0n1 Mar 17 18:20:31.847293 extend-filesystems[1715]: Found nvme0n1p1 Mar 17 18:20:31.847293 extend-filesystems[1715]: Found nvme0n1p2 Mar 17 18:20:31.847293 extend-filesystems[1715]: Found nvme0n1p3 Mar 17 18:20:31.847293 extend-filesystems[1715]: Found usr Mar 17 18:20:31.847293 extend-filesystems[1715]: Found nvme0n1p4 Mar 17 18:20:31.847293 extend-filesystems[1715]: Found nvme0n1p6 Mar 17 18:20:31.847293 extend-filesystems[1715]: Found nvme0n1p7 Mar 17 18:20:31.859451 extend-filesystems[1715]: Found nvme0n1p9 Mar 17 18:20:31.859451 extend-filesystems[1715]: Checking size of /dev/nvme0n1p9 Mar 17 18:20:31.866493 dbus-daemon[1713]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1457 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 18:20:31.870195 dbus-daemon[1713]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 18:20:31.872448 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:20:31.872821 systemd[1]: Finished motdgen.service. Mar 17 18:20:31.881060 systemd[1]: Starting systemd-hostnamed.service... Mar 17 18:20:31.929989 extend-filesystems[1715]: Resized partition /dev/nvme0n1p9 Mar 17 18:20:31.941849 bash[1760]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:20:31.943190 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 18:20:31.945878 systemd-networkd[1457]: eth0: Gained IPv6LL Mar 17 18:20:31.948746 systemd[1]: Finished systemd-networkd-wait-online.service. 
Mar 17 18:20:31.951057 extend-filesystems[1764]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 18:20:31.951351 systemd[1]: Reached target network-online.target. Mar 17 18:20:31.957054 systemd[1]: Started amazon-ssm-agent.service. Mar 17 18:20:31.961950 systemd[1]: Starting kubelet.service... Mar 17 18:20:31.965963 systemd[1]: Started nvidia.service. Mar 17 18:20:31.978870 update_engine[1720]: I0317 18:20:31.977985 1720 main.cc:92] Flatcar Update Engine starting Mar 17 18:20:31.987765 systemd[1]: Started update-engine.service. Mar 17 18:20:31.988906 update_engine[1720]: I0317 18:20:31.987893 1720 update_check_scheduler.cc:74] Next update check in 8m21s Mar 17 18:20:31.993596 systemd[1]: Started locksmithd.service. Mar 17 18:20:32.032757 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Mar 17 18:20:32.091729 env[1727]: time="2025-03-17T18:20:32.091602146Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:20:32.158942 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Mar 17 18:20:32.195268 extend-filesystems[1764]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 17 18:20:32.195268 extend-filesystems[1764]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 18:20:32.195268 extend-filesystems[1764]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Mar 17 18:20:32.202187 extend-filesystems[1715]: Resized filesystem in /dev/nvme0n1p9 Mar 17 18:20:32.212052 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:20:32.212437 systemd[1]: Finished extend-filesystems.service. Mar 17 18:20:32.271760 systemd-logind[1719]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 18:20:32.280875 systemd-logind[1719]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 17 18:20:32.282323 systemd-logind[1719]: New seat seat0. Mar 17 18:20:32.299144 systemd[1]: Started systemd-logind.service. Mar 17 18:20:32.308908 amazon-ssm-agent[1765]: 2025/03/17 18:20:32 Failed to load instance info from vault. RegistrationKey does not exist. Mar 17 18:20:32.316106 amazon-ssm-agent[1765]: Initializing new seelog logger Mar 17 18:20:32.325143 amazon-ssm-agent[1765]: New Seelog Logger Creation Complete Mar 17 18:20:32.328878 amazon-ssm-agent[1765]: 2025/03/17 18:20:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 18:20:32.329048 amazon-ssm-agent[1765]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 18:20:32.329447 amazon-ssm-agent[1765]: 2025/03/17 18:20:32 processing appconfig overrides Mar 17 18:20:32.388091 env[1727]: time="2025-03-17T18:20:32.388000078Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:20:32.392030 env[1727]: time="2025-03-17T18:20:32.391954642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:20:32.406737 env[1727]: time="2025-03-17T18:20:32.401889606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:20:32.406737 env[1727]: time="2025-03-17T18:20:32.406737674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 17 18:20:32.407226 env[1727]: time="2025-03-17T18:20:32.407153779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:20:32.407307 env[1727]: time="2025-03-17T18:20:32.407223288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:20:32.407307 env[1727]: time="2025-03-17T18:20:32.407258008Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:20:32.407307 env[1727]: time="2025-03-17T18:20:32.407282149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:20:32.407477 env[1727]: time="2025-03-17T18:20:32.407455829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:20:32.408025 env[1727]: time="2025-03-17T18:20:32.407963481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:20:32.408282 env[1727]: time="2025-03-17T18:20:32.408230365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:20:32.408355 env[1727]: time="2025-03-17T18:20:32.408276920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:20:32.408430 env[1727]: time="2025-03-17T18:20:32.408402258Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:20:32.408488 env[1727]: time="2025-03-17T18:20:32.408428573Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:20:32.418392 systemd[1]: nvidia.service: Deactivated successfully. Mar 17 18:20:32.427414 env[1727]: time="2025-03-17T18:20:32.427344456Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:20:32.427557 env[1727]: time="2025-03-17T18:20:32.427414189Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:20:32.427557 env[1727]: time="2025-03-17T18:20:32.427488835Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:20:32.427681 env[1727]: time="2025-03-17T18:20:32.427571896Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:20:32.427744 env[1727]: time="2025-03-17T18:20:32.427605652Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:20:32.427800 env[1727]: time="2025-03-17T18:20:32.427753074Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:20:32.427800 env[1727]: time="2025-03-17T18:20:32.427787406Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Mar 17 18:20:32.428327 env[1727]: time="2025-03-17T18:20:32.428278955Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:20:32.428426 env[1727]: time="2025-03-17T18:20:32.428337487Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:20:32.428426 env[1727]: time="2025-03-17T18:20:32.428371454Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:20:32.428426 env[1727]: time="2025-03-17T18:20:32.428410734Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:20:32.428574 env[1727]: time="2025-03-17T18:20:32.428441105Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:20:32.428729 env[1727]: time="2025-03-17T18:20:32.428687726Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:20:32.428931 env[1727]: time="2025-03-17T18:20:32.428879212Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:20:32.429444 dbus-daemon[1713]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 18:20:32.429714 systemd[1]: Started systemd-hostnamed.service. Mar 17 18:20:32.430163 env[1727]: time="2025-03-17T18:20:32.429496370Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:20:32.430163 env[1727]: time="2025-03-17T18:20:32.429554361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:20:32.432184 env[1727]: time="2025-03-17T18:20:32.429585613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:20:32.432184 env[1727]: time="2025-03-17T18:20:32.431946502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:20:32.432184 env[1727]: time="2025-03-17T18:20:32.432097521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:20:32.432184 env[1727]: time="2025-03-17T18:20:32.432130960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:20:32.432471 env[1727]: time="2025-03-17T18:20:32.432162894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:20:32.432471 env[1727]: time="2025-03-17T18:20:32.432221484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:20:32.432471 env[1727]: time="2025-03-17T18:20:32.432253489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:20:32.432471 env[1727]: time="2025-03-17T18:20:32.432282226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:20:32.432471 env[1727]: time="2025-03-17T18:20:32.432311115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:20:32.432471 env[1727]: time="2025-03-17T18:20:32.432353298Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Mar 17 18:20:32.432814 env[1727]: time="2025-03-17T18:20:32.432624578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:20:32.432814 env[1727]: time="2025-03-17T18:20:32.432686095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:20:32.432814 env[1727]: time="2025-03-17T18:20:32.432718452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:20:32.432814 env[1727]: time="2025-03-17T18:20:32.432751538Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:20:32.432814 env[1727]: time="2025-03-17T18:20:32.432785270Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:20:32.433054 env[1727]: time="2025-03-17T18:20:32.432811092Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:20:32.433054 env[1727]: time="2025-03-17T18:20:32.432847422Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:20:32.433054 env[1727]: time="2025-03-17T18:20:32.432914158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 18:20:32.433458 env[1727]: time="2025-03-17T18:20:32.433228408Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:20:32.433458 env[1727]: time="2025-03-17T18:20:32.433340829Z" level=info msg="Connect containerd service" Mar 17 18:20:32.433458 env[1727]: time="2025-03-17T18:20:32.433426276Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:20:32.434126 dbus-daemon[1713]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1755 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 18:20:32.439565 systemd[1]: Starting polkit.service... Mar 17 18:20:32.446215 env[1727]: time="2025-03-17T18:20:32.446070181Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:20:32.446403 env[1727]: time="2025-03-17T18:20:32.446290240Z" level=info msg="Start subscribing containerd event" Mar 17 18:20:32.446403 env[1727]: time="2025-03-17T18:20:32.446393364Z" level=info msg="Start recovering state" Mar 17 18:20:32.453924 env[1727]: time="2025-03-17T18:20:32.453793019Z" level=info msg="Start event monitor" Mar 17 18:20:32.453924 env[1727]: time="2025-03-17T18:20:32.453886670Z" level=info msg="Start snapshots syncer" Mar 17 18:20:32.453924 env[1727]: time="2025-03-17T18:20:32.453920320Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:20:32.453924 env[1727]: time="2025-03-17T18:20:32.453941534Z" level=info msg="Start streaming server" Mar 17 18:20:32.454717 env[1727]: time="2025-03-17T18:20:32.454640038Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:20:32.455039 env[1727]: time="2025-03-17T18:20:32.454862306Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:20:32.473056 systemd[1]: Started containerd.service. Mar 17 18:20:32.475414 env[1727]: time="2025-03-17T18:20:32.475051714Z" level=info msg="containerd successfully booted in 0.441780s" Mar 17 18:20:32.485897 polkitd[1804]: Started polkitd version 121 Mar 17 18:20:32.507854 polkitd[1804]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 18:20:32.508806 polkitd[1804]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 18:20:32.512499 polkitd[1804]: Finished loading, compiling and executing 2 rules Mar 17 18:20:32.513587 dbus-daemon[1713]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 18:20:32.513877 systemd[1]: Started polkit.service. Mar 17 18:20:32.516710 polkitd[1804]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 18:20:32.547851 systemd-hostnamed[1755]: Hostname set to (transient) Mar 17 18:20:32.547997 systemd-resolved[1682]: System hostname changed to 'ip-172-31-30-28'. 
Mar 17 18:20:32.751173 coreos-metadata[1712]: Mar 17 18:20:32.751 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 18:20:32.752236 coreos-metadata[1712]: Mar 17 18:20:32.752 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Mar 17 18:20:32.752984 coreos-metadata[1712]: Mar 17 18:20:32.752 INFO Fetch successful Mar 17 18:20:32.753098 coreos-metadata[1712]: Mar 17 18:20:32.752 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 18:20:32.760758 coreos-metadata[1712]: Mar 17 18:20:32.760 INFO Fetch successful Mar 17 18:20:32.764548 unknown[1712]: wrote ssh authorized keys file for user: core Mar 17 18:20:32.791793 update-ssh-keys[1869]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:20:32.792730 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Mar 17 18:20:33.005789 amazon-ssm-agent[1765]: 2025-03-17 18:20:32 INFO Create new startup processor Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:32 INFO [LongRunningPluginsManager] registered plugins: {} Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:32 INFO Initializing bookkeeping folders Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:32 INFO removing the completed state files Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:32 INFO Initializing bookkeeping folders for long running plugins Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO Initializing healthcheck folders for long running plugins Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO Initializing locations for inventory plugin Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO Initializing default location for custom inventory Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO Initializing default location for file inventory Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO Initializing default location for role inventory Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO Init the cloudwatchlogs publisher Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [instanceID=i-0c5f0603753ecbfef] Successfully loaded platform independent plugin aws:refreshAssociation Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [instanceID=i-0c5f0603753ecbfef] Successfully loaded platform independent plugin aws:configureDocker Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [instanceID=i-0c5f0603753ecbfef] Successfully loaded platform independent plugin aws:runDockerAction Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [instanceID=i-0c5f0603753ecbfef] Successfully loaded platform independent plugin aws:configurePackage Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [instanceID=i-0c5f0603753ecbfef] Successfully loaded platform independent plugin aws:downloadContent Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [instanceID=i-0c5f0603753ecbfef] Successfully loaded platform independent plugin aws:runDocument Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [instanceID=i-0c5f0603753ecbfef] Successfully loaded platform independent plugin aws:softwareInventory Mar 17 18:20:33.006907 
amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [instanceID=i-0c5f0603753ecbfef] Successfully loaded platform independent plugin aws:runPowerShellScript Mar 17 18:20:33.006907 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [instanceID=i-0c5f0603753ecbfef] Successfully loaded platform independent plugin aws:updateSsmAgent Mar 17 18:20:33.007973 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [instanceID=i-0c5f0603753ecbfef] Successfully loaded platform dependent plugin aws:runShellScript Mar 17 18:20:33.007973 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Mar 17 18:20:33.007973 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO OS: linux, Arch: arm64 Mar 17 18:20:33.019071 amazon-ssm-agent[1765]: datastore file /var/lib/amazon/ssm/i-0c5f0603753ecbfef/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Mar 17 18:20:33.104261 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessagingDeliveryService] Starting document processing engine... Mar 17 18:20:33.198066 locksmithd[1769]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:20:33.200412 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessagingDeliveryService] [EngineProcessor] Starting Mar 17 18:20:33.294799 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Mar 17 18:20:33.389297 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessagingDeliveryService] Starting message polling Mar 17 18:20:33.484020 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessagingDeliveryService] Starting send replies to MDS Mar 17 18:20:33.579018 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [instanceID=i-0c5f0603753ecbfef] Starting association polling Mar 17 18:20:33.674056 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Mar 17 18:20:33.769379 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessagingDeliveryService] [Association] Launching response handler Mar 17 18:20:33.864889 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Mar 17 18:20:33.960607 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Mar 17 18:20:34.056447 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Mar 17 18:20:34.152620 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessageGatewayService] Starting session document processing engine... Mar 17 18:20:34.248898 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessageGatewayService] [EngineProcessor] Starting Mar 17 18:20:34.345378 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Mar 17 18:20:34.391250 systemd[1]: Started kubelet.service. Mar 17 18:20:34.442167 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0c5f0603753ecbfef, requestId: 0dc36a96-6e9f-4f3a-ab56-d0bd8ab7195d Mar 17 18:20:34.538934 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [OfflineService] Starting document processing engine... 
Mar 17 18:20:34.636025 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [OfflineService] [EngineProcessor] Starting Mar 17 18:20:34.733165 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [OfflineService] [EngineProcessor] Initial processing Mar 17 18:20:34.830380 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [OfflineService] Starting message polling Mar 17 18:20:34.927782 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [OfflineService] Starting send replies to MDS Mar 17 18:20:35.025540 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessageGatewayService] listening reply. Mar 17 18:20:35.123349 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [HealthCheck] HealthCheck reporting agent health. Mar 17 18:20:35.221382 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [LongRunningPluginsManager] starting long running plugin manager Mar 17 18:20:35.319737 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Mar 17 18:20:35.418179 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Mar 17 18:20:35.516842 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [StartupProcessor] Executing startup processor tasks Mar 17 18:20:35.575073 kubelet[1906]: E0317 18:20:35.574955 1906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:20:35.578948 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:20:35.579247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:20:35.579699 systemd[1]: kubelet.service: Consumed 1.383s CPU time. Mar 17 18:20:35.615635 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Mar 17 18:20:35.714599 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Mar 17 18:20:35.813834 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7 Mar 17 18:20:35.913281 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0c5f0603753ecbfef?role=subscribe&stream=input Mar 17 18:20:36.012894 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0c5f0603753ecbfef?role=subscribe&stream=input Mar 17 18:20:36.112675 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessageGatewayService] Starting receiving message from control channel Mar 17 18:20:36.212752 amazon-ssm-agent[1765]: 2025-03-17 18:20:33 INFO [MessageGatewayService] [EngineProcessor] Initial processing Mar 17 18:20:36.312884 amazon-ssm-agent[1765]: 2025-03-17 18:20:34 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Mar 17 18:20:36.446717 sshd_keygen[1733]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:20:36.482209 systemd[1]: Finished sshd-keygen.service. 
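The kubelet exit recorded above is a plain missing-file failure: the unit was started before anything had written /var/lib/kubelet/config.yaml (on a kubeadm-style setup that file is normally created during node join, though the log does not say which provisioner is expected here), so the process exits with status 1 and systemd marks the unit failed. A minimal sketch of the same check, in Python rather than the kubelet's Go, using only the path quoted in the log:

```python
# Sketch only: reproduces the existence check behind the kubelet error above.
# The real kubelet performs this while loading the file passed via --config.
from pathlib import Path

CONFIG = Path("/var/lib/kubelet/config.yaml")

def load_kubelet_config(path: Path) -> str:
    if not path.exists():
        # Matches the failure mode in the log: open() -> ENOENT -> exit status 1
        raise SystemExit(f"failed to load Kubelet config file {path}: no such file or directory")
    return path.read_text()

if __name__ == "__main__":
    load_kubelet_config(CONFIG)
```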
Mar 17 18:20:36.486573 systemd[1]: Starting issuegen.service... Mar 17 18:20:36.497380 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:20:36.497775 systemd[1]: Finished issuegen.service. Mar 17 18:20:36.502251 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:20:36.517390 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:20:36.522151 systemd[1]: Started getty@tty1.service. Mar 17 18:20:36.526481 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:20:36.528641 systemd[1]: Reached target getty.target. Mar 17 18:20:36.530397 systemd[1]: Reached target multi-user.target. Mar 17 18:20:36.534635 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:20:36.551070 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:20:36.551439 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:20:36.553579 systemd[1]: Startup finished in 1.126s (kernel) + 7.601s (initrd) + 12.360s (userspace) = 21.088s. Mar 17 18:20:40.095934 systemd[1]: Created slice system-sshd.slice. Mar 17 18:20:40.099013 systemd[1]: Started sshd@0-172.31.30.28:22-139.178.89.65:42230.service. Mar 17 18:20:40.290175 sshd[1927]: Accepted publickey for core from 139.178.89.65 port 42230 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:40.295590 sshd[1927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:40.316006 systemd[1]: Created slice user-500.slice. Mar 17 18:20:40.319464 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:20:40.326776 systemd-logind[1719]: New session 1 of user core. Mar 17 18:20:40.340789 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:20:40.345414 systemd[1]: Starting user@500.service... Mar 17 18:20:40.353170 (systemd)[1930]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:40.538663 systemd[1930]: Queued start job for default target default.target. Mar 17 18:20:40.539694 systemd[1930]: Reached target paths.target. Mar 17 18:20:40.539746 systemd[1930]: Reached target sockets.target. Mar 17 18:20:40.539778 systemd[1930]: Reached target timers.target. Mar 17 18:20:40.539821 systemd[1930]: Reached target basic.target. Mar 17 18:20:40.539979 systemd[1]: Started user@500.service. Mar 17 18:20:40.541846 systemd[1]: Started session-1.scope. Mar 17 18:20:40.543141 systemd[1930]: Reached target default.target. Mar 17 18:20:40.543566 systemd[1930]: Startup finished in 178ms. Mar 17 18:20:40.688748 systemd[1]: Started sshd@1-172.31.30.28:22-139.178.89.65:42234.service. Mar 17 18:20:40.869860 sshd[1939]: Accepted publickey for core from 139.178.89.65 port 42234 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:40.872417 sshd[1939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:40.879572 systemd-logind[1719]: New session 2 of user core. Mar 17 18:20:40.881481 systemd[1]: Started session-2.scope. Mar 17 18:20:41.012924 sshd[1939]: pam_unix(sshd:session): session closed for user core Mar 17 18:20:41.018422 systemd-logind[1719]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:20:41.019569 systemd[1]: sshd@1-172.31.30.28:22-139.178.89.65:42234.service: Deactivated successfully. Mar 17 18:20:41.020866 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:20:41.022268 systemd-logind[1719]: Removed session 2. Mar 17 18:20:41.041606 systemd[1]: Started sshd@2-172.31.30.28:22-139.178.89.65:42240.service. 
Mar 17 18:20:41.219436 sshd[1945]: Accepted publickey for core from 139.178.89.65 port 42240 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:41.221986 sshd[1945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:41.230083 systemd-logind[1719]: New session 3 of user core. Mar 17 18:20:41.231042 systemd[1]: Started session-3.scope. Mar 17 18:20:41.353932 sshd[1945]: pam_unix(sshd:session): session closed for user core Mar 17 18:20:41.358398 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:20:41.359662 systemd-logind[1719]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:20:41.360052 systemd[1]: sshd@2-172.31.30.28:22-139.178.89.65:42240.service: Deactivated successfully. Mar 17 18:20:41.362441 systemd-logind[1719]: Removed session 3. Mar 17 18:20:41.384072 systemd[1]: Started sshd@3-172.31.30.28:22-139.178.89.65:38774.service. Mar 17 18:20:41.557985 sshd[1951]: Accepted publickey for core from 139.178.89.65 port 38774 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:41.560941 sshd[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:41.568255 systemd-logind[1719]: New session 4 of user core. Mar 17 18:20:41.569146 systemd[1]: Started session-4.scope. Mar 17 18:20:41.698755 sshd[1951]: pam_unix(sshd:session): session closed for user core Mar 17 18:20:41.704174 systemd-logind[1719]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:20:41.704786 systemd[1]: sshd@3-172.31.30.28:22-139.178.89.65:38774.service: Deactivated successfully. Mar 17 18:20:41.706022 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:20:41.707456 systemd-logind[1719]: Removed session 4. Mar 17 18:20:41.727094 systemd[1]: Started sshd@4-172.31.30.28:22-139.178.89.65:38790.service. Mar 17 18:20:41.902001 sshd[1957]: Accepted publickey for core from 139.178.89.65 port 38790 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:41.904436 sshd[1957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:41.912940 systemd[1]: Started session-5.scope. Mar 17 18:20:41.913723 systemd-logind[1719]: New session 5 of user core. Mar 17 18:20:42.036496 sudo[1960]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:20:42.037582 sudo[1960]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:20:42.064937 systemd[1]: Starting coreos-metadata.service... 
Mar 17 18:20:42.224977 coreos-metadata[1964]: Mar 17 18:20:42.224 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 18:20:42.225726 coreos-metadata[1964]: Mar 17 18:20:42.225 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Mar 17 18:20:42.226121 coreos-metadata[1964]: Mar 17 18:20:42.226 INFO Fetch successful Mar 17 18:20:42.226766 coreos-metadata[1964]: Mar 17 18:20:42.226 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Mar 17 18:20:42.227375 coreos-metadata[1964]: Mar 17 18:20:42.227 INFO Fetch successful Mar 17 18:20:42.227625 coreos-metadata[1964]: Mar 17 18:20:42.227 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Mar 17 18:20:42.228092 coreos-metadata[1964]: Mar 17 18:20:42.228 INFO Fetch successful Mar 17 18:20:42.228182 coreos-metadata[1964]: Mar 17 18:20:42.228 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Mar 17 18:20:42.228808 coreos-metadata[1964]: Mar 17 18:20:42.228 INFO Fetch successful Mar 17 18:20:42.228889 coreos-metadata[1964]: Mar 17 18:20:42.228 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Mar 17 18:20:42.229463 coreos-metadata[1964]: Mar 17 18:20:42.229 INFO Fetch successful Mar 17 18:20:42.229553 coreos-metadata[1964]: Mar 17 18:20:42.229 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Mar 17 18:20:42.230186 coreos-metadata[1964]: Mar 17 18:20:42.230 INFO Fetch successful Mar 17 18:20:42.230277 coreos-metadata[1964]: Mar 17 18:20:42.230 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Mar 17 18:20:42.230735 coreos-metadata[1964]: Mar 17 18:20:42.230 INFO Fetch successful Mar 17 18:20:42.230808 coreos-metadata[1964]: Mar 17 18:20:42.230 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Mar 17 18:20:42.231412 coreos-metadata[1964]: Mar 17 18:20:42.231 INFO Fetch successful Mar 17 18:20:42.245356 systemd[1]: Finished coreos-metadata.service. Mar 17 18:20:43.309476 systemd[1]: Stopped kubelet.service. Mar 17 18:20:43.310403 systemd[1]: kubelet.service: Consumed 1.383s CPU time. Mar 17 18:20:43.314584 systemd[1]: Starting kubelet.service... Mar 17 18:20:43.380074 systemd[1]: Reloading. Mar 17 18:20:43.581554 /usr/lib/systemd/system-generators/torcx-generator[2020]: time="2025-03-17T18:20:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:20:43.581610 /usr/lib/systemd/system-generators/torcx-generator[2020]: time="2025-03-17T18:20:43Z" level=info msg="torcx already run" Mar 17 18:20:43.764261 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:20:43.764299 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:20:43.802553 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:20:44.028121 systemd[1]: Started kubelet.service. 
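The coreos-metadata lines above follow the standard EC2 IMDSv2 sequence: a PUT to /latest/api/token to obtain a session token, then GETs against the 2019-10-01 metadata paths with that token attached. A rough Python equivalent of that flow (the header names are the documented IMDSv2 ones; they are not printed in the log itself):

```python
# Sketch of the IMDSv2 flow shown by the coreos-metadata fetches above.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 300) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    for path in ("instance-id", "instance-type", "local-ipv4", "public-ipv4", "hostname"):
        print(path, imds_get(path, token))
```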
Mar 17 18:20:44.035994 systemd[1]: Stopping kubelet.service... Mar 17 18:20:44.037920 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:20:44.038325 systemd[1]: Stopped kubelet.service. Mar 17 18:20:44.041527 systemd[1]: Starting kubelet.service... Mar 17 18:20:44.299057 systemd[1]: Started kubelet.service. Mar 17 18:20:44.372814 kubelet[2080]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:20:44.373389 kubelet[2080]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:20:44.373495 kubelet[2080]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:20:44.373927 kubelet[2080]: I0317 18:20:44.373872 2080 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:20:45.987470 kubelet[2080]: I0317 18:20:45.987421 2080 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 18:20:45.988107 kubelet[2080]: I0317 18:20:45.988083 2080 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:20:45.988687 kubelet[2080]: I0317 18:20:45.988626 2080 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 18:20:46.042126 kubelet[2080]: I0317 18:20:46.042067 2080 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:20:46.058357 kubelet[2080]: E0317 18:20:46.058293 2080 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:20:46.058357 kubelet[2080]: I0317 18:20:46.058347 2080 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:20:46.065359 kubelet[2080]: I0317 18:20:46.065292 2080 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:20:46.067099 kubelet[2080]: I0317 18:20:46.067045 2080 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 18:20:46.067474 kubelet[2080]: I0317 18:20:46.067405 2080 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:20:46.067780 kubelet[2080]: I0317 18:20:46.067469 2080 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.30.28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:20:46.067956 kubelet[2080]: I0317 18:20:46.067814 2080 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:20:46.067956 kubelet[2080]: I0317 18:20:46.067836 2080 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 18:20:46.068095 kubelet[2080]: I0317 18:20:46.068036 2080 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:20:46.071880 kubelet[2080]: I0317 18:20:46.071826 2080 kubelet.go:408] "Attempting to sync node with API server" Mar 17 18:20:46.072083 kubelet[2080]: I0317 18:20:46.071907 2080 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:20:46.072083 kubelet[2080]: I0317 18:20:46.072002 2080 kubelet.go:314] "Adding apiserver pod source" Mar 17 18:20:46.072083 kubelet[2080]: I0317 18:20:46.072028 2080 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:20:46.072474 kubelet[2080]: E0317 18:20:46.072423 2080 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:46.072643 kubelet[2080]: E0317 18:20:46.072619 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:46.079717 kubelet[2080]: I0317 18:20:46.079641 2080 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:20:46.081239 kubelet[2080]: W0317 18:20:46.081200 2080 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.30.28" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 18:20:46.090012 kubelet[2080]: I0317 18:20:46.089961 2080 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:20:46.093496 kubelet[2080]: E0317 18:20:46.093403 2080 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.30.28\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 17 18:20:46.093890 kubelet[2080]: W0317 18:20:46.093858 2080 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 18:20:46.094093 kubelet[2080]: E0317 18:20:46.094063 2080 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 17 18:20:46.094695 kubelet[2080]: W0317 18:20:46.094630 2080 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:20:46.095771 kubelet[2080]: I0317 18:20:46.095725 2080 server.go:1269] "Started kubelet" Mar 17 18:20:46.102720 kubelet[2080]: I0317 18:20:46.102646 2080 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:20:46.104722 kubelet[2080]: I0317 18:20:46.104690 2080 server.go:460] "Adding debug handlers to kubelet server" Mar 17 18:20:46.109353 kubelet[2080]: I0317 18:20:46.109224 2080 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:20:46.109804 kubelet[2080]: I0317 18:20:46.109758 2080 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:20:46.116922 kubelet[2080]: E0317 18:20:46.116883 2080 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:20:46.120335 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Mar 17 18:20:46.120545 kubelet[2080]: I0317 18:20:46.120501 2080 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:20:46.127040 kubelet[2080]: I0317 18:20:46.127000 2080 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:20:46.130601 kubelet[2080]: I0317 18:20:46.127743 2080 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 18:20:46.131166 kubelet[2080]: I0317 18:20:46.127783 2080 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 18:20:46.131285 kubelet[2080]: I0317 18:20:46.131253 2080 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:20:46.131285 kubelet[2080]: E0317 18:20:46.128035 2080 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.28\" not found" Mar 17 18:20:46.132786 kubelet[2080]: I0317 18:20:46.132743 2080 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:20:46.136836 kubelet[2080]: E0317 18:20:46.136797 2080 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.30.28\" not found" node="172.31.30.28" Mar 17 18:20:46.142094 kubelet[2080]: I0317 18:20:46.142060 2080 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:20:46.142301 kubelet[2080]: I0317 18:20:46.142279 2080 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:20:46.172509 kubelet[2080]: I0317 18:20:46.172449 2080 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:20:46.173373 kubelet[2080]: I0317 18:20:46.173344 2080 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:20:46.173684 kubelet[2080]: I0317 18:20:46.173641 2080 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:20:46.176940 kubelet[2080]: I0317 18:20:46.176886 2080 policy_none.go:49] "None policy: Start" Mar 17 18:20:46.179381 kubelet[2080]: I0317 18:20:46.179324 2080 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:20:46.179619 kubelet[2080]: I0317 18:20:46.179600 2080 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:20:46.190876 systemd[1]: Created slice kubepods.slice. Mar 17 18:20:46.203938 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 18:20:46.210072 systemd[1]: Created slice kubepods-besteffort.slice. Mar 17 18:20:46.219445 kubelet[2080]: I0317 18:20:46.219402 2080 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:20:46.219779 kubelet[2080]: I0317 18:20:46.219731 2080 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:20:46.219859 kubelet[2080]: I0317 18:20:46.219764 2080 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:20:46.222221 kubelet[2080]: I0317 18:20:46.221350 2080 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:20:46.224109 kubelet[2080]: E0317 18:20:46.224063 2080 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.30.28\" not found" Mar 17 18:20:46.302591 kubelet[2080]: I0317 18:20:46.302450 2080 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 17 18:20:46.304638 kubelet[2080]: I0317 18:20:46.304596 2080 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:20:46.304839 kubelet[2080]: I0317 18:20:46.304817 2080 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:20:46.305013 kubelet[2080]: I0317 18:20:46.304991 2080 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 18:20:46.305198 kubelet[2080]: E0317 18:20:46.305172 2080 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 17 18:20:46.323448 kubelet[2080]: I0317 18:20:46.323413 2080 kubelet_node_status.go:72] "Attempting to register node" node="172.31.30.28" Mar 17 18:20:46.341351 kubelet[2080]: I0317 18:20:46.341313 2080 kubelet_node_status.go:75] "Successfully registered node" node="172.31.30.28" Mar 17 18:20:46.341607 kubelet[2080]: E0317 18:20:46.341582 2080 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.30.28\": node \"172.31.30.28\" not found" Mar 17 18:20:46.368840 kubelet[2080]: E0317 18:20:46.368796 2080 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.28\" not found" Mar 17 18:20:46.469029 kubelet[2080]: E0317 18:20:46.468993 2080 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.28\" not found" Mar 17 18:20:46.570294 kubelet[2080]: E0317 18:20:46.570182 2080 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.28\" not found" Mar 17 18:20:46.671146 kubelet[2080]: E0317 18:20:46.671093 2080 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.28\" not found" Mar 17 18:20:46.771714 kubelet[2080]: E0317 18:20:46.771684 2080 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.28\" not found" Mar 17 18:20:46.837367 sudo[1960]: pam_unix(sudo:session): session closed for user root Mar 17 18:20:46.862025 sshd[1957]: pam_unix(sshd:session): session closed for user core Mar 17 18:20:46.866706 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:20:46.867900 systemd[1]: sshd@4-172.31.30.28:22-139.178.89.65:38790.service: Deactivated successfully. Mar 17 18:20:46.869552 systemd-logind[1719]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:20:46.871289 systemd-logind[1719]: Removed session 5. 
Mar 17 18:20:46.872370 kubelet[2080]: E0317 18:20:46.872333 2080 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.28\" not found" Mar 17 18:20:46.972951 kubelet[2080]: E0317 18:20:46.972916 2080 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.28\" not found" Mar 17 18:20:46.992343 kubelet[2080]: I0317 18:20:46.992290 2080 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 17 18:20:46.992877 kubelet[2080]: W0317 18:20:46.992472 2080 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 18:20:46.992877 kubelet[2080]: W0317 18:20:46.992523 2080 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 18:20:46.992877 kubelet[2080]: W0317 18:20:46.992567 2080 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 18:20:47.073332 kubelet[2080]: E0317 18:20:47.073281 2080 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.28\" not found" Mar 17 18:20:47.073435 kubelet[2080]: E0317 18:20:47.073349 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:47.173493 kubelet[2080]: E0317 18:20:47.173388 2080 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.28\" not found" Mar 17 18:20:47.274095 kubelet[2080]: E0317 18:20:47.274041 2080 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.28\" not found" Mar 17 18:20:47.375125 kubelet[2080]: E0317 18:20:47.375088 2080 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.28\" not found" Mar 17 18:20:47.475931 kubelet[2080]: E0317 18:20:47.475805 2080 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.28\" not found" Mar 17 18:20:47.577768 kubelet[2080]: I0317 18:20:47.577707 2080 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 17 18:20:47.578240 env[1727]: time="2025-03-17T18:20:47.578180035Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 18:20:47.579518 kubelet[2080]: I0317 18:20:47.579473 2080 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 17 18:20:48.073700 kubelet[2080]: E0317 18:20:48.073637 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:48.074229 kubelet[2080]: I0317 18:20:48.073723 2080 apiserver.go:52] "Watching apiserver" Mar 17 18:20:48.087969 systemd[1]: Created slice kubepods-besteffort-podc8e9df21_32aa_4cbb_af60_cf1f2da555bb.slice. Mar 17 18:20:48.104805 systemd[1]: Created slice kubepods-burstable-pod5bd26170_28c9_48f8_a100_0f124a02c2a8.slice. 
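The "Updating Pod CIDR" and "No cni config template is specified" messages above mean containerd is waiting for a CNI plugin to drop a network config under /etc/cni/net.d; on this node Cilium will eventually provide it. Purely as an illustration of the kind of file that ends that wait (not what Cilium actually writes), a sketch that emits a minimal host-local config for the CIDR the kubelet reported:

```python
# Illustrative only: a minimal CNI config for the 192.168.1.0/24 pod CIDR seen
# in the log. Cilium installs its own, different configuration; the fields used
# here (cniVersion, type, ipam) are standard CNI bridge-plugin fields.
import json

cni_conf = {
    "cniVersion": "0.3.1",
    "name": "example-pod-net",        # hypothetical name, not taken from the log
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": True,
    "ipMasq": True,
    "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",   # value reported by the kubelet above
    },
}

# Would typically be written to something like /etc/cni/net.d/10-example.conf
print(json.dumps(cni_conf, indent=2))
```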
Mar 17 18:20:48.132425 kubelet[2080]: I0317 18:20:48.132359 2080 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 18:20:48.145519 kubelet[2080]: I0317 18:20:48.145448 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5bd26170-28c9-48f8-a100-0f124a02c2a8-clustermesh-secrets\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.145878 kubelet[2080]: I0317 18:20:48.145848 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-host-proc-sys-net\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.146105 kubelet[2080]: I0317 18:20:48.146078 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-cilium-run\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.146325 kubelet[2080]: I0317 18:20:48.146259 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-cilium-cgroup\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.146523 kubelet[2080]: I0317 18:20:48.146499 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-lib-modules\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.146734 kubelet[2080]: I0317 18:20:48.146710 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-xtables-lock\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.146951 kubelet[2080]: I0317 18:20:48.146887 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8e9df21-32aa-4cbb-af60-cf1f2da555bb-xtables-lock\") pod \"kube-proxy-5mplk\" (UID: \"c8e9df21-32aa-4cbb-af60-cf1f2da555bb\") " pod="kube-system/kube-proxy-5mplk" Mar 17 18:20:48.147117 kubelet[2080]: I0317 18:20:48.147093 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8e9df21-32aa-4cbb-af60-cf1f2da555bb-lib-modules\") pod \"kube-proxy-5mplk\" (UID: \"c8e9df21-32aa-4cbb-af60-cf1f2da555bb\") " pod="kube-system/kube-proxy-5mplk" Mar 17 18:20:48.147337 kubelet[2080]: I0317 18:20:48.147271 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-hostproc\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.147504 
kubelet[2080]: I0317 18:20:48.147481 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-cni-path\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.147733 kubelet[2080]: I0317 18:20:48.147706 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bd26170-28c9-48f8-a100-0f124a02c2a8-cilium-config-path\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.147974 kubelet[2080]: I0317 18:20:48.147950 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c8e9df21-32aa-4cbb-af60-cf1f2da555bb-kube-proxy\") pod \"kube-proxy-5mplk\" (UID: \"c8e9df21-32aa-4cbb-af60-cf1f2da555bb\") " pod="kube-system/kube-proxy-5mplk" Mar 17 18:20:48.148186 kubelet[2080]: I0317 18:20:48.148109 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-bpf-maps\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.148358 kubelet[2080]: I0317 18:20:48.148334 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-etc-cni-netd\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.148607 kubelet[2080]: I0317 18:20:48.148582 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-host-proc-sys-kernel\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.148812 kubelet[2080]: I0317 18:20:48.148767 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghjjc\" (UniqueName: \"kubernetes.io/projected/c8e9df21-32aa-4cbb-af60-cf1f2da555bb-kube-api-access-ghjjc\") pod \"kube-proxy-5mplk\" (UID: \"c8e9df21-32aa-4cbb-af60-cf1f2da555bb\") " pod="kube-system/kube-proxy-5mplk" Mar 17 18:20:48.149028 kubelet[2080]: I0317 18:20:48.149004 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5bd26170-28c9-48f8-a100-0f124a02c2a8-hubble-tls\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.149185 kubelet[2080]: I0317 18:20:48.149162 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rslp8\" (UniqueName: \"kubernetes.io/projected/5bd26170-28c9-48f8-a100-0f124a02c2a8-kube-api-access-rslp8\") pod \"cilium-zdlvf\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " pod="kube-system/cilium-zdlvf" Mar 17 18:20:48.250784 kubelet[2080]: I0317 18:20:48.250721 2080 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 17 18:20:48.403011 env[1727]: time="2025-03-17T18:20:48.402949203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5mplk,Uid:c8e9df21-32aa-4cbb-af60-cf1f2da555bb,Namespace:kube-system,Attempt:0,}" Mar 17 18:20:48.418184 env[1727]: time="2025-03-17T18:20:48.418125288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zdlvf,Uid:5bd26170-28c9-48f8-a100-0f124a02c2a8,Namespace:kube-system,Attempt:0,}" Mar 17 18:20:49.015478 env[1727]: time="2025-03-17T18:20:49.015394304Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:49.019999 env[1727]: time="2025-03-17T18:20:49.019930528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:49.026622 env[1727]: time="2025-03-17T18:20:49.026570532Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:49.029497 env[1727]: time="2025-03-17T18:20:49.029449674Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:49.031126 env[1727]: time="2025-03-17T18:20:49.031072449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:49.035246 env[1727]: time="2025-03-17T18:20:49.035177207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:49.036933 env[1727]: time="2025-03-17T18:20:49.036887205Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:49.043458 env[1727]: time="2025-03-17T18:20:49.043386544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:49.074615 kubelet[2080]: E0317 18:20:49.074553 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:49.082083 env[1727]: time="2025-03-17T18:20:49.081920351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:20:49.082295 env[1727]: time="2025-03-17T18:20:49.082033606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:20:49.082295 env[1727]: time="2025-03-17T18:20:49.082097622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:20:49.082869 env[1727]: time="2025-03-17T18:20:49.082744629Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49dd6cf6231abb20b6a0178e44698784f75cd8367891a397d0ff34574128c710 pid=2140 runtime=io.containerd.runc.v2 Mar 17 18:20:49.087593 env[1727]: time="2025-03-17T18:20:49.087446856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:20:49.087593 env[1727]: time="2025-03-17T18:20:49.087531025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:20:49.087918 env[1727]: time="2025-03-17T18:20:49.087559118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:20:49.088507 env[1727]: time="2025-03-17T18:20:49.088403956Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc pid=2147 runtime=io.containerd.runc.v2 Mar 17 18:20:49.114812 systemd[1]: Started cri-containerd-3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc.scope. Mar 17 18:20:49.139044 systemd[1]: Started cri-containerd-49dd6cf6231abb20b6a0178e44698784f75cd8367891a397d0ff34574128c710.scope. Mar 17 18:20:49.184243 env[1727]: time="2025-03-17T18:20:49.184189088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zdlvf,Uid:5bd26170-28c9-48f8-a100-0f124a02c2a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\"" Mar 17 18:20:49.188444 env[1727]: time="2025-03-17T18:20:49.188392864Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:20:49.215603 env[1727]: time="2025-03-17T18:20:49.215531189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5mplk,Uid:c8e9df21-32aa-4cbb-af60-cf1f2da555bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"49dd6cf6231abb20b6a0178e44698784f75cd8367891a397d0ff34574128c710\"" Mar 17 18:20:49.267349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4169037690.mount: Deactivated successfully. 
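For readability of the containerd messages that follow: the two long sandbox IDs returned by RunPodSandbox above are what every later "CreateContainer within sandbox ..." line refers back to. Both IDs below are copied verbatim from the log:

```python
# Mapping of sandbox ID -> pod, taken directly from the RunPodSandbox results above.
SANDBOXES = {
    "3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc": "kube-system/cilium-zdlvf",
    "49dd6cf6231abb20b6a0178e44698784f75cd8367891a397d0ff34574128c710": "kube-system/kube-proxy-5mplk",
}
```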
Mar 17 18:20:50.074869 kubelet[2080]: E0317 18:20:50.074813 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:51.075155 kubelet[2080]: E0317 18:20:51.075083 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:52.075506 kubelet[2080]: E0317 18:20:52.075441 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:53.076393 kubelet[2080]: E0317 18:20:53.076334 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:54.076799 kubelet[2080]: E0317 18:20:54.076700 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:55.077121 kubelet[2080]: E0317 18:20:55.077056 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:56.078047 kubelet[2080]: E0317 18:20:56.077904 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:57.078830 kubelet[2080]: E0317 18:20:57.078759 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:58.079344 kubelet[2080]: E0317 18:20:58.079293 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:20:58.957934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911387629.mount: Deactivated successfully. Mar 17 18:20:59.080417 kubelet[2080]: E0317 18:20:59.080311 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:00.081113 kubelet[2080]: E0317 18:21:00.081030 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:01.082147 kubelet[2080]: E0317 18:21:01.082091 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:02.082817 kubelet[2080]: E0317 18:21:02.082744 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:02.581604 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Mar 17 18:21:02.839453 env[1727]: time="2025-03-17T18:21:02.839309563Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:02.844211 env[1727]: time="2025-03-17T18:21:02.844159828Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:02.847990 env[1727]: time="2025-03-17T18:21:02.847921667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:02.849293 env[1727]: time="2025-03-17T18:21:02.849245024Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 18:21:02.852946 env[1727]: time="2025-03-17T18:21:02.852870859Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 18:21:02.856489 env[1727]: time="2025-03-17T18:21:02.856410662Z" level=info msg="CreateContainer within sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:21:02.880943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1996815896.mount: Deactivated successfully. Mar 17 18:21:02.893364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4001628221.mount: Deactivated successfully. Mar 17 18:21:02.916645 env[1727]: time="2025-03-17T18:21:02.916559902Z" level=info msg="CreateContainer within sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22\"" Mar 17 18:21:02.918177 env[1727]: time="2025-03-17T18:21:02.918111449Z" level=info msg="StartContainer for \"1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22\"" Mar 17 18:21:02.955351 systemd[1]: Started cri-containerd-1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22.scope. Mar 17 18:21:03.016802 env[1727]: time="2025-03-17T18:21:03.016699840Z" level=info msg="StartContainer for \"1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22\" returns successfully" Mar 17 18:21:03.037528 systemd[1]: cri-containerd-1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22.scope: Deactivated successfully. Mar 17 18:21:03.083731 kubelet[2080]: E0317 18:21:03.083642 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:03.876598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22-rootfs.mount: Deactivated successfully. 
Mar 17 18:21:04.084434 kubelet[2080]: E0317 18:21:04.084392 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:04.492267 env[1727]: time="2025-03-17T18:21:04.492183308Z" level=info msg="shim disconnected" id=1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22 Mar 17 18:21:04.492928 env[1727]: time="2025-03-17T18:21:04.492887756Z" level=warning msg="cleaning up after shim disconnected" id=1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22 namespace=k8s.io Mar 17 18:21:04.493057 env[1727]: time="2025-03-17T18:21:04.493029380Z" level=info msg="cleaning up dead shim" Mar 17 18:21:04.507945 env[1727]: time="2025-03-17T18:21:04.507890198Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:21:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2263 runtime=io.containerd.runc.v2\n" Mar 17 18:21:04.619477 amazon-ssm-agent[1765]: 2025-03-17 18:21:04 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Mar 17 18:21:05.087620 kubelet[2080]: E0317 18:21:05.087527 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:05.384577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3614205564.mount: Deactivated successfully. Mar 17 18:21:05.395124 env[1727]: time="2025-03-17T18:21:05.395052837Z" level=info msg="CreateContainer within sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:21:05.446011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529749388.mount: Deactivated successfully. Mar 17 18:21:05.462357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1040462987.mount: Deactivated successfully. Mar 17 18:21:05.469785 env[1727]: time="2025-03-17T18:21:05.469696023Z" level=info msg="CreateContainer within sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8\"" Mar 17 18:21:05.472027 env[1727]: time="2025-03-17T18:21:05.471003926Z" level=info msg="StartContainer for \"5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8\"" Mar 17 18:21:05.514162 systemd[1]: Started cri-containerd-5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8.scope. Mar 17 18:21:05.586541 env[1727]: time="2025-03-17T18:21:05.586472931Z" level=info msg="StartContainer for \"5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8\" returns successfully" Mar 17 18:21:05.609182 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:21:05.611443 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:21:05.611818 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:21:05.615803 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:21:05.626941 systemd[1]: cri-containerd-5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8.scope: Deactivated successfully. Mar 17 18:21:05.640477 systemd[1]: Finished systemd-sysctl.service. 
Mar 17 18:21:05.764014 env[1727]: time="2025-03-17T18:21:05.763952847Z" level=info msg="shim disconnected" id=5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8 Mar 17 18:21:05.764419 env[1727]: time="2025-03-17T18:21:05.764384199Z" level=warning msg="cleaning up after shim disconnected" id=5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8 namespace=k8s.io Mar 17 18:21:05.764562 env[1727]: time="2025-03-17T18:21:05.764533791Z" level=info msg="cleaning up dead shim" Mar 17 18:21:05.780836 env[1727]: time="2025-03-17T18:21:05.780779624Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:21:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2328 runtime=io.containerd.runc.v2\n" Mar 17 18:21:06.073251 kubelet[2080]: E0317 18:21:06.072676 2080 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:06.088251 kubelet[2080]: E0317 18:21:06.088147 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:06.393592 env[1727]: time="2025-03-17T18:21:06.393508804Z" level=info msg="CreateContainer within sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:21:06.419056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2175713298.mount: Deactivated successfully. Mar 17 18:21:06.445267 env[1727]: time="2025-03-17T18:21:06.445197644Z" level=info msg="CreateContainer within sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d\"" Mar 17 18:21:06.446287 env[1727]: time="2025-03-17T18:21:06.445439528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:06.447267 env[1727]: time="2025-03-17T18:21:06.447191611Z" level=info msg="StartContainer for \"d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d\"" Mar 17 18:21:06.449255 env[1727]: time="2025-03-17T18:21:06.449207022Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:06.455773 env[1727]: time="2025-03-17T18:21:06.455724892Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:06.458631 env[1727]: time="2025-03-17T18:21:06.458574531Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:06.462269 env[1727]: time="2025-03-17T18:21:06.461181074Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 17 18:21:06.465555 env[1727]: time="2025-03-17T18:21:06.465468516Z" level=info msg="CreateContainer within sandbox \"49dd6cf6231abb20b6a0178e44698784f75cd8367891a397d0ff34574128c710\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 
18:21:06.488537 env[1727]: time="2025-03-17T18:21:06.488453523Z" level=info msg="CreateContainer within sandbox \"49dd6cf6231abb20b6a0178e44698784f75cd8367891a397d0ff34574128c710\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb5513ad40d664dca26fcfd20632f44f8d7b52e4f583076ea80b89f1948e994a\"" Mar 17 18:21:06.489764 env[1727]: time="2025-03-17T18:21:06.489700659Z" level=info msg="StartContainer for \"fb5513ad40d664dca26fcfd20632f44f8d7b52e4f583076ea80b89f1948e994a\"" Mar 17 18:21:06.497387 systemd[1]: Started cri-containerd-d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d.scope. Mar 17 18:21:06.539742 systemd[1]: Started cri-containerd-fb5513ad40d664dca26fcfd20632f44f8d7b52e4f583076ea80b89f1948e994a.scope. Mar 17 18:21:06.593831 env[1727]: time="2025-03-17T18:21:06.593720339Z" level=info msg="StartContainer for \"d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d\" returns successfully" Mar 17 18:21:06.603527 systemd[1]: cri-containerd-d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d.scope: Deactivated successfully. Mar 17 18:21:06.665926 env[1727]: time="2025-03-17T18:21:06.665794939Z" level=info msg="StartContainer for \"fb5513ad40d664dca26fcfd20632f44f8d7b52e4f583076ea80b89f1948e994a\" returns successfully" Mar 17 18:21:06.733862 env[1727]: time="2025-03-17T18:21:06.733795901Z" level=info msg="shim disconnected" id=d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d Mar 17 18:21:06.734401 env[1727]: time="2025-03-17T18:21:06.734359109Z" level=warning msg="cleaning up after shim disconnected" id=d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d namespace=k8s.io Mar 17 18:21:06.734587 env[1727]: time="2025-03-17T18:21:06.734547245Z" level=info msg="cleaning up dead shim" Mar 17 18:21:06.749215 env[1727]: time="2025-03-17T18:21:06.749158847Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:21:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2422 runtime=io.containerd.runc.v2\n" Mar 17 18:21:07.089061 kubelet[2080]: E0317 18:21:07.089007 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:07.401480 env[1727]: time="2025-03-17T18:21:07.401052102Z" level=info msg="CreateContainer within sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:21:07.419680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196563150.mount: Deactivated successfully. Mar 17 18:21:07.434596 env[1727]: time="2025-03-17T18:21:07.434524193Z" level=info msg="CreateContainer within sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5\"" Mar 17 18:21:07.434945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592509066.mount: Deactivated successfully. 
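The init container named mount-bpf-fs that just ran is, as the name suggests, the step that makes a BPF filesystem available to the Cilium agent; the conventional mountpoint is /sys/fs/bpf, although the log does not show the container's actual command. A hedged sketch of the equivalent host-side operation, assuming that mountpoint:

```python
# Sketch, assuming the conventional bpffs mountpoint; the real init container
# runs its own script inside the pod's mount namespace and needs privileges.
import subprocess
from pathlib import Path

def ensure_bpffs(mountpoint: str = "/sys/fs/bpf") -> None:
    Path(mountpoint).mkdir(parents=True, exist_ok=True)
    # Skip if a bpf filesystem is already mounted there (idempotent check).
    with open("/proc/mounts") as mounts:
        for line in mounts:
            fields = line.split()
            if fields[1] == mountpoint and fields[2] == "bpf":
                return
    # Equivalent to: mount -t bpf bpf /sys/fs/bpf
    subprocess.run(["mount", "-t", "bpf", "bpf", mountpoint], check=True)
```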
Mar 17 18:21:07.438217 env[1727]: time="2025-03-17T18:21:07.438159640Z" level=info msg="StartContainer for \"2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5\"" Mar 17 18:21:07.450069 kubelet[2080]: I0317 18:21:07.449974 2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5mplk" podStartSLOduration=4.204480252 podStartE2EDuration="21.449951724s" podCreationTimestamp="2025-03-17 18:20:46 +0000 UTC" firstStartedPulling="2025-03-17 18:20:49.217845637 +0000 UTC m=+4.910429855" lastFinishedPulling="2025-03-17 18:21:06.463317109 +0000 UTC m=+22.155901327" observedRunningTime="2025-03-17 18:21:07.449863524 +0000 UTC m=+23.142447754" watchObservedRunningTime="2025-03-17 18:21:07.449951724 +0000 UTC m=+23.142535966" Mar 17 18:21:07.469098 systemd[1]: Started cri-containerd-2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5.scope. Mar 17 18:21:07.528616 systemd[1]: cri-containerd-2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5.scope: Deactivated successfully. Mar 17 18:21:07.531848 env[1727]: time="2025-03-17T18:21:07.531646734Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bd26170_28c9_48f8_a100_0f124a02c2a8.slice/cri-containerd-2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5.scope/memory.events\": no such file or directory" Mar 17 18:21:07.535647 env[1727]: time="2025-03-17T18:21:07.535543745Z" level=info msg="StartContainer for \"2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5\" returns successfully" Mar 17 18:21:07.568472 env[1727]: time="2025-03-17T18:21:07.568397993Z" level=info msg="shim disconnected" id=2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5 Mar 17 18:21:07.568793 env[1727]: time="2025-03-17T18:21:07.568469705Z" level=warning msg="cleaning up after shim disconnected" id=2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5 namespace=k8s.io Mar 17 18:21:07.568793 env[1727]: time="2025-03-17T18:21:07.568495073Z" level=info msg="cleaning up dead shim" Mar 17 18:21:07.582610 env[1727]: time="2025-03-17T18:21:07.582537348Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:21:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2611 runtime=io.containerd.runc.v2\n" Mar 17 18:21:08.090177 kubelet[2080]: E0317 18:21:08.090116 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:08.413394 env[1727]: time="2025-03-17T18:21:08.413067015Z" level=info msg="CreateContainer within sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:21:08.442206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819341057.mount: Deactivated successfully. Mar 17 18:21:08.454457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067553468.mount: Deactivated successfully. 
Mar 17 18:21:08.461014 env[1727]: time="2025-03-17T18:21:08.460932694Z" level=info msg="CreateContainer within sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\"" Mar 17 18:21:08.462112 env[1727]: time="2025-03-17T18:21:08.462065182Z" level=info msg="StartContainer for \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\"" Mar 17 18:21:08.492063 systemd[1]: Started cri-containerd-60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff.scope. Mar 17 18:21:08.563587 env[1727]: time="2025-03-17T18:21:08.563499059Z" level=info msg="StartContainer for \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\" returns successfully" Mar 17 18:21:08.730191 kubelet[2080]: I0317 18:21:08.730030 2080 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 18:21:08.782701 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Mar 17 18:21:09.090688 kubelet[2080]: E0317 18:21:09.090595 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:09.579703 kernel: Initializing XFRM netlink socket Mar 17 18:21:09.585719 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Mar 17 18:21:09.786285 kubelet[2080]: I0317 18:21:09.786194 2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zdlvf" podStartSLOduration=10.122023931 podStartE2EDuration="23.786149723s" podCreationTimestamp="2025-03-17 18:20:46 +0000 UTC" firstStartedPulling="2025-03-17 18:20:49.187207674 +0000 UTC m=+4.879791904" lastFinishedPulling="2025-03-17 18:21:02.851333466 +0000 UTC m=+18.543917696" observedRunningTime="2025-03-17 18:21:09.46088264 +0000 UTC m=+25.153466894" watchObservedRunningTime="2025-03-17 18:21:09.786149723 +0000 UTC m=+25.478733953" Mar 17 18:21:09.798306 systemd[1]: Created slice kubepods-besteffort-pod8e2f2206_3b33_462d_9c9d_c69c908c1590.slice. 
Mar 17 18:21:09.891230 kubelet[2080]: I0317 18:21:09.891056 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjqj4\" (UniqueName: \"kubernetes.io/projected/8e2f2206-3b33-462d-9c9d-c69c908c1590-kube-api-access-jjqj4\") pod \"nginx-deployment-8587fbcb89-mhmdz\" (UID: \"8e2f2206-3b33-462d-9c9d-c69c908c1590\") " pod="default/nginx-deployment-8587fbcb89-mhmdz" Mar 17 18:21:10.090993 kubelet[2080]: E0317 18:21:10.090944 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:10.104809 env[1727]: time="2025-03-17T18:21:10.104308633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mhmdz,Uid:8e2f2206-3b33-462d-9c9d-c69c908c1590,Namespace:default,Attempt:0,}" Mar 17 18:21:11.092279 kubelet[2080]: E0317 18:21:11.092208 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:11.390709 systemd-networkd[1457]: cilium_host: Link UP Mar 17 18:21:11.398131 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 17 18:21:11.398317 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 18:21:11.392961 systemd-networkd[1457]: cilium_net: Link UP Mar 17 18:21:11.395524 systemd-networkd[1457]: cilium_net: Gained carrier Mar 17 18:21:11.398512 (udev-worker)[2764]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:21:11.401017 systemd-networkd[1457]: cilium_host: Gained carrier Mar 17 18:21:11.406214 (udev-worker)[2765]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:21:11.568218 systemd-networkd[1457]: cilium_net: Gained IPv6LL Mar 17 18:21:11.570508 systemd-networkd[1457]: cilium_vxlan: Link UP Mar 17 18:21:11.570527 systemd-networkd[1457]: cilium_vxlan: Gained carrier Mar 17 18:21:12.007886 systemd-networkd[1457]: cilium_host: Gained IPv6LL Mar 17 18:21:12.037748 kernel: NET: Registered PF_ALG protocol family Mar 17 18:21:12.092544 kubelet[2080]: E0317 18:21:12.092502 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:12.967864 systemd-networkd[1457]: cilium_vxlan: Gained IPv6LL Mar 17 18:21:13.094113 kubelet[2080]: E0317 18:21:13.094071 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:13.316601 (udev-worker)[2784]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 18:21:13.318821 systemd-networkd[1457]: lxc_health: Link UP Mar 17 18:21:13.332409 systemd-networkd[1457]: lxc_health: Gained carrier Mar 17 18:21:13.332694 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:21:13.662516 systemd-networkd[1457]: lxc121499fb44aa: Link UP Mar 17 18:21:13.668771 kernel: eth0: renamed from tmp17325 Mar 17 18:21:13.676697 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc121499fb44aa: link becomes ready Mar 17 18:21:13.677542 systemd-networkd[1457]: lxc121499fb44aa: Gained carrier Mar 17 18:21:14.095354 kubelet[2080]: E0317 18:21:14.095292 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:14.511575 systemd-networkd[1457]: lxc_health: Gained IPv6LL Mar 17 18:21:14.887841 systemd-networkd[1457]: lxc121499fb44aa: Gained IPv6LL Mar 17 18:21:15.095482 kubelet[2080]: E0317 18:21:15.095410 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:16.096609 kubelet[2080]: E0317 18:21:16.096525 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:17.096829 kubelet[2080]: E0317 18:21:17.096752 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:17.469966 update_engine[1720]: I0317 18:21:17.469810 1720 update_attempter.cc:509] Updating boot flags... Mar 17 18:21:18.101072 kubelet[2080]: E0317 18:21:18.100754 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:19.101491 kubelet[2080]: E0317 18:21:19.101416 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:20.101911 kubelet[2080]: E0317 18:21:20.101853 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:21.103688 kubelet[2080]: E0317 18:21:21.103624 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:21.906741 env[1727]: time="2025-03-17T18:21:21.906577079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:21:21.906741 env[1727]: time="2025-03-17T18:21:21.906686903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:21:21.907488 env[1727]: time="2025-03-17T18:21:21.906716147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:21:21.907488 env[1727]: time="2025-03-17T18:21:21.907019987Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/173252d099748931dcbd510fe349057bc403ac6f9bf19094cc72ad7386a48f07 pid=3311 runtime=io.containerd.runc.v2 Mar 17 18:21:21.944134 systemd[1]: run-containerd-runc-k8s.io-173252d099748931dcbd510fe349057bc403ac6f9bf19094cc72ad7386a48f07-runc.v9GG77.mount: Deactivated successfully. Mar 17 18:21:21.949542 systemd[1]: Started cri-containerd-173252d099748931dcbd510fe349057bc403ac6f9bf19094cc72ad7386a48f07.scope. 
Mar 17 18:21:22.019709 env[1727]: time="2025-03-17T18:21:22.019622532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mhmdz,Uid:8e2f2206-3b33-462d-9c9d-c69c908c1590,Namespace:default,Attempt:0,} returns sandbox id \"173252d099748931dcbd510fe349057bc403ac6f9bf19094cc72ad7386a48f07\"" Mar 17 18:21:22.022568 env[1727]: time="2025-03-17T18:21:22.022055412Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 18:21:22.105065 kubelet[2080]: E0317 18:21:22.104922 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:23.105519 kubelet[2080]: E0317 18:21:23.105454 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:24.106328 kubelet[2080]: E0317 18:21:24.106271 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:25.106512 kubelet[2080]: E0317 18:21:25.106436 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:25.796182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090029450.mount: Deactivated successfully. Mar 17 18:21:26.072614 kubelet[2080]: E0317 18:21:26.072476 2080 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:26.107465 kubelet[2080]: E0317 18:21:26.107402 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:27.108630 kubelet[2080]: E0317 18:21:27.108558 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:27.993983 env[1727]: time="2025-03-17T18:21:27.993905077Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:28.000147 env[1727]: time="2025-03-17T18:21:27.999070992Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:28.004726 env[1727]: time="2025-03-17T18:21:28.004677475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:28.008232 env[1727]: time="2025-03-17T18:21:28.008181751Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:28.009617 env[1727]: time="2025-03-17T18:21:28.009570559Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 18:21:28.014744 env[1727]: time="2025-03-17T18:21:28.014671710Z" level=info msg="CreateContainer within sandbox \"173252d099748931dcbd510fe349057bc403ac6f9bf19094cc72ad7386a48f07\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 18:21:28.040814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1445216434.mount: Deactivated successfully. 
Mar 17 18:21:28.051746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447350131.mount: Deactivated successfully. Mar 17 18:21:28.057152 env[1727]: time="2025-03-17T18:21:28.057073165Z" level=info msg="CreateContainer within sandbox \"173252d099748931dcbd510fe349057bc403ac6f9bf19094cc72ad7386a48f07\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1b8d3bdb05e526dfbcd734372136bf1d43ab3bfb25215cb189f0c4c87b001a85\"" Mar 17 18:21:28.058307 env[1727]: time="2025-03-17T18:21:28.058252116Z" level=info msg="StartContainer for \"1b8d3bdb05e526dfbcd734372136bf1d43ab3bfb25215cb189f0c4c87b001a85\"" Mar 17 18:21:28.096572 systemd[1]: Started cri-containerd-1b8d3bdb05e526dfbcd734372136bf1d43ab3bfb25215cb189f0c4c87b001a85.scope. Mar 17 18:21:28.109394 kubelet[2080]: E0317 18:21:28.109286 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:28.159351 env[1727]: time="2025-03-17T18:21:28.159289536Z" level=info msg="StartContainer for \"1b8d3bdb05e526dfbcd734372136bf1d43ab3bfb25215cb189f0c4c87b001a85\" returns successfully" Mar 17 18:21:28.476850 kubelet[2080]: I0317 18:21:28.476762 2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-mhmdz" podStartSLOduration=13.48605801 podStartE2EDuration="19.476741108s" podCreationTimestamp="2025-03-17 18:21:09 +0000 UTC" firstStartedPulling="2025-03-17 18:21:22.021615396 +0000 UTC m=+37.714199626" lastFinishedPulling="2025-03-17 18:21:28.012298506 +0000 UTC m=+43.704882724" observedRunningTime="2025-03-17 18:21:28.475944224 +0000 UTC m=+44.168528478" watchObservedRunningTime="2025-03-17 18:21:28.476741108 +0000 UTC m=+44.169325362" Mar 17 18:21:29.109959 kubelet[2080]: E0317 18:21:29.109900 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:30.111033 kubelet[2080]: E0317 18:21:30.110973 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:31.112077 kubelet[2080]: E0317 18:21:31.111997 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:32.113194 kubelet[2080]: E0317 18:21:32.113112 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:33.114328 kubelet[2080]: E0317 18:21:33.114269 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:34.114722 kubelet[2080]: E0317 18:21:34.114668 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:35.115808 kubelet[2080]: E0317 18:21:35.115744 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:35.624140 systemd[1]: Created slice kubepods-besteffort-pod7f4a919e_f5df_423f_b7a3_a23aa3bf1a5c.slice. 
Mar 17 18:21:35.672450 kubelet[2080]: I0317 18:21:35.672399 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7f4a919e-f5df-423f-b7a3-a23aa3bf1a5c-data\") pod \"nfs-server-provisioner-0\" (UID: \"7f4a919e-f5df-423f-b7a3-a23aa3bf1a5c\") " pod="default/nfs-server-provisioner-0" Mar 17 18:21:35.672688 kubelet[2080]: I0317 18:21:35.672471 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78hfl\" (UniqueName: \"kubernetes.io/projected/7f4a919e-f5df-423f-b7a3-a23aa3bf1a5c-kube-api-access-78hfl\") pod \"nfs-server-provisioner-0\" (UID: \"7f4a919e-f5df-423f-b7a3-a23aa3bf1a5c\") " pod="default/nfs-server-provisioner-0" Mar 17 18:21:35.931272 env[1727]: time="2025-03-17T18:21:35.930593755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7f4a919e-f5df-423f-b7a3-a23aa3bf1a5c,Namespace:default,Attempt:0,}" Mar 17 18:21:35.982567 (udev-worker)[3404]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:21:35.982567 (udev-worker)[3403]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:21:35.985501 systemd-networkd[1457]: lxc2fbc84465767: Link UP Mar 17 18:21:36.000742 kernel: eth0: renamed from tmpe7d4e Mar 17 18:21:36.008963 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:21:36.009133 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2fbc84465767: link becomes ready Mar 17 18:21:36.009403 systemd-networkd[1457]: lxc2fbc84465767: Gained carrier Mar 17 18:21:36.116970 kubelet[2080]: E0317 18:21:36.116896 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:36.346860 env[1727]: time="2025-03-17T18:21:36.346688280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:21:36.346860 env[1727]: time="2025-03-17T18:21:36.346782900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:21:36.347178 env[1727]: time="2025-03-17T18:21:36.346821072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:21:36.347687 env[1727]: time="2025-03-17T18:21:36.347575944Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7d4ed6be9e849274d550427103c923bf83b266fefb102ee9f8a358b57b08622 pid=3436 runtime=io.containerd.runc.v2 Mar 17 18:21:36.386769 systemd[1]: Started cri-containerd-e7d4ed6be9e849274d550427103c923bf83b266fefb102ee9f8a358b57b08622.scope. Mar 17 18:21:36.459581 env[1727]: time="2025-03-17T18:21:36.459526238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7f4a919e-f5df-423f-b7a3-a23aa3bf1a5c,Namespace:default,Attempt:0,} returns sandbox id \"e7d4ed6be9e849274d550427103c923bf83b266fefb102ee9f8a358b57b08622\"" Mar 17 18:21:36.463436 env[1727]: time="2025-03-17T18:21:36.463385437Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 18:21:36.793642 systemd[1]: run-containerd-runc-k8s.io-e7d4ed6be9e849274d550427103c923bf83b266fefb102ee9f8a358b57b08622-runc.If3c1Q.mount: Deactivated successfully. 
Mar 17 18:21:37.117881 kubelet[2080]: E0317 18:21:37.117818 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:37.288101 systemd-networkd[1457]: lxc2fbc84465767: Gained IPv6LL Mar 17 18:21:38.118571 kubelet[2080]: E0317 18:21:38.118509 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:39.119453 kubelet[2080]: E0317 18:21:39.119397 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:39.842887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873604483.mount: Deactivated successfully. Mar 17 18:21:40.120145 kubelet[2080]: E0317 18:21:40.119740 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:41.120777 kubelet[2080]: E0317 18:21:41.120706 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:42.121402 kubelet[2080]: E0317 18:21:42.121334 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:43.122396 kubelet[2080]: E0317 18:21:43.122304 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:43.446585 env[1727]: time="2025-03-17T18:21:43.446179529Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:43.451833 env[1727]: time="2025-03-17T18:21:43.451778357Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:43.454890 env[1727]: time="2025-03-17T18:21:43.454827437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:43.458759 env[1727]: time="2025-03-17T18:21:43.458709256Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:43.460210 env[1727]: time="2025-03-17T18:21:43.460163524Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Mar 17 18:21:43.465479 env[1727]: time="2025-03-17T18:21:43.465424864Z" level=info msg="CreateContainer within sandbox \"e7d4ed6be9e849274d550427103c923bf83b266fefb102ee9f8a358b57b08622\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 18:21:43.486941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3350603323.mount: Deactivated successfully. 
Mar 17 18:21:43.505619 env[1727]: time="2025-03-17T18:21:43.505548877Z" level=info msg="CreateContainer within sandbox \"e7d4ed6be9e849274d550427103c923bf83b266fefb102ee9f8a358b57b08622\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"69f1a9136bd22fa222347fa3112acbdb38b28e3fbdbf3fddb668a7b0e7672853\"" Mar 17 18:21:43.507195 env[1727]: time="2025-03-17T18:21:43.507127477Z" level=info msg="StartContainer for \"69f1a9136bd22fa222347fa3112acbdb38b28e3fbdbf3fddb668a7b0e7672853\"" Mar 17 18:21:43.554928 systemd[1]: Started cri-containerd-69f1a9136bd22fa222347fa3112acbdb38b28e3fbdbf3fddb668a7b0e7672853.scope. Mar 17 18:21:43.612429 env[1727]: time="2025-03-17T18:21:43.612279629Z" level=info msg="StartContainer for \"69f1a9136bd22fa222347fa3112acbdb38b28e3fbdbf3fddb668a7b0e7672853\" returns successfully" Mar 17 18:21:44.123257 kubelet[2080]: E0317 18:21:44.123196 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:44.553897 kubelet[2080]: I0317 18:21:44.553715 2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.553399621 podStartE2EDuration="9.553693563s" podCreationTimestamp="2025-03-17 18:21:35 +0000 UTC" firstStartedPulling="2025-03-17 18:21:36.46220279 +0000 UTC m=+52.154787020" lastFinishedPulling="2025-03-17 18:21:43.462496732 +0000 UTC m=+59.155080962" observedRunningTime="2025-03-17 18:21:44.553593112 +0000 UTC m=+60.246177366" watchObservedRunningTime="2025-03-17 18:21:44.553693563 +0000 UTC m=+60.246277805" Mar 17 18:21:45.124104 kubelet[2080]: E0317 18:21:45.124048 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:46.073025 kubelet[2080]: E0317 18:21:46.072957 2080 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:46.124448 kubelet[2080]: E0317 18:21:46.124397 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:47.124850 kubelet[2080]: E0317 18:21:47.124806 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:48.127035 kubelet[2080]: E0317 18:21:48.125726 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:49.126140 kubelet[2080]: E0317 18:21:49.126094 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:50.127387 kubelet[2080]: E0317 18:21:50.127345 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:51.128807 kubelet[2080]: E0317 18:21:51.128735 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:52.129426 kubelet[2080]: E0317 18:21:52.129357 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:53.130343 kubelet[2080]: E0317 18:21:53.130306 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:53.892038 systemd[1]: Created slice 
kubepods-besteffort-pod20463898_ddc4_4287_9b86_ac2ecf1368c9.slice. Mar 17 18:21:53.989851 kubelet[2080]: I0317 18:21:53.989795 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kvw8\" (UniqueName: \"kubernetes.io/projected/20463898-ddc4-4287-9b86-ac2ecf1368c9-kube-api-access-7kvw8\") pod \"test-pod-1\" (UID: \"20463898-ddc4-4287-9b86-ac2ecf1368c9\") " pod="default/test-pod-1" Mar 17 18:21:53.990169 kubelet[2080]: I0317 18:21:53.990113 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-613a3433-41da-499a-aec4-c901f3a5af03\" (UniqueName: \"kubernetes.io/nfs/20463898-ddc4-4287-9b86-ac2ecf1368c9-pvc-613a3433-41da-499a-aec4-c901f3a5af03\") pod \"test-pod-1\" (UID: \"20463898-ddc4-4287-9b86-ac2ecf1368c9\") " pod="default/test-pod-1" Mar 17 18:21:54.129692 kernel: FS-Cache: Loaded Mar 17 18:21:54.132024 kubelet[2080]: E0317 18:21:54.131971 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:54.181420 kernel: RPC: Registered named UNIX socket transport module. Mar 17 18:21:54.181573 kernel: RPC: Registered udp transport module. Mar 17 18:21:54.181624 kernel: RPC: Registered tcp transport module. Mar 17 18:21:54.185149 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Mar 17 18:21:54.266720 kernel: FS-Cache: Netfs 'nfs' registered for caching Mar 17 18:21:54.522576 kernel: NFS: Registering the id_resolver key type Mar 17 18:21:54.522773 kernel: Key type id_resolver registered Mar 17 18:21:54.524582 kernel: Key type id_legacy registered Mar 17 18:21:54.686307 nfsidmap[3557]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Mar 17 18:21:54.692589 nfsidmap[3558]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Mar 17 18:21:54.799512 env[1727]: time="2025-03-17T18:21:54.798733639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:20463898-ddc4-4287-9b86-ac2ecf1368c9,Namespace:default,Attempt:0,}" Mar 17 18:21:54.850920 (udev-worker)[3549]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:21:54.851069 systemd-networkd[1457]: lxc04ab8ea1fa6a: Link UP Mar 17 18:21:54.866725 kernel: eth0: renamed from tmpabb5a Mar 17 18:21:54.875876 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:21:54.876030 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc04ab8ea1fa6a: link becomes ready Mar 17 18:21:54.876356 systemd-networkd[1457]: lxc04ab8ea1fa6a: Gained carrier Mar 17 18:21:55.132665 kubelet[2080]: E0317 18:21:55.132590 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:55.162388 env[1727]: time="2025-03-17T18:21:55.162032310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:21:55.162388 env[1727]: time="2025-03-17T18:21:55.162110898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:21:55.162388 env[1727]: time="2025-03-17T18:21:55.162144402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:21:55.162721 env[1727]: time="2025-03-17T18:21:55.162491046Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/abb5adea804c99b6c12ea96f9dba348cfb1587412e1715cc8718f8fa5c6db669 pid=3583 runtime=io.containerd.runc.v2 Mar 17 18:21:55.194580 systemd[1]: run-containerd-runc-k8s.io-abb5adea804c99b6c12ea96f9dba348cfb1587412e1715cc8718f8fa5c6db669-runc.89Gd2V.mount: Deactivated successfully. Mar 17 18:21:55.204249 systemd[1]: Started cri-containerd-abb5adea804c99b6c12ea96f9dba348cfb1587412e1715cc8718f8fa5c6db669.scope. Mar 17 18:21:55.275388 env[1727]: time="2025-03-17T18:21:55.275327615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:20463898-ddc4-4287-9b86-ac2ecf1368c9,Namespace:default,Attempt:0,} returns sandbox id \"abb5adea804c99b6c12ea96f9dba348cfb1587412e1715cc8718f8fa5c6db669\"" Mar 17 18:21:55.278490 env[1727]: time="2025-03-17T18:21:55.278412863Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 18:21:55.644998 env[1727]: time="2025-03-17T18:21:55.644944646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:55.650422 env[1727]: time="2025-03-17T18:21:55.650362898Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:55.654188 env[1727]: time="2025-03-17T18:21:55.654142237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:55.657933 env[1727]: time="2025-03-17T18:21:55.657871345Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:55.659550 env[1727]: time="2025-03-17T18:21:55.659476393Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 18:21:55.664540 env[1727]: time="2025-03-17T18:21:55.664488565Z" level=info msg="CreateContainer within sandbox \"abb5adea804c99b6c12ea96f9dba348cfb1587412e1715cc8718f8fa5c6db669\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 18:21:55.690462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801822508.mount: Deactivated successfully. Mar 17 18:21:55.700740 env[1727]: time="2025-03-17T18:21:55.700648799Z" level=info msg="CreateContainer within sandbox \"abb5adea804c99b6c12ea96f9dba348cfb1587412e1715cc8718f8fa5c6db669\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"105207905f835a497483dd1db33b3b7ec21af2dd630c3b27b1daf3e585112830\"" Mar 17 18:21:55.702026 env[1727]: time="2025-03-17T18:21:55.701979959Z" level=info msg="StartContainer for \"105207905f835a497483dd1db33b3b7ec21af2dd630c3b27b1daf3e585112830\"" Mar 17 18:21:55.732531 systemd[1]: Started cri-containerd-105207905f835a497483dd1db33b3b7ec21af2dd630c3b27b1daf3e585112830.scope. 
Mar 17 18:21:55.789944 env[1727]: time="2025-03-17T18:21:55.789878010Z" level=info msg="StartContainer for \"105207905f835a497483dd1db33b3b7ec21af2dd630c3b27b1daf3e585112830\" returns successfully" Mar 17 18:21:56.133718 kubelet[2080]: E0317 18:21:56.133634 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:56.231859 systemd-networkd[1457]: lxc04ab8ea1fa6a: Gained IPv6LL Mar 17 18:21:56.584038 kubelet[2080]: I0317 18:21:56.583963 2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=20.199277594 podStartE2EDuration="20.583938172s" podCreationTimestamp="2025-03-17 18:21:36 +0000 UTC" firstStartedPulling="2025-03-17 18:21:55.277676267 +0000 UTC m=+70.970260497" lastFinishedPulling="2025-03-17 18:21:55.662336857 +0000 UTC m=+71.354921075" observedRunningTime="2025-03-17 18:21:56.583808728 +0000 UTC m=+72.276392958" watchObservedRunningTime="2025-03-17 18:21:56.583938172 +0000 UTC m=+72.276522402" Mar 17 18:21:57.134426 kubelet[2080]: E0317 18:21:57.134361 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:58.135202 kubelet[2080]: E0317 18:21:58.135139 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:21:59.017427 amazon-ssm-agent[1765]: 2025-03-17 18:21:59 INFO [HealthCheck] HealthCheck reporting agent health. Mar 17 18:21:59.136317 kubelet[2080]: E0317 18:21:59.136251 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:00.136765 kubelet[2080]: E0317 18:22:00.136709 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:01.138088 kubelet[2080]: E0317 18:22:01.138032 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:02.138456 kubelet[2080]: E0317 18:22:02.138418 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:02.251384 systemd[1]: run-containerd-runc-k8s.io-60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff-runc.S3bVHT.mount: Deactivated successfully. Mar 17 18:22:02.283174 env[1727]: time="2025-03-17T18:22:02.283094260Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:22:02.294054 env[1727]: time="2025-03-17T18:22:02.293995948Z" level=info msg="StopContainer for \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\" with timeout 2 (s)" Mar 17 18:22:02.294515 env[1727]: time="2025-03-17T18:22:02.294475060Z" level=info msg="Stop container \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\" with signal terminated" Mar 17 18:22:02.310241 systemd-networkd[1457]: lxc_health: Link DOWN Mar 17 18:22:02.310255 systemd-networkd[1457]: lxc_health: Lost carrier Mar 17 18:22:02.335218 systemd[1]: cri-containerd-60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff.scope: Deactivated successfully. 
Mar 17 18:22:02.335803 systemd[1]: cri-containerd-60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff.scope: Consumed 13.976s CPU time. Mar 17 18:22:02.372208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff-rootfs.mount: Deactivated successfully. Mar 17 18:22:02.666371 env[1727]: time="2025-03-17T18:22:02.666291848Z" level=info msg="shim disconnected" id=60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff Mar 17 18:22:02.666755 env[1727]: time="2025-03-17T18:22:02.666372212Z" level=warning msg="cleaning up after shim disconnected" id=60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff namespace=k8s.io Mar 17 18:22:02.666755 env[1727]: time="2025-03-17T18:22:02.666394364Z" level=info msg="cleaning up dead shim" Mar 17 18:22:02.679257 env[1727]: time="2025-03-17T18:22:02.679181888Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3713 runtime=io.containerd.runc.v2\n" Mar 17 18:22:02.682449 env[1727]: time="2025-03-17T18:22:02.682385251Z" level=info msg="StopContainer for \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\" returns successfully" Mar 17 18:22:02.683442 env[1727]: time="2025-03-17T18:22:02.683395807Z" level=info msg="StopPodSandbox for \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\"" Mar 17 18:22:02.683819 env[1727]: time="2025-03-17T18:22:02.683768767Z" level=info msg="Container to stop \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:02.684001 env[1727]: time="2025-03-17T18:22:02.683967031Z" level=info msg="Container to stop \"5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:02.684151 env[1727]: time="2025-03-17T18:22:02.684117847Z" level=info msg="Container to stop \"d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:02.684297 env[1727]: time="2025-03-17T18:22:02.684264079Z" level=info msg="Container to stop \"2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:02.684439 env[1727]: time="2025-03-17T18:22:02.684407527Z" level=info msg="Container to stop \"1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:02.687913 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc-shm.mount: Deactivated successfully. Mar 17 18:22:02.699478 systemd[1]: cri-containerd-3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc.scope: Deactivated successfully. 
Mar 17 18:22:02.736249 env[1727]: time="2025-03-17T18:22:02.736172345Z" level=info msg="shim disconnected" id=3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc Mar 17 18:22:02.736249 env[1727]: time="2025-03-17T18:22:02.736243433Z" level=warning msg="cleaning up after shim disconnected" id=3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc namespace=k8s.io Mar 17 18:22:02.736574 env[1727]: time="2025-03-17T18:22:02.736266197Z" level=info msg="cleaning up dead shim" Mar 17 18:22:02.750684 env[1727]: time="2025-03-17T18:22:02.750553528Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3747 runtime=io.containerd.runc.v2\n" Mar 17 18:22:02.751239 env[1727]: time="2025-03-17T18:22:02.751175344Z" level=info msg="TearDown network for sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" successfully" Mar 17 18:22:02.751347 env[1727]: time="2025-03-17T18:22:02.751229296Z" level=info msg="StopPodSandbox for \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" returns successfully" Mar 17 18:22:02.950321 kubelet[2080]: I0317 18:22:02.948969 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-etc-cni-netd\") pod \"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.950321 kubelet[2080]: I0317 18:22:02.949790 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5bd26170-28c9-48f8-a100-0f124a02c2a8-clustermesh-secrets\") pod \"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.950321 kubelet[2080]: I0317 18:22:02.949858 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-host-proc-sys-net\") pod \"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.950321 kubelet[2080]: I0317 18:22:02.949915 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-cilium-cgroup\") pod \"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.950321 kubelet[2080]: I0317 18:22:02.949961 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-hostproc\") pod \"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.950321 kubelet[2080]: I0317 18:22:02.950025 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bd26170-28c9-48f8-a100-0f124a02c2a8-cilium-config-path\") pod \"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.950885 kubelet[2080]: I0317 18:22:02.950063 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-host-proc-sys-kernel\") pod 
\"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.950885 kubelet[2080]: I0317 18:22:02.950126 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5bd26170-28c9-48f8-a100-0f124a02c2a8-hubble-tls\") pod \"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.950885 kubelet[2080]: I0317 18:22:02.950188 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-lib-modules\") pod \"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.950885 kubelet[2080]: I0317 18:22:02.950232 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rslp8\" (UniqueName: \"kubernetes.io/projected/5bd26170-28c9-48f8-a100-0f124a02c2a8-kube-api-access-rslp8\") pod \"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.950885 kubelet[2080]: I0317 18:22:02.950293 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-cilium-run\") pod \"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.950885 kubelet[2080]: I0317 18:22:02.950352 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-xtables-lock\") pod \"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.951246 kubelet[2080]: I0317 18:22:02.950393 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-cni-path\") pod \"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.951246 kubelet[2080]: I0317 18:22:02.950452 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-bpf-maps\") pod \"5bd26170-28c9-48f8-a100-0f124a02c2a8\" (UID: \"5bd26170-28c9-48f8-a100-0f124a02c2a8\") " Mar 17 18:22:02.951246 kubelet[2080]: I0317 18:22:02.950594 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.951246 kubelet[2080]: I0317 18:22:02.950704 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.953103 kubelet[2080]: I0317 18:22:02.953048 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.953313 kubelet[2080]: I0317 18:22:02.953048 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.953469 kubelet[2080]: I0317 18:22:02.953440 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.957643 kubelet[2080]: I0317 18:22:02.953678 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-hostproc" (OuterVolumeSpecName: "hostproc") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.961017 kubelet[2080]: I0317 18:22:02.957474 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.961017 kubelet[2080]: I0317 18:22:02.957521 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.961017 kubelet[2080]: I0317 18:22:02.957578 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-cni-path" (OuterVolumeSpecName: "cni-path") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.961358 kubelet[2080]: I0317 18:22:02.961316 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.961610 kubelet[2080]: I0317 18:22:02.961579 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd26170-28c9-48f8-a100-0f124a02c2a8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:22:02.962013 kubelet[2080]: I0317 18:22:02.961980 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd26170-28c9-48f8-a100-0f124a02c2a8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:02.963016 kubelet[2080]: I0317 18:22:02.962947 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd26170-28c9-48f8-a100-0f124a02c2a8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:22:02.965253 kubelet[2080]: I0317 18:22:02.965188 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd26170-28c9-48f8-a100-0f124a02c2a8-kube-api-access-rslp8" (OuterVolumeSpecName: "kube-api-access-rslp8") pod "5bd26170-28c9-48f8-a100-0f124a02c2a8" (UID: "5bd26170-28c9-48f8-a100-0f124a02c2a8"). InnerVolumeSpecName "kube-api-access-rslp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:03.051516 kubelet[2080]: I0317 18:22:03.051479 2080 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-cilium-run\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.051753 kubelet[2080]: I0317 18:22:03.051729 2080 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-xtables-lock\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.051883 kubelet[2080]: I0317 18:22:03.051861 2080 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-cni-path\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.052001 kubelet[2080]: I0317 18:22:03.051980 2080 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-bpf-maps\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.052118 kubelet[2080]: I0317 18:22:03.052097 2080 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-etc-cni-netd\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.052235 kubelet[2080]: I0317 18:22:03.052208 2080 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-host-proc-sys-kernel\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.052365 kubelet[2080]: I0317 18:22:03.052343 2080 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5bd26170-28c9-48f8-a100-0f124a02c2a8-hubble-tls\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.052475 kubelet[2080]: I0317 18:22:03.052453 2080 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5bd26170-28c9-48f8-a100-0f124a02c2a8-clustermesh-secrets\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.052592 kubelet[2080]: I0317 18:22:03.052571 2080 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-host-proc-sys-net\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.052756 kubelet[2080]: I0317 18:22:03.052735 2080 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-cilium-cgroup\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.052887 kubelet[2080]: I0317 18:22:03.052865 2080 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-hostproc\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.053008 kubelet[2080]: I0317 18:22:03.052986 2080 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bd26170-28c9-48f8-a100-0f124a02c2a8-cilium-config-path\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.053124 kubelet[2080]: I0317 18:22:03.053103 2080 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bd26170-28c9-48f8-a100-0f124a02c2a8-lib-modules\") on node 
\"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.053234 kubelet[2080]: I0317 18:22:03.053213 2080 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rslp8\" (UniqueName: \"kubernetes.io/projected/5bd26170-28c9-48f8-a100-0f124a02c2a8-kube-api-access-rslp8\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:03.139434 kubelet[2080]: E0317 18:22:03.139404 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:03.237980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc-rootfs.mount: Deactivated successfully. Mar 17 18:22:03.238148 systemd[1]: var-lib-kubelet-pods-5bd26170\x2d28c9\x2d48f8\x2da100\x2d0f124a02c2a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drslp8.mount: Deactivated successfully. Mar 17 18:22:03.238289 systemd[1]: var-lib-kubelet-pods-5bd26170\x2d28c9\x2d48f8\x2da100\x2d0f124a02c2a8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:22:03.238427 systemd[1]: var-lib-kubelet-pods-5bd26170\x2d28c9\x2d48f8\x2da100\x2d0f124a02c2a8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:22:03.595469 kubelet[2080]: I0317 18:22:03.595433 2080 scope.go:117] "RemoveContainer" containerID="60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff" Mar 17 18:22:03.601769 env[1727]: time="2025-03-17T18:22:03.601282407Z" level=info msg="RemoveContainer for \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\"" Mar 17 18:22:03.604953 systemd[1]: Removed slice kubepods-burstable-pod5bd26170_28c9_48f8_a100_0f124a02c2a8.slice. Mar 17 18:22:03.605135 systemd[1]: kubepods-burstable-pod5bd26170_28c9_48f8_a100_0f124a02c2a8.slice: Consumed 14.204s CPU time. 
Mar 17 18:22:03.609196 env[1727]: time="2025-03-17T18:22:03.609113679Z" level=info msg="RemoveContainer for \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\" returns successfully" Mar 17 18:22:03.610062 kubelet[2080]: I0317 18:22:03.609876 2080 scope.go:117] "RemoveContainer" containerID="2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5" Mar 17 18:22:03.614546 env[1727]: time="2025-03-17T18:22:03.614101935Z" level=info msg="RemoveContainer for \"2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5\"" Mar 17 18:22:03.618013 env[1727]: time="2025-03-17T18:22:03.617956910Z" level=info msg="RemoveContainer for \"2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5\" returns successfully" Mar 17 18:22:03.618913 kubelet[2080]: I0317 18:22:03.618874 2080 scope.go:117] "RemoveContainer" containerID="d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d" Mar 17 18:22:03.621017 env[1727]: time="2025-03-17T18:22:03.620950562Z" level=info msg="RemoveContainer for \"d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d\"" Mar 17 18:22:03.625706 env[1727]: time="2025-03-17T18:22:03.625594418Z" level=info msg="RemoveContainer for \"d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d\" returns successfully" Mar 17 18:22:03.626253 kubelet[2080]: I0317 18:22:03.626202 2080 scope.go:117] "RemoveContainer" containerID="5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8" Mar 17 18:22:03.629113 env[1727]: time="2025-03-17T18:22:03.629063798Z" level=info msg="RemoveContainer for \"5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8\"" Mar 17 18:22:03.633445 env[1727]: time="2025-03-17T18:22:03.633387530Z" level=info msg="RemoveContainer for \"5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8\" returns successfully" Mar 17 18:22:03.633956 kubelet[2080]: I0317 18:22:03.633912 2080 scope.go:117] "RemoveContainer" containerID="1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22" Mar 17 18:22:03.636161 env[1727]: time="2025-03-17T18:22:03.636104714Z" level=info msg="RemoveContainer for \"1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22\"" Mar 17 18:22:03.640913 env[1727]: time="2025-03-17T18:22:03.640816285Z" level=info msg="RemoveContainer for \"1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22\" returns successfully" Mar 17 18:22:03.641225 kubelet[2080]: I0317 18:22:03.641189 2080 scope.go:117] "RemoveContainer" containerID="60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff" Mar 17 18:22:03.641694 env[1727]: time="2025-03-17T18:22:03.641560909Z" level=error msg="ContainerStatus for \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\": not found" Mar 17 18:22:03.642111 kubelet[2080]: E0317 18:22:03.642068 2080 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\": not found" containerID="60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff" Mar 17 18:22:03.642248 kubelet[2080]: I0317 18:22:03.642127 2080 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff"} err="failed to 
get container status \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\": rpc error: code = NotFound desc = an error occurred when try to find container \"60dee22abe2da40657917e8c3303c75767a9dbb48a2e4e9c2e42c505f4fc5bff\": not found" Mar 17 18:22:03.642337 kubelet[2080]: I0317 18:22:03.642253 2080 scope.go:117] "RemoveContainer" containerID="2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5" Mar 17 18:22:03.642825 env[1727]: time="2025-03-17T18:22:03.642714925Z" level=error msg="ContainerStatus for \"2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5\": not found" Mar 17 18:22:03.643266 kubelet[2080]: E0317 18:22:03.643215 2080 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5\": not found" containerID="2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5" Mar 17 18:22:03.643451 kubelet[2080]: I0317 18:22:03.643266 2080 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5"} err="failed to get container status \"2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"2804f8abc0cd153edd5823d597c9c28c140fe0b44d72b81ff8fbfd0e116437f5\": not found" Mar 17 18:22:03.643451 kubelet[2080]: I0317 18:22:03.643298 2080 scope.go:117] "RemoveContainer" containerID="d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d" Mar 17 18:22:03.643805 env[1727]: time="2025-03-17T18:22:03.643692193Z" level=error msg="ContainerStatus for \"d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d\": not found" Mar 17 18:22:03.644100 kubelet[2080]: E0317 18:22:03.644061 2080 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d\": not found" containerID="d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d" Mar 17 18:22:03.644207 kubelet[2080]: I0317 18:22:03.644111 2080 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d"} err="failed to get container status \"d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9e945772b4949d6c42b3b00443e01461d235202d0d65f80ea2f75d62ae0ed2d\": not found" Mar 17 18:22:03.644207 kubelet[2080]: I0317 18:22:03.644159 2080 scope.go:117] "RemoveContainer" containerID="5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8" Mar 17 18:22:03.644683 env[1727]: time="2025-03-17T18:22:03.644566045Z" level=error msg="ContainerStatus for \"5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8\": not found" Mar 17 18:22:03.645121 kubelet[2080]: E0317 18:22:03.645080 2080 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8\": not found" containerID="5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8" Mar 17 18:22:03.645230 kubelet[2080]: I0317 18:22:03.645144 2080 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8"} err="failed to get container status \"5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"5da6b64b9cda0ae1a56e5604aec8e0d4009df08e18f9a7335cc5b0fa13f7b1e8\": not found" Mar 17 18:22:03.645230 kubelet[2080]: I0317 18:22:03.645179 2080 scope.go:117] "RemoveContainer" containerID="1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22" Mar 17 18:22:03.645693 env[1727]: time="2025-03-17T18:22:03.645530881Z" level=error msg="ContainerStatus for \"1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22\": not found" Mar 17 18:22:03.645945 kubelet[2080]: E0317 18:22:03.645907 2080 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22\": not found" containerID="1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22" Mar 17 18:22:03.646061 kubelet[2080]: I0317 18:22:03.645956 2080 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22"} err="failed to get container status \"1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e6e782522b3e26f840a504ea69a22efebc32460ea2d80294b01bb7dd23dcd22\": not found" Mar 17 18:22:04.140795 kubelet[2080]: E0317 18:22:04.140731 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:04.310414 kubelet[2080]: I0317 18:22:04.310371 2080 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd26170-28c9-48f8-a100-0f124a02c2a8" path="/var/lib/kubelet/pods/5bd26170-28c9-48f8-a100-0f124a02c2a8/volumes" Mar 17 18:22:05.141756 kubelet[2080]: E0317 18:22:05.141722 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:05.771984 kubelet[2080]: E0317 18:22:05.771926 2080 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5bd26170-28c9-48f8-a100-0f124a02c2a8" containerName="cilium-agent" Mar 17 18:22:05.771984 kubelet[2080]: E0317 18:22:05.771972 2080 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5bd26170-28c9-48f8-a100-0f124a02c2a8" containerName="mount-bpf-fs" Mar 17 18:22:05.771984 kubelet[2080]: E0317 18:22:05.771989 2080 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5bd26170-28c9-48f8-a100-0f124a02c2a8" containerName="clean-cilium-state" 
Mar 17 18:22:05.772297 kubelet[2080]: E0317 18:22:05.772007 2080 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5bd26170-28c9-48f8-a100-0f124a02c2a8" containerName="mount-cgroup" Mar 17 18:22:05.772297 kubelet[2080]: E0317 18:22:05.772022 2080 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5bd26170-28c9-48f8-a100-0f124a02c2a8" containerName="apply-sysctl-overwrites" Mar 17 18:22:05.772297 kubelet[2080]: I0317 18:22:05.772059 2080 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd26170-28c9-48f8-a100-0f124a02c2a8" containerName="cilium-agent" Mar 17 18:22:05.780969 systemd[1]: Created slice kubepods-besteffort-pod66d8a739_5cbd_4d9a_93e7_c4d9961a9804.slice. Mar 17 18:22:05.832189 systemd[1]: Created slice kubepods-burstable-pod75d01c63_7df1_4a4a_92c3_521a01457c3d.slice. Mar 17 18:22:05.969753 kubelet[2080]: I0317 18:22:05.969692 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-run\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:05.969753 kubelet[2080]: I0317 18:22:05.969753 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-lib-modules\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:05.969998 kubelet[2080]: I0317 18:22:05.969795 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-xtables-lock\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:05.969998 kubelet[2080]: I0317 18:22:05.969837 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66d8a739-5cbd-4d9a-93e7-c4d9961a9804-cilium-config-path\") pod \"cilium-operator-5d85765b45-ldfsk\" (UID: \"66d8a739-5cbd-4d9a-93e7-c4d9961a9804\") " pod="kube-system/cilium-operator-5d85765b45-ldfsk" Mar 17 18:22:05.969998 kubelet[2080]: I0317 18:22:05.969873 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfml6\" (UniqueName: \"kubernetes.io/projected/66d8a739-5cbd-4d9a-93e7-c4d9961a9804-kube-api-access-mfml6\") pod \"cilium-operator-5d85765b45-ldfsk\" (UID: \"66d8a739-5cbd-4d9a-93e7-c4d9961a9804\") " pod="kube-system/cilium-operator-5d85765b45-ldfsk" Mar 17 18:22:05.969998 kubelet[2080]: I0317 18:22:05.969910 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-etc-cni-netd\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:05.969998 kubelet[2080]: I0317 18:22:05.969947 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-config-path\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 
17 18:22:05.970299 kubelet[2080]: I0317 18:22:05.969981 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-host-proc-sys-kernel\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:05.970299 kubelet[2080]: I0317 18:22:05.970027 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75d01c63-7df1-4a4a-92c3-521a01457c3d-hubble-tls\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:05.970299 kubelet[2080]: I0317 18:22:05.970064 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9n8q\" (UniqueName: \"kubernetes.io/projected/75d01c63-7df1-4a4a-92c3-521a01457c3d-kube-api-access-j9n8q\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:05.970299 kubelet[2080]: I0317 18:22:05.970102 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-cni-path\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:05.970299 kubelet[2080]: I0317 18:22:05.970138 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-ipsec-secrets\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:05.970596 kubelet[2080]: I0317 18:22:05.970176 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-host-proc-sys-net\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:05.970596 kubelet[2080]: I0317 18:22:05.970217 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-bpf-maps\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:05.970596 kubelet[2080]: I0317 18:22:05.970249 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-hostproc\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:05.970596 kubelet[2080]: I0317 18:22:05.970281 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-cgroup\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:05.970596 kubelet[2080]: I0317 18:22:05.970319 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/75d01c63-7df1-4a4a-92c3-521a01457c3d-clustermesh-secrets\") pod \"cilium-rr5cf\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " pod="kube-system/cilium-rr5cf" Mar 17 18:22:06.095386 kubelet[2080]: E0317 18:22:06.095346 2080 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:06.141498 env[1727]: time="2025-03-17T18:22:06.141428945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rr5cf,Uid:75d01c63-7df1-4a4a-92c3-521a01457c3d,Namespace:kube-system,Attempt:0,}" Mar 17 18:22:06.142472 kubelet[2080]: E0317 18:22:06.142441 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:06.162739 env[1727]: time="2025-03-17T18:22:06.162595768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:22:06.162952 env[1727]: time="2025-03-17T18:22:06.162737392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:22:06.162952 env[1727]: time="2025-03-17T18:22:06.162765940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:22:06.164526 env[1727]: time="2025-03-17T18:22:06.163006036Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c pid=3777 runtime=io.containerd.runc.v2 Mar 17 18:22:06.185927 systemd[1]: Started cri-containerd-748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c.scope. Mar 17 18:22:06.241706 env[1727]: time="2025-03-17T18:22:06.241606824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rr5cf,Uid:75d01c63-7df1-4a4a-92c3-521a01457c3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c\"" Mar 17 18:22:06.242295 kubelet[2080]: E0317 18:22:06.242199 2080 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:22:06.247396 env[1727]: time="2025-03-17T18:22:06.247340891Z" level=info msg="CreateContainer within sandbox \"748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:22:06.266461 env[1727]: time="2025-03-17T18:22:06.266366507Z" level=info msg="CreateContainer within sandbox \"748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8\"" Mar 17 18:22:06.267396 env[1727]: time="2025-03-17T18:22:06.267330394Z" level=info msg="StartContainer for \"2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8\"" Mar 17 18:22:06.298627 systemd[1]: Started cri-containerd-2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8.scope. Mar 17 18:22:06.322222 systemd[1]: cri-containerd-2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8.scope: Deactivated successfully. 
Mar 17 18:22:06.350514 env[1727]: time="2025-03-17T18:22:06.348717342Z" level=info msg="shim disconnected" id=2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8 Mar 17 18:22:06.350514 env[1727]: time="2025-03-17T18:22:06.348787338Z" level=warning msg="cleaning up after shim disconnected" id=2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8 namespace=k8s.io Mar 17 18:22:06.350514 env[1727]: time="2025-03-17T18:22:06.348807138Z" level=info msg="cleaning up dead shim" Mar 17 18:22:06.363929 env[1727]: time="2025-03-17T18:22:06.363851190Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3836 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:22:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:22:06.364441 env[1727]: time="2025-03-17T18:22:06.364298118Z" level=error msg="copy shim log" error="read /proc/self/fd/49: file already closed" Mar 17 18:22:06.364875 env[1727]: time="2025-03-17T18:22:06.364817058Z" level=error msg="Failed to pipe stdout of container \"2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8\"" error="reading from a closed fifo" Mar 17 18:22:06.365099 env[1727]: time="2025-03-17T18:22:06.365041914Z" level=error msg="Failed to pipe stderr of container \"2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8\"" error="reading from a closed fifo" Mar 17 18:22:06.368508 env[1727]: time="2025-03-17T18:22:06.368420993Z" level=error msg="StartContainer for \"2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:22:06.368844 kubelet[2080]: E0317 18:22:06.368782 2080 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8" Mar 17 18:22:06.371078 kubelet[2080]: E0317 18:22:06.370982 2080 kuberuntime_manager.go:1272] "Unhandled Error" err=< Mar 17 18:22:06.371078 kubelet[2080]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:22:06.371078 kubelet[2080]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 18:22:06.371078 kubelet[2080]: rm /hostbin/cilium-mount Mar 17 18:22:06.371461 kubelet[2080]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9n8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-rr5cf_kube-system(75d01c63-7df1-4a4a-92c3-521a01457c3d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:22:06.371461 kubelet[2080]: > logger="UnhandledError" Mar 17 18:22:06.372255 kubelet[2080]: E0317 18:22:06.372208 2080 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rr5cf" podUID="75d01c63-7df1-4a4a-92c3-521a01457c3d" Mar 17 18:22:06.387520 env[1727]: time="2025-03-17T18:22:06.387446860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ldfsk,Uid:66d8a739-5cbd-4d9a-93e7-c4d9961a9804,Namespace:kube-system,Attempt:0,}" Mar 17 18:22:06.412791 env[1727]: time="2025-03-17T18:22:06.412574475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:22:06.412791 env[1727]: time="2025-03-17T18:22:06.412648335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:22:06.412791 env[1727]: time="2025-03-17T18:22:06.412710267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:22:06.413357 env[1727]: time="2025-03-17T18:22:06.413184147Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79b9d8037560a5925dc672484825c134e8ad6667454cb078cec5488f825a47e0 pid=3857 runtime=io.containerd.runc.v2 Mar 17 18:22:06.436022 systemd[1]: Started cri-containerd-79b9d8037560a5925dc672484825c134e8ad6667454cb078cec5488f825a47e0.scope. Mar 17 18:22:06.502781 env[1727]: time="2025-03-17T18:22:06.502708451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ldfsk,Uid:66d8a739-5cbd-4d9a-93e7-c4d9961a9804,Namespace:kube-system,Attempt:0,} returns sandbox id \"79b9d8037560a5925dc672484825c134e8ad6667454cb078cec5488f825a47e0\"" Mar 17 18:22:06.505914 env[1727]: time="2025-03-17T18:22:06.505836790Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:22:06.610827 env[1727]: time="2025-03-17T18:22:06.608868053Z" level=info msg="StopPodSandbox for \"748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c\"" Mar 17 18:22:06.610827 env[1727]: time="2025-03-17T18:22:06.609791669Z" level=info msg="Container to stop \"2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:06.621998 systemd[1]: cri-containerd-748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c.scope: Deactivated successfully. Mar 17 18:22:06.674240 env[1727]: time="2025-03-17T18:22:06.674160398Z" level=info msg="shim disconnected" id=748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c Mar 17 18:22:06.674240 env[1727]: time="2025-03-17T18:22:06.674236802Z" level=warning msg="cleaning up after shim disconnected" id=748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c namespace=k8s.io Mar 17 18:22:06.674690 env[1727]: time="2025-03-17T18:22:06.674260214Z" level=info msg="cleaning up dead shim" Mar 17 18:22:06.689505 env[1727]: time="2025-03-17T18:22:06.689430889Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3909 runtime=io.containerd.runc.v2\n" Mar 17 18:22:06.690091 env[1727]: time="2025-03-17T18:22:06.690042061Z" level=info msg="TearDown network for sandbox \"748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c\" successfully" Mar 17 18:22:06.690238 env[1727]: time="2025-03-17T18:22:06.690092053Z" level=info msg="StopPodSandbox for \"748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c\" returns successfully" Mar 17 18:22:06.878725 kubelet[2080]: I0317 18:22:06.877812 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-config-path\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.878905 kubelet[2080]: I0317 18:22:06.878761 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-host-proc-sys-kernel\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.878905 kubelet[2080]: I0317 18:22:06.878804 2080 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-bpf-maps\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.878905 kubelet[2080]: I0317 18:22:06.878901 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-hostproc\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.879100 kubelet[2080]: I0317 18:22:06.878936 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-cni-path\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.879100 kubelet[2080]: I0317 18:22:06.878970 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-lib-modules\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.879100 kubelet[2080]: I0317 18:22:06.879003 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-etc-cni-netd\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.879100 kubelet[2080]: I0317 18:22:06.879040 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75d01c63-7df1-4a4a-92c3-521a01457c3d-hubble-tls\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.879100 kubelet[2080]: I0317 18:22:06.879072 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-run\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.879388 kubelet[2080]: I0317 18:22:06.879115 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-ipsec-secrets\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.879388 kubelet[2080]: I0317 18:22:06.879148 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-xtables-lock\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.879388 kubelet[2080]: I0317 18:22:06.879185 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9n8q\" (UniqueName: \"kubernetes.io/projected/75d01c63-7df1-4a4a-92c3-521a01457c3d-kube-api-access-j9n8q\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.879388 kubelet[2080]: I0317 18:22:06.879219 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-host-proc-sys-net\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.879388 kubelet[2080]: I0317 18:22:06.879252 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-cgroup\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.879388 kubelet[2080]: I0317 18:22:06.879289 2080 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75d01c63-7df1-4a4a-92c3-521a01457c3d-clustermesh-secrets\") pod \"75d01c63-7df1-4a4a-92c3-521a01457c3d\" (UID: \"75d01c63-7df1-4a4a-92c3-521a01457c3d\") " Mar 17 18:22:06.879928 kubelet[2080]: I0317 18:22:06.879860 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.885277 kubelet[2080]: I0317 18:22:06.885205 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.887904 kubelet[2080]: I0317 18:22:06.887848 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:22:06.888195 kubelet[2080]: I0317 18:22:06.888158 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.888367 kubelet[2080]: I0317 18:22:06.888339 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.888517 kubelet[2080]: I0317 18:22:06.888490 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-hostproc" (OuterVolumeSpecName: "hostproc") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.888691 kubelet[2080]: I0317 18:22:06.888635 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-cni-path" (OuterVolumeSpecName: "cni-path") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.888831 kubelet[2080]: I0317 18:22:06.888805 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.894386 kubelet[2080]: I0317 18:22:06.894294 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d01c63-7df1-4a4a-92c3-521a01457c3d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:06.894916 kubelet[2080]: I0317 18:22:06.894875 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d01c63-7df1-4a4a-92c3-521a01457c3d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:22:06.897213 kubelet[2080]: I0317 18:22:06.897135 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:22:06.897390 kubelet[2080]: I0317 18:22:06.897246 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.897390 kubelet[2080]: I0317 18:22:06.897293 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.897390 kubelet[2080]: I0317 18:22:06.897341 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.900992 kubelet[2080]: I0317 18:22:06.900919 2080 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d01c63-7df1-4a4a-92c3-521a01457c3d-kube-api-access-j9n8q" (OuterVolumeSpecName: "kube-api-access-j9n8q") pod "75d01c63-7df1-4a4a-92c3-521a01457c3d" (UID: "75d01c63-7df1-4a4a-92c3-521a01457c3d"). InnerVolumeSpecName "kube-api-access-j9n8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:06.980342 kubelet[2080]: I0317 18:22:06.980269 2080 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-host-proc-sys-net\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.980724 kubelet[2080]: I0317 18:22:06.980352 2080 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-cgroup\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.980724 kubelet[2080]: I0317 18:22:06.980375 2080 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75d01c63-7df1-4a4a-92c3-521a01457c3d-clustermesh-secrets\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.980724 kubelet[2080]: I0317 18:22:06.980423 2080 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-xtables-lock\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.980724 kubelet[2080]: I0317 18:22:06.980453 2080 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j9n8q\" (UniqueName: \"kubernetes.io/projected/75d01c63-7df1-4a4a-92c3-521a01457c3d-kube-api-access-j9n8q\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.980724 kubelet[2080]: I0317 18:22:06.980474 2080 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-config-path\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.980724 kubelet[2080]: I0317 18:22:06.980521 2080 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-host-proc-sys-kernel\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.980724 kubelet[2080]: I0317 18:22:06.980544 2080 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-cni-path\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.980724 kubelet[2080]: I0317 18:22:06.980564 2080 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-bpf-maps\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.980724 kubelet[2080]: I0317 18:22:06.980611 2080 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-hostproc\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.980724 kubelet[2080]: I0317 18:22:06.980634 2080 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-lib-modules\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.980724 
kubelet[2080]: I0317 18:22:06.980686 2080 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-etc-cni-netd\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.980724 kubelet[2080]: I0317 18:22:06.980712 2080 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75d01c63-7df1-4a4a-92c3-521a01457c3d-hubble-tls\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.980724 kubelet[2080]: I0317 18:22:06.980730 2080 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-run\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:06.981615 kubelet[2080]: I0317 18:22:06.980778 2080 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/75d01c63-7df1-4a4a-92c3-521a01457c3d-cilium-ipsec-secrets\") on node \"172.31.30.28\" DevicePath \"\"" Mar 17 18:22:07.095210 systemd[1]: var-lib-kubelet-pods-75d01c63\x2d7df1\x2d4a4a\x2d92c3\x2d521a01457c3d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj9n8q.mount: Deactivated successfully. Mar 17 18:22:07.095377 systemd[1]: var-lib-kubelet-pods-75d01c63\x2d7df1\x2d4a4a\x2d92c3\x2d521a01457c3d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:22:07.095508 systemd[1]: var-lib-kubelet-pods-75d01c63\x2d7df1\x2d4a4a\x2d92c3\x2d521a01457c3d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 18:22:07.095640 systemd[1]: var-lib-kubelet-pods-75d01c63\x2d7df1\x2d4a4a\x2d92c3\x2d521a01457c3d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:22:07.145720 kubelet[2080]: E0317 18:22:07.143390 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:07.612354 kubelet[2080]: I0317 18:22:07.612190 2080 scope.go:117] "RemoveContainer" containerID="2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8" Mar 17 18:22:07.615355 env[1727]: time="2025-03-17T18:22:07.615300983Z" level=info msg="RemoveContainer for \"2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8\"" Mar 17 18:22:07.621073 systemd[1]: Removed slice kubepods-burstable-pod75d01c63_7df1_4a4a_92c3_521a01457c3d.slice. Mar 17 18:22:07.625630 env[1727]: time="2025-03-17T18:22:07.625572322Z" level=info msg="RemoveContainer for \"2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8\" returns successfully" Mar 17 18:22:07.688852 kubelet[2080]: E0317 18:22:07.688812 2080 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75d01c63-7df1-4a4a-92c3-521a01457c3d" containerName="mount-cgroup" Mar 17 18:22:07.689077 kubelet[2080]: I0317 18:22:07.689053 2080 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d01c63-7df1-4a4a-92c3-521a01457c3d" containerName="mount-cgroup" Mar 17 18:22:07.700740 systemd[1]: Created slice kubepods-burstable-podd7081b54_b7c6_42ac_8aeb_f69fa06d4b41.slice. Mar 17 18:22:07.748160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3382405094.mount: Deactivated successfully. 
Mar 17 18:22:07.787867 kubelet[2080]: I0317 18:22:07.787813 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-cilium-config-path\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788033 kubelet[2080]: I0317 18:22:07.787906 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-host-proc-sys-kernel\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788033 kubelet[2080]: I0317 18:22:07.787947 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-cilium-run\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788033 kubelet[2080]: I0317 18:22:07.788011 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-etc-cni-netd\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788232 kubelet[2080]: I0317 18:22:07.788070 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-lib-modules\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788232 kubelet[2080]: I0317 18:22:07.788114 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqs9l\" (UniqueName: \"kubernetes.io/projected/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-kube-api-access-cqs9l\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788232 kubelet[2080]: I0317 18:22:07.788175 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-bpf-maps\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788427 kubelet[2080]: I0317 18:22:07.788215 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-cilium-cgroup\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788427 kubelet[2080]: I0317 18:22:07.788276 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-cni-path\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788427 kubelet[2080]: I0317 18:22:07.788340 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-hubble-tls\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788427 kubelet[2080]: I0317 18:22:07.788385 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-hostproc\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788835 kubelet[2080]: I0317 18:22:07.788446 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-cilium-ipsec-secrets\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788835 kubelet[2080]: I0317 18:22:07.788484 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-xtables-lock\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788835 kubelet[2080]: I0317 18:22:07.788544 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-clustermesh-secrets\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:07.788835 kubelet[2080]: I0317 18:22:07.788584 2080 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7081b54-b7c6-42ac-8aeb-f69fa06d4b41-host-proc-sys-net\") pod \"cilium-mmfzm\" (UID: \"d7081b54-b7c6-42ac-8aeb-f69fa06d4b41\") " pod="kube-system/cilium-mmfzm" Mar 17 18:22:08.013565 env[1727]: time="2025-03-17T18:22:08.013428719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mmfzm,Uid:d7081b54-b7c6-42ac-8aeb-f69fa06d4b41,Namespace:kube-system,Attempt:0,}" Mar 17 18:22:08.049212 env[1727]: time="2025-03-17T18:22:08.049040373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:22:08.049362 env[1727]: time="2025-03-17T18:22:08.049235709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:22:08.049362 env[1727]: time="2025-03-17T18:22:08.049324089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:22:08.049821 env[1727]: time="2025-03-17T18:22:08.049734057Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e00fce69843ad8cb8d427028c2bf89e66880f994f3b66ee61ea8ffa40faa24ef pid=3942 runtime=io.containerd.runc.v2 Mar 17 18:22:08.070767 systemd[1]: Started cri-containerd-e00fce69843ad8cb8d427028c2bf89e66880f994f3b66ee61ea8ffa40faa24ef.scope. 
Mar 17 18:22:08.135947 env[1727]: time="2025-03-17T18:22:08.135876493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mmfzm,Uid:d7081b54-b7c6-42ac-8aeb-f69fa06d4b41,Namespace:kube-system,Attempt:0,} returns sandbox id \"e00fce69843ad8cb8d427028c2bf89e66880f994f3b66ee61ea8ffa40faa24ef\"" Mar 17 18:22:08.140746 env[1727]: time="2025-03-17T18:22:08.140645720Z" level=info msg="CreateContainer within sandbox \"e00fce69843ad8cb8d427028c2bf89e66880f994f3b66ee61ea8ffa40faa24ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:22:08.143744 kubelet[2080]: E0317 18:22:08.143690 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:08.163938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2637352672.mount: Deactivated successfully. Mar 17 18:22:08.177750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1414468676.mount: Deactivated successfully. Mar 17 18:22:08.178437 env[1727]: time="2025-03-17T18:22:08.178344418Z" level=info msg="CreateContainer within sandbox \"e00fce69843ad8cb8d427028c2bf89e66880f994f3b66ee61ea8ffa40faa24ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"90a9ab917cfa1ec211421f562f3f5885147ca9d775e345fc7a179492dcdea45c\"" Mar 17 18:22:08.179614 env[1727]: time="2025-03-17T18:22:08.179535419Z" level=info msg="StartContainer for \"90a9ab917cfa1ec211421f562f3f5885147ca9d775e345fc7a179492dcdea45c\"" Mar 17 18:22:08.210122 systemd[1]: Started cri-containerd-90a9ab917cfa1ec211421f562f3f5885147ca9d775e345fc7a179492dcdea45c.scope. Mar 17 18:22:08.262393 env[1727]: time="2025-03-17T18:22:08.262328906Z" level=info msg="StartContainer for \"90a9ab917cfa1ec211421f562f3f5885147ca9d775e345fc7a179492dcdea45c\" returns successfully" Mar 17 18:22:08.275735 systemd[1]: cri-containerd-90a9ab917cfa1ec211421f562f3f5885147ca9d775e345fc7a179492dcdea45c.scope: Deactivated successfully. 
Mar 17 18:22:08.311095 kubelet[2080]: I0317 18:22:08.311033 2080 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d01c63-7df1-4a4a-92c3-521a01457c3d" path="/var/lib/kubelet/pods/75d01c63-7df1-4a4a-92c3-521a01457c3d/volumes" Mar 17 18:22:08.326556 kubelet[2080]: I0317 18:22:08.326476 2080 setters.go:600] "Node became not ready" node="172.31.30.28" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:22:08Z","lastTransitionTime":"2025-03-17T18:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 18:22:08.332726 env[1727]: time="2025-03-17T18:22:08.332619320Z" level=info msg="shim disconnected" id=90a9ab917cfa1ec211421f562f3f5885147ca9d775e345fc7a179492dcdea45c Mar 17 18:22:08.333120 env[1727]: time="2025-03-17T18:22:08.333073557Z" level=warning msg="cleaning up after shim disconnected" id=90a9ab917cfa1ec211421f562f3f5885147ca9d775e345fc7a179492dcdea45c namespace=k8s.io Mar 17 18:22:08.333272 env[1727]: time="2025-03-17T18:22:08.333243825Z" level=info msg="cleaning up dead shim" Mar 17 18:22:08.348621 env[1727]: time="2025-03-17T18:22:08.348562450Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4024 runtime=io.containerd.runc.v2\n" Mar 17 18:22:08.624535 env[1727]: time="2025-03-17T18:22:08.624466910Z" level=info msg="CreateContainer within sandbox \"e00fce69843ad8cb8d427028c2bf89e66880f994f3b66ee61ea8ffa40faa24ef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:22:08.649074 env[1727]: time="2025-03-17T18:22:08.648985998Z" level=info msg="CreateContainer within sandbox \"e00fce69843ad8cb8d427028c2bf89e66880f994f3b66ee61ea8ffa40faa24ef\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0fe0b1db3d980da2081737e726779e88262ddd717f9a111a12a1536643487bb5\"" Mar 17 18:22:08.650178 env[1727]: time="2025-03-17T18:22:08.650118428Z" level=info msg="StartContainer for \"0fe0b1db3d980da2081737e726779e88262ddd717f9a111a12a1536643487bb5\"" Mar 17 18:22:08.678900 systemd[1]: Started cri-containerd-0fe0b1db3d980da2081737e726779e88262ddd717f9a111a12a1536643487bb5.scope. Mar 17 18:22:08.743716 env[1727]: time="2025-03-17T18:22:08.743627548Z" level=info msg="StartContainer for \"0fe0b1db3d980da2081737e726779e88262ddd717f9a111a12a1536643487bb5\" returns successfully" Mar 17 18:22:08.755028 systemd[1]: cri-containerd-0fe0b1db3d980da2081737e726779e88262ddd717f9a111a12a1536643487bb5.scope: Deactivated successfully. 
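The setters.go entry above records node 172.31.30.28 flipping to Ready=False with reason KubeletNotReady because no CNI plugin is initialized yet; the kubelet.go:2901 error at 18:22:11 further down repeats the same condition, which clears once the cilium-agent container started below brings the CNI configuration up. A small sketch, assuming a Node object is piped in as JSON (for example from kubectl get node 172.31.30.28 -o json), that prints the same Ready condition:

    // node_ready.go — hypothetical helper: print the Ready condition of a Node read as JSON.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type node struct {
        Status struct {
            Conditions []struct {
                Type    string `json:"type"`
                Status  string `json:"status"`
                Reason  string `json:"reason"`
                Message string `json:"message"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func main() {
        var n node
        if err := json.NewDecoder(os.Stdin).Decode(&n); err != nil {
            fmt.Fprintln(os.Stderr, "decode:", err)
            os.Exit(1)
        }
        for _, c := range n.Status.Conditions {
            if c.Type == "Ready" {
                fmt.Printf("Ready=%s reason=%s: %s\n", c.Status, c.Reason, c.Message)
            }
        }
    }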
Mar 17 18:22:08.795480 env[1727]: time="2025-03-17T18:22:08.795398272Z" level=info msg="shim disconnected" id=0fe0b1db3d980da2081737e726779e88262ddd717f9a111a12a1536643487bb5 Mar 17 18:22:08.795480 env[1727]: time="2025-03-17T18:22:08.795469336Z" level=warning msg="cleaning up after shim disconnected" id=0fe0b1db3d980da2081737e726779e88262ddd717f9a111a12a1536643487bb5 namespace=k8s.io Mar 17 18:22:08.795913 env[1727]: time="2025-03-17T18:22:08.795492376Z" level=info msg="cleaning up dead shim" Mar 17 18:22:08.811011 env[1727]: time="2025-03-17T18:22:08.810947189Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4088 runtime=io.containerd.runc.v2\n" Mar 17 18:22:09.144547 kubelet[2080]: E0317 18:22:09.144485 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:09.458070 kubelet[2080]: W0317 18:22:09.456553 2080 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75d01c63_7df1_4a4a_92c3_521a01457c3d.slice/cri-containerd-2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8.scope WatchSource:0}: container "2c0c1af56c2f5e3682693887403b6257eaf0ed718066d68b1f65eae37240ddf8" in namespace "k8s.io": not found Mar 17 18:22:09.627628 env[1727]: time="2025-03-17T18:22:09.627541514Z" level=info msg="CreateContainer within sandbox \"e00fce69843ad8cb8d427028c2bf89e66880f994f3b66ee61ea8ffa40faa24ef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:22:09.663567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2588622741.mount: Deactivated successfully. Mar 17 18:22:09.679541 env[1727]: time="2025-03-17T18:22:09.679479830Z" level=info msg="CreateContainer within sandbox \"e00fce69843ad8cb8d427028c2bf89e66880f994f3b66ee61ea8ffa40faa24ef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"19f2a53154b48b7de4b06060866dccdc9c1c62f37ab773dc7c9a51a1d98e5e57\"" Mar 17 18:22:09.680675 env[1727]: time="2025-03-17T18:22:09.680602759Z" level=info msg="StartContainer for \"19f2a53154b48b7de4b06060866dccdc9c1c62f37ab773dc7c9a51a1d98e5e57\"" Mar 17 18:22:09.711607 systemd[1]: Started cri-containerd-19f2a53154b48b7de4b06060866dccdc9c1c62f37ab773dc7c9a51a1d98e5e57.scope. Mar 17 18:22:09.775325 env[1727]: time="2025-03-17T18:22:09.775264718Z" level=info msg="StartContainer for \"19f2a53154b48b7de4b06060866dccdc9c1c62f37ab773dc7c9a51a1d98e5e57\" returns successfully" Mar 17 18:22:09.775949 systemd[1]: cri-containerd-19f2a53154b48b7de4b06060866dccdc9c1c62f37ab773dc7c9a51a1d98e5e57.scope: Deactivated successfully. 
Mar 17 18:22:09.824600 env[1727]: time="2025-03-17T18:22:09.824530402Z" level=info msg="shim disconnected" id=19f2a53154b48b7de4b06060866dccdc9c1c62f37ab773dc7c9a51a1d98e5e57 Mar 17 18:22:09.824921 env[1727]: time="2025-03-17T18:22:09.824599715Z" level=warning msg="cleaning up after shim disconnected" id=19f2a53154b48b7de4b06060866dccdc9c1c62f37ab773dc7c9a51a1d98e5e57 namespace=k8s.io Mar 17 18:22:09.824921 env[1727]: time="2025-03-17T18:22:09.824622299Z" level=info msg="cleaning up dead shim" Mar 17 18:22:09.839782 env[1727]: time="2025-03-17T18:22:09.839709382Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4147 runtime=io.containerd.runc.v2\n" Mar 17 18:22:10.145636 kubelet[2080]: E0317 18:22:10.145571 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:10.632209 env[1727]: time="2025-03-17T18:22:10.632148719Z" level=info msg="CreateContainer within sandbox \"e00fce69843ad8cb8d427028c2bf89e66880f994f3b66ee61ea8ffa40faa24ef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:22:10.658587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4283597221.mount: Deactivated successfully. Mar 17 18:22:10.671626 env[1727]: time="2025-03-17T18:22:10.671562479Z" level=info msg="CreateContainer within sandbox \"e00fce69843ad8cb8d427028c2bf89e66880f994f3b66ee61ea8ffa40faa24ef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5d652b17295079b43b9efa055fdf44a6853471ac7780be9d3d22776f4781a144\"" Mar 17 18:22:10.673081 env[1727]: time="2025-03-17T18:22:10.673014584Z" level=info msg="StartContainer for \"5d652b17295079b43b9efa055fdf44a6853471ac7780be9d3d22776f4781a144\"" Mar 17 18:22:10.710987 systemd[1]: Started cri-containerd-5d652b17295079b43b9efa055fdf44a6853471ac7780be9d3d22776f4781a144.scope. Mar 17 18:22:10.768252 systemd[1]: cri-containerd-5d652b17295079b43b9efa055fdf44a6853471ac7780be9d3d22776f4781a144.scope: Deactivated successfully. 
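By this point the pod's init containers have run strictly one after another: mount-cgroup, apply-sysctl-overwrites and mount-bpf-fs have completed and clean-cilium-state has just been launched, with the long-running cilium-agent container following below. Each init step exits within about a second, which is why every successful StartContainer is immediately followed by a "shim disconnected" / "cleaning up dead shim" pair rather than indicating a failure. A sketch of an assumed helper that recovers that ordering from the containerd entries:

    // container_order.go — hypothetical helper: print container names in the order they
    // first appear in ContainerMetadata{Name:...} fields of entries read from stdin.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        re := regexp.MustCompile(`ContainerMetadata\{Name:([A-Za-z0-9-]+),`)
        seen := map[string]bool{}
        n := 0
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                if !seen[m[1]] {
                    seen[m[1]] = true
                    n++
                    fmt.Printf("%d. %s\n", n, m[1])
                }
            }
        }
    }

Fed this journal, it would list mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, cilium-agent and then cilium-operator, in that order.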
Mar 17 18:22:10.771176 env[1727]: time="2025-03-17T18:22:10.770985047Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7081b54_b7c6_42ac_8aeb_f69fa06d4b41.slice/cri-containerd-5d652b17295079b43b9efa055fdf44a6853471ac7780be9d3d22776f4781a144.scope/memory.events\": no such file or directory" Mar 17 18:22:10.775957 env[1727]: time="2025-03-17T18:22:10.775896215Z" level=info msg="StartContainer for \"5d652b17295079b43b9efa055fdf44a6853471ac7780be9d3d22776f4781a144\" returns successfully" Mar 17 18:22:10.818669 env[1727]: time="2025-03-17T18:22:10.818589587Z" level=info msg="shim disconnected" id=5d652b17295079b43b9efa055fdf44a6853471ac7780be9d3d22776f4781a144 Mar 17 18:22:10.819087 env[1727]: time="2025-03-17T18:22:10.819038669Z" level=warning msg="cleaning up after shim disconnected" id=5d652b17295079b43b9efa055fdf44a6853471ac7780be9d3d22776f4781a144 namespace=k8s.io Mar 17 18:22:10.819217 env[1727]: time="2025-03-17T18:22:10.819187544Z" level=info msg="cleaning up dead shim" Mar 17 18:22:10.832698 env[1727]: time="2025-03-17T18:22:10.832625736Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4204 runtime=io.containerd.runc.v2\n" Mar 17 18:22:11.095308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d652b17295079b43b9efa055fdf44a6853471ac7780be9d3d22776f4781a144-rootfs.mount: Deactivated successfully. Mar 17 18:22:11.146701 kubelet[2080]: E0317 18:22:11.146579 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:11.244077 kubelet[2080]: E0317 18:22:11.243990 2080 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:22:11.638305 env[1727]: time="2025-03-17T18:22:11.638249522Z" level=info msg="CreateContainer within sandbox \"e00fce69843ad8cb8d427028c2bf89e66880f994f3b66ee61ea8ffa40faa24ef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:22:11.675932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175114023.mount: Deactivated successfully. Mar 17 18:22:11.689614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289313219.mount: Deactivated successfully. Mar 17 18:22:11.697521 env[1727]: time="2025-03-17T18:22:11.697453287Z" level=info msg="CreateContainer within sandbox \"e00fce69843ad8cb8d427028c2bf89e66880f994f3b66ee61ea8ffa40faa24ef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8d1104392761d5a4c66e83f1b211dee5da81e0902e8a7419156225efc4845a36\"" Mar 17 18:22:11.698932 env[1727]: time="2025-03-17T18:22:11.698854019Z" level=info msg="StartContainer for \"8d1104392761d5a4c66e83f1b211dee5da81e0902e8a7419156225efc4845a36\"" Mar 17 18:22:11.746086 systemd[1]: Started cri-containerd-8d1104392761d5a4c66e83f1b211dee5da81e0902e8a7419156225efc4845a36.scope. 
Mar 17 18:22:11.828051 env[1727]: time="2025-03-17T18:22:11.827953497Z" level=info msg="StartContainer for \"8d1104392761d5a4c66e83f1b211dee5da81e0902e8a7419156225efc4845a36\" returns successfully" Mar 17 18:22:12.147099 kubelet[2080]: E0317 18:22:12.147007 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:12.413227 env[1727]: time="2025-03-17T18:22:12.412752541Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:22:12.418088 env[1727]: time="2025-03-17T18:22:12.418036850Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:22:12.422050 env[1727]: time="2025-03-17T18:22:12.421999100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:22:12.423575 env[1727]: time="2025-03-17T18:22:12.422740291Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 18:22:12.428959 env[1727]: time="2025-03-17T18:22:12.428902700Z" level=info msg="CreateContainer within sandbox \"79b9d8037560a5925dc672484825c134e8ad6667454cb078cec5488f825a47e0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:22:12.456302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4224889988.mount: Deactivated successfully. Mar 17 18:22:12.482630 env[1727]: time="2025-03-17T18:22:12.482534296Z" level=info msg="CreateContainer within sandbox \"79b9d8037560a5925dc672484825c134e8ad6667454cb078cec5488f825a47e0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"501b3b7a18f76ac2ef9ed80ae237e129d4e6d613ad63ffc2313bb90a171ab1d3\"" Mar 17 18:22:12.484225 env[1727]: time="2025-03-17T18:22:12.484175643Z" level=info msg="StartContainer for \"501b3b7a18f76ac2ef9ed80ae237e129d4e6d613ad63ffc2313bb90a171ab1d3\"" Mar 17 18:22:12.533910 systemd[1]: Started cri-containerd-501b3b7a18f76ac2ef9ed80ae237e129d4e6d613ad63ffc2313bb90a171ab1d3.scope. 
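The PullImage entry above shows the operator image being requested by tag plus digest (quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f...) and resolving to the local image ID sha256:59357949..., which the subsequent CreateContainer uses directly. A minimal sketch of splitting such a reference into repository, tag and digest; it deliberately assumes the registry host carries no port, so the first ':' can be treated as the tag separator:

    // imageref.go — simplified reference parsing for illustration only.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
        name, digest, _ := strings.Cut(ref, "@")    // split off the pinned digest
        repo, tag, hasTag := strings.Cut(name, ":") // naive: first ':' taken as the tag separator
        fmt.Println("repository:", repo)
        if hasTag {
            fmt.Println("tag:", tag)
        }
        fmt.Println("digest:", digest)
    }

Because the digest is part of the reference, the pull resolves to the same content even if the v1.12.5 tag is later re-pointed at a different image.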
Mar 17 18:22:12.593695 kubelet[2080]: W0317 18:22:12.591993 2080 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7081b54_b7c6_42ac_8aeb_f69fa06d4b41.slice/cri-containerd-90a9ab917cfa1ec211421f562f3f5885147ca9d775e345fc7a179492dcdea45c.scope WatchSource:0}: task 90a9ab917cfa1ec211421f562f3f5885147ca9d775e345fc7a179492dcdea45c not found: not found Mar 17 18:22:12.634888 env[1727]: time="2025-03-17T18:22:12.634745634Z" level=info msg="StartContainer for \"501b3b7a18f76ac2ef9ed80ae237e129d4e6d613ad63ffc2313bb90a171ab1d3\" returns successfully" Mar 17 18:22:12.702713 kubelet[2080]: I0317 18:22:12.701547 2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-ldfsk" podStartSLOduration=1.7809796709999999 podStartE2EDuration="7.701523617s" podCreationTimestamp="2025-03-17 18:22:05 +0000 UTC" firstStartedPulling="2025-03-17 18:22:06.505167034 +0000 UTC m=+82.197751264" lastFinishedPulling="2025-03-17 18:22:12.42571098 +0000 UTC m=+88.118295210" observedRunningTime="2025-03-17 18:22:12.659765296 +0000 UTC m=+88.352349550" watchObservedRunningTime="2025-03-17 18:22:12.701523617 +0000 UTC m=+88.394107847" Mar 17 18:22:12.809718 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Mar 17 18:22:13.148148 kubelet[2080]: E0317 18:22:13.148076 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:14.149247 kubelet[2080]: E0317 18:22:14.149176 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:15.149578 kubelet[2080]: E0317 18:22:15.149511 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:15.391628 systemd[1]: run-containerd-runc-k8s.io-8d1104392761d5a4c66e83f1b211dee5da81e0902e8a7419156225efc4845a36-runc.nM5NJw.mount: Deactivated successfully. Mar 17 18:22:15.710199 kubelet[2080]: W0317 18:22:15.710106 2080 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7081b54_b7c6_42ac_8aeb_f69fa06d4b41.slice/cri-containerd-0fe0b1db3d980da2081737e726779e88262ddd717f9a111a12a1536643487bb5.scope WatchSource:0}: task 0fe0b1db3d980da2081737e726779e88262ddd717f9a111a12a1536643487bb5 not found: not found Mar 17 18:22:16.150261 kubelet[2080]: E0317 18:22:16.150180 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:16.857493 systemd-networkd[1457]: lxc_health: Link UP Mar 17 18:22:16.866969 (udev-worker)[4809]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:22:16.887180 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:22:16.888305 systemd-networkd[1457]: lxc_health: Gained carrier Mar 17 18:22:17.151189 kubelet[2080]: E0317 18:22:17.151030 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:17.653804 systemd[1]: run-containerd-runc-k8s.io-8d1104392761d5a4c66e83f1b211dee5da81e0902e8a7419156225efc4845a36-runc.ZTMYoo.mount: Deactivated successfully. 
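The pod_startup_latency_tracker entry for cilium-operator-5d85765b45-ldfsk is internally consistent: the end-to-end duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration additionally excludes the image-pull window between firstStartedPulling and lastFinishedPulling. A short check of that arithmetic, with the timestamp strings copied from the entry above:

    // startup_latency.go — recompute the durations reported above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds are accepted when parsing
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-03-17 18:22:05 +0000 UTC")             // podCreationTimestamp
        firstPull := parse("2025-03-17 18:22:06.505167034 +0000 UTC") // firstStartedPulling
        lastPull := parse("2025-03-17 18:22:12.42571098 +0000 UTC")   // lastFinishedPulling
        observed := parse("2025-03-17 18:22:12.701523617 +0000 UTC")  // watchObservedRunningTime

        e2e := observed.Sub(created)       // 7.701523617s = podStartE2EDuration
        pulling := lastPull.Sub(firstPull) // 5.920543946s spent pulling the operator image
        fmt.Println("E2E:", e2e)
        fmt.Println("SLO:", e2e-pulling)   // ~1.780979671s, matching podStartSLOduration
    }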
Mar 17 18:22:18.051355 kubelet[2080]: I0317 18:22:18.050781 2080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mmfzm" podStartSLOduration=11.050261112 podStartE2EDuration="11.050261112s" podCreationTimestamp="2025-03-17 18:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:22:12.703689263 +0000 UTC m=+88.396273505" watchObservedRunningTime="2025-03-17 18:22:18.050261112 +0000 UTC m=+93.742845354" Mar 17 18:22:18.151474 kubelet[2080]: E0317 18:22:18.151415 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:18.820012 kubelet[2080]: W0317 18:22:18.819940 2080 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7081b54_b7c6_42ac_8aeb_f69fa06d4b41.slice/cri-containerd-19f2a53154b48b7de4b06060866dccdc9c1c62f37ab773dc7c9a51a1d98e5e57.scope WatchSource:0}: task 19f2a53154b48b7de4b06060866dccdc9c1c62f37ab773dc7c9a51a1d98e5e57 not found: not found Mar 17 18:22:18.952038 systemd-networkd[1457]: lxc_health: Gained IPv6LL Mar 17 18:22:19.152801 kubelet[2080]: E0317 18:22:19.152725 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:19.946801 systemd[1]: run-containerd-runc-k8s.io-8d1104392761d5a4c66e83f1b211dee5da81e0902e8a7419156225efc4845a36-runc.FOczqu.mount: Deactivated successfully. Mar 17 18:22:20.154014 kubelet[2080]: E0317 18:22:20.153928 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:21.154132 kubelet[2080]: E0317 18:22:21.154082 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:21.930173 kubelet[2080]: W0317 18:22:21.930108 2080 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7081b54_b7c6_42ac_8aeb_f69fa06d4b41.slice/cri-containerd-5d652b17295079b43b9efa055fdf44a6853471ac7780be9d3d22776f4781a144.scope WatchSource:0}: task 5d652b17295079b43b9efa055fdf44a6853471ac7780be9d3d22776f4781a144 not found: not found Mar 17 18:22:22.161765 kubelet[2080]: E0317 18:22:22.161682 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:23.162386 kubelet[2080]: E0317 18:22:23.162314 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:24.163137 kubelet[2080]: E0317 18:22:24.163083 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:24.508266 systemd[1]: run-containerd-runc-k8s.io-8d1104392761d5a4c66e83f1b211dee5da81e0902e8a7419156225efc4845a36-runc.30N23Q.mount: Deactivated successfully. 
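For cilium-mmfzm the same tracker reports firstStartedPulling and lastFinishedPulling as 0001-01-01 00:00:00 +0000 UTC. Those are Go's zero time.Time values, consistent with the image already being present on the node so that no pull was recorded; the full 11.05s between pod creation at 18:22:07 and the watch-observed running time at 18:22:18.05 therefore counts toward both the E2E and the SLO duration. A two-line illustration of why an unset timestamp renders that way:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var never time.Time         // zero value: a timestamp that was never set
        fmt.Println(never)          // 0001-01-01 00:00:00 +0000 UTC, as printed in the entry above
        fmt.Println(never.IsZero()) // true — the usual check for "this event did not happen"
    }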
Mar 17 18:22:25.164735 kubelet[2080]: E0317 18:22:25.164674 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:26.073018 kubelet[2080]: E0317 18:22:26.072970 2080 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:26.165855 kubelet[2080]: E0317 18:22:26.165793 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:27.167372 kubelet[2080]: E0317 18:22:27.167307 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:28.168018 kubelet[2080]: E0317 18:22:28.167957 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:29.168985 kubelet[2080]: E0317 18:22:29.168936 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:30.169334 kubelet[2080]: E0317 18:22:30.169293 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:31.171102 kubelet[2080]: E0317 18:22:31.171063 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:32.172317 kubelet[2080]: E0317 18:22:32.172276 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:33.173626 kubelet[2080]: E0317 18:22:33.173565 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:34.174797 kubelet[2080]: E0317 18:22:34.174734 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:35.175728 kubelet[2080]: E0317 18:22:35.175689 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:36.176854 kubelet[2080]: E0317 18:22:36.176813 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:37.177867 kubelet[2080]: E0317 18:22:37.177827 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:38.179096 kubelet[2080]: E0317 18:22:38.179037 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:38.442537 kubelet[2080]: E0317 18:22:38.442163 2080 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.28?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 17 18:22:39.180127 kubelet[2080]: E0317 18:22:39.180092 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:40.181007 kubelet[2080]: E0317 18:22:40.180948 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:41.181110 kubelet[2080]: E0317 18:22:41.181071 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Mar 17 18:22:42.182050 kubelet[2080]: E0317 18:22:42.181990 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:43.183027 kubelet[2080]: E0317 18:22:43.182952 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:44.184101 kubelet[2080]: E0317 18:22:44.184060 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:45.185157 kubelet[2080]: E0317 18:22:45.185115 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:46.073003 kubelet[2080]: E0317 18:22:46.072934 2080 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:46.122769 env[1727]: time="2025-03-17T18:22:46.122706905Z" level=info msg="StopPodSandbox for \"748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c\"" Mar 17 18:22:46.123360 env[1727]: time="2025-03-17T18:22:46.122861814Z" level=info msg="TearDown network for sandbox \"748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c\" successfully" Mar 17 18:22:46.123360 env[1727]: time="2025-03-17T18:22:46.122933094Z" level=info msg="StopPodSandbox for \"748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c\" returns successfully" Mar 17 18:22:46.124173 env[1727]: time="2025-03-17T18:22:46.124112173Z" level=info msg="RemovePodSandbox for \"748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c\"" Mar 17 18:22:46.124334 env[1727]: time="2025-03-17T18:22:46.124172677Z" level=info msg="Forcibly stopping sandbox \"748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c\"" Mar 17 18:22:46.124334 env[1727]: time="2025-03-17T18:22:46.124302194Z" level=info msg="TearDown network for sandbox \"748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c\" successfully" Mar 17 18:22:46.133275 env[1727]: time="2025-03-17T18:22:46.133055092Z" level=info msg="RemovePodSandbox \"748925a78874de0962d1d903023bcb3b618d3aec5a6068565a9ba781095d339c\" returns successfully" Mar 17 18:22:46.134389 env[1727]: time="2025-03-17T18:22:46.134044510Z" level=info msg="StopPodSandbox for \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\"" Mar 17 18:22:46.134389 env[1727]: time="2025-03-17T18:22:46.134212979Z" level=info msg="TearDown network for sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" successfully" Mar 17 18:22:46.134389 env[1727]: time="2025-03-17T18:22:46.134280743Z" level=info msg="StopPodSandbox for \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" returns successfully" Mar 17 18:22:46.137189 env[1727]: time="2025-03-17T18:22:46.135201653Z" level=info msg="RemovePodSandbox for \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\"" Mar 17 18:22:46.137189 env[1727]: time="2025-03-17T18:22:46.135259373Z" level=info msg="Forcibly stopping sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\"" Mar 17 18:22:46.137189 env[1727]: time="2025-03-17T18:22:46.135380262Z" level=info msg="TearDown network for sandbox \"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" successfully" Mar 17 18:22:46.142424 env[1727]: time="2025-03-17T18:22:46.142369078Z" level=info msg="RemovePodSandbox 
\"3c766ff22d13fb84256814eaca02e2a141fe3ece89087aec0809bc238239bacc\" returns successfully" Mar 17 18:22:46.186222 kubelet[2080]: E0317 18:22:46.186159 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:47.186908 kubelet[2080]: E0317 18:22:47.186869 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:48.188260 kubelet[2080]: E0317 18:22:48.188221 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:48.443396 kubelet[2080]: E0317 18:22:48.443240 2080 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.28?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 17 18:22:49.189298 kubelet[2080]: E0317 18:22:49.189233 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:50.189628 kubelet[2080]: E0317 18:22:50.189562 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:51.190484 kubelet[2080]: E0317 18:22:51.190443 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:52.192230 kubelet[2080]: E0317 18:22:52.192165 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:53.192381 kubelet[2080]: E0317 18:22:53.192295 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:54.192933 kubelet[2080]: E0317 18:22:54.192865 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:55.194030 kubelet[2080]: E0317 18:22:55.193986 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:56.195257 kubelet[2080]: E0317 18:22:56.195194 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:57.195857 kubelet[2080]: E0317 18:22:57.195812 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:58.197420 kubelet[2080]: E0317 18:22:58.197362 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:22:58.444450 kubelet[2080]: E0317 18:22:58.444374 2080 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.28?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 17 18:22:59.198212 kubelet[2080]: E0317 18:22:59.198143 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:00.199069 kubelet[2080]: E0317 18:23:00.199012 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:01.199973 kubelet[2080]: E0317 18:23:01.199908 2080 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:02.200899 kubelet[2080]: E0317 18:23:02.200841 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:03.201692 kubelet[2080]: E0317 18:23:03.201625 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:04.203275 kubelet[2080]: E0317 18:23:04.203229 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:05.204859 kubelet[2080]: E0317 18:23:05.204821 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:06.072496 kubelet[2080]: E0317 18:23:06.072458 2080 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:06.205815 kubelet[2080]: E0317 18:23:06.205776 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:07.207267 kubelet[2080]: E0317 18:23:07.207197 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:08.208306 kubelet[2080]: E0317 18:23:08.208244 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:08.445335 kubelet[2080]: E0317 18:23:08.445265 2080 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.28?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 17 18:23:08.483251 kubelet[2080]: E0317 18:23:08.479636 2080 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.28?timeout=10s\": unexpected EOF" Mar 17 18:23:08.483251 kubelet[2080]: I0317 18:23:08.479739 2080 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 17 18:23:09.209046 kubelet[2080]: E0317 18:23:09.208983 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:09.491063 kubelet[2080]: E0317 18:23:09.490219 2080 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.28?timeout=10s\": dial tcp 172.31.23.140:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.30.28:51144->172.31.23.140:6443: read: connection reset by peer" interval="200ms" Mar 17 18:23:10.209718 kubelet[2080]: E0317 18:23:10.209680 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:11.211376 kubelet[2080]: E0317 18:23:11.211336 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:12.213162 kubelet[2080]: E0317 18:23:12.213122 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:13.214597 kubelet[2080]: E0317 18:23:13.214541 2080 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:14.215530 kubelet[2080]: E0317 18:23:14.215490 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:15.216737 kubelet[2080]: E0317 18:23:15.216616 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:16.217512 kubelet[2080]: E0317 18:23:16.217455 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:17.218379 kubelet[2080]: E0317 18:23:17.218287 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:18.218899 kubelet[2080]: E0317 18:23:18.218839 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:19.220056 kubelet[2080]: E0317 18:23:19.219979 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:19.695448 kubelet[2080]: E0317 18:23:19.695381 2080 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.28?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 17 18:23:20.220617 kubelet[2080]: E0317 18:23:20.220580 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:21.221972 kubelet[2080]: E0317 18:23:21.221928 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:22.223431 kubelet[2080]: E0317 18:23:22.223363 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:23.224459 kubelet[2080]: E0317 18:23:23.224390 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:24.224587 kubelet[2080]: E0317 18:23:24.224515 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:25.225183 kubelet[2080]: E0317 18:23:25.225145 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:26.072497 kubelet[2080]: E0317 18:23:26.072458 2080 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:26.226205 kubelet[2080]: E0317 18:23:26.226135 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:27.226500 kubelet[2080]: E0317 18:23:27.226442 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:28.226783 kubelet[2080]: E0317 18:23:28.226732 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:29.227106 kubelet[2080]: E0317 18:23:29.226859 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:30.096401 
kubelet[2080]: E0317 18:23:30.096347 2080 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.28?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 17 18:23:30.227792 kubelet[2080]: E0317 18:23:30.227755 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:23:31.228977 kubelet[2080]: E0317 18:23:31.228941 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
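The closing entries show the kubelet on 172.31.30.28 losing its connection to the API server at 172.31.23.140:6443: lease updates start timing out at 18:22:38, after five failed attempts the controller falls back to ensuring the lease exists, and the retry interval then doubles from 200ms (18:23:09) to 400ms (18:23:19) to 800ms (18:23:30). An illustrative sketch of that doubling-backoff pattern; the stand-in renewLease function, the attempt limit and the interval cap are assumptions for the sketch, not the kubelet's actual implementation:

    // lease_backoff.go — illustrative retry-with-doubling-interval loop.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // renewLease stands in for the PUT against
    // /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.28.
    func renewLease() error { return errors.New("connect: connection refused") }

    func main() {
        interval := 200 * time.Millisecond
        maxInterval := 7 * time.Second // assumed cap, only so the doubling stops somewhere
        for attempt := 1; attempt <= 6; attempt++ {
            if err := renewLease(); err == nil {
                fmt.Println("lease renewed")
                return
            }
            fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
            time.Sleep(interval)
            if interval < maxInterval {
                interval *= 2 // 200ms, 400ms, 800ms, ... as in the entries above
            }
        }
        fmt.Println("still failing; a real controller would keep retrying at the longer interval")
    }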