Mar 17 18:19:53.953110 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 17 18:19:53.953168 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025
Mar 17 18:19:53.953196 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:19:53.953212 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7171cf98
Mar 17 18:19:53.953226 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:19:53.953241 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 17 18:19:53.953257 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 17 18:19:53.953272 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 17 18:19:53.953286 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Mar 17 18:19:53.953301 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 17 18:19:53.954795 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 17 18:19:53.954816 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 17 18:19:53.954831 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 17 18:19:53.954846 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 17 18:19:53.954863 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 17 18:19:53.954885 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 17 18:19:53.954900 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 17 18:19:53.954915 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 17 18:19:53.954929 kernel: printk: bootconsole [uart0] enabled
Mar 17 18:19:53.954944 kernel: NUMA: Failed to initialise from firmware
Mar 17 18:19:53.954959 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 18:19:53.954974 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Mar 17 18:19:53.954989 kernel: Zone ranges:
Mar 17 18:19:53.955004 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 18:19:53.955019 kernel: DMA32 empty
Mar 17 18:19:53.955033 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 17 18:19:53.955052 kernel: Movable zone start for each node
Mar 17 18:19:53.955067 kernel: Early memory node ranges
Mar 17 18:19:53.955081 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 17 18:19:53.955096 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 17 18:19:53.955110 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 17 18:19:53.955125 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 17 18:19:53.955140 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 17 18:19:53.955178 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 17 18:19:53.955196 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 17 18:19:53.955211 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 17 18:19:53.955226 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 18:19:53.955241 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 17 18:19:53.955260 kernel: psci: probing for conduit method from ACPI.
Mar 17 18:19:53.955275 kernel: psci: PSCIv1.0 detected in firmware.
Mar 17 18:19:53.955296 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 18:19:53.955312 kernel: psci: Trusted OS migration not required
Mar 17 18:19:53.955327 kernel: psci: SMC Calling Convention v1.1
Mar 17 18:19:53.955346 kernel: ACPI: SRAT not present
Mar 17 18:19:53.955362 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Mar 17 18:19:53.955378 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Mar 17 18:19:53.955394 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 18:19:53.955410 kernel: Detected PIPT I-cache on CPU0
Mar 17 18:19:53.955425 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 18:19:53.955440 kernel: CPU features: detected: Spectre-v2
Mar 17 18:19:53.955456 kernel: CPU features: detected: Spectre-v3a
Mar 17 18:19:53.955472 kernel: CPU features: detected: Spectre-BHB
Mar 17 18:19:53.955487 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 18:19:53.955503 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 18:19:53.955522 kernel: CPU features: detected: ARM erratum 1742098
Mar 17 18:19:53.955537 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 17 18:19:53.955553 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 17 18:19:53.955568 kernel: Policy zone: Normal
Mar 17 18:19:53.955587 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:19:53.955603 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:19:53.955619 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:19:53.955635 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:19:53.955651 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:19:53.955667 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 17 18:19:53.955687 kernel: Memory: 3824524K/4030464K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 205940K reserved, 0K cma-reserved)
Mar 17 18:19:53.955703 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:19:53.955718 kernel: trace event string verifier disabled
Mar 17 18:19:53.955733 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 18:19:53.955750 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:19:53.955766 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:19:53.955782 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 18:19:53.955798 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:19:53.955814 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:19:53.955829 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:19:53.955844 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 18:19:53.955860 kernel: GICv3: 96 SPIs implemented
Mar 17 18:19:53.955879 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 18:19:53.955894 kernel: GICv3: Distributor has no Range Selector support
Mar 17 18:19:53.955909 kernel: Root IRQ handler: gic_handle_irq
Mar 17 18:19:53.955925 kernel: GICv3: 16 PPIs implemented
Mar 17 18:19:53.955940 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 17 18:19:53.955956 kernel: ACPI: SRAT not present
Mar 17 18:19:53.955970 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 17 18:19:53.955986 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 18:19:53.956002 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 18:19:53.956017 kernel: GICv3: using LPI property table @0x00000004000b0000
Mar 17 18:19:53.956032 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 17 18:19:53.956052 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Mar 17 18:19:53.956067 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 17 18:19:53.956083 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 17 18:19:53.956099 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 17 18:19:53.956114 kernel: Console: colour dummy device 80x25
Mar 17 18:19:53.956130 kernel: printk: console [tty1] enabled
Mar 17 18:19:53.956146 kernel: ACPI: Core revision 20210730
Mar 17 18:19:53.956182 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 17 18:19:53.956199 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:19:53.956215 kernel: LSM: Security Framework initializing
Mar 17 18:19:53.956236 kernel: SELinux: Initializing.
Mar 17 18:19:53.956253 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:19:53.956269 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:19:53.956285 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:19:53.956301 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 17 18:19:53.956316 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 17 18:19:53.956332 kernel: Remapping and enabling EFI services.
Mar 17 18:19:53.956348 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:19:53.956363 kernel: Detected PIPT I-cache on CPU1
Mar 17 18:19:53.956383 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 17 18:19:53.956400 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Mar 17 18:19:53.956416 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 17 18:19:53.956431 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:19:53.956447 kernel: SMP: Total of 2 processors activated.
Mar 17 18:19:53.956463 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 18:19:53.956479 kernel: CPU features: detected: 32-bit EL1 Support
Mar 17 18:19:53.956495 kernel: CPU features: detected: CRC32 instructions
Mar 17 18:19:53.956510 kernel: CPU: All CPU(s) started at EL1
Mar 17 18:19:53.956526 kernel: alternatives: patching kernel code
Mar 17 18:19:53.956545 kernel: devtmpfs: initialized
Mar 17 18:19:53.956561 kernel: KASLR disabled due to lack of seed
Mar 17 18:19:53.956587 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:19:53.956607 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:19:53.956624 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:19:53.956639 kernel: SMBIOS 3.0.0 present.
Mar 17 18:19:53.956656 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 17 18:19:53.956672 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:19:53.956688 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 18:19:53.956705 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 18:19:53.956722 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 18:19:53.956742 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:19:53.956759 kernel: audit: type=2000 audit(0.249:1): state=initialized audit_enabled=0 res=1
Mar 17 18:19:53.956775 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:19:53.956792 kernel: cpuidle: using governor menu
Mar 17 18:19:53.956808 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 18:19:53.956828 kernel: ASID allocator initialised with 32768 entries
Mar 17 18:19:53.956845 kernel: ACPI: bus type PCI registered
Mar 17 18:19:53.956862 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:19:53.956878 kernel: Serial: AMBA PL011 UART driver
Mar 17 18:19:53.956895 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:19:53.956912 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 18:19:53.956928 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:19:53.956945 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 18:19:53.956961 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:19:53.956981 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 18:19:53.956998 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:19:53.957015 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:19:53.957031 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:19:53.957047 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:19:53.957064 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:19:53.957080 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:19:53.957097 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:19:53.957114 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:19:53.957134 kernel: ACPI: Interpreter enabled
Mar 17 18:19:53.957165 kernel: ACPI: Using GIC for interrupt routing
Mar 17 18:19:53.957187 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 18:19:53.957204 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Mar 17 18:19:53.957489 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:19:53.957686 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 18:19:53.957900 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 18:19:53.958092 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Mar 17 18:19:53.958313 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Mar 17 18:19:53.958338 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 17 18:19:53.958355 kernel: acpiphp: Slot [1] registered
Mar 17 18:19:53.958372 kernel: acpiphp: Slot [2] registered
Mar 17 18:19:53.958388 kernel: acpiphp: Slot [3] registered
Mar 17 18:19:53.958405 kernel: acpiphp: Slot [4] registered
Mar 17 18:19:53.958421 kernel: acpiphp: Slot [5] registered
Mar 17 18:19:53.958438 kernel: acpiphp: Slot [6] registered
Mar 17 18:19:53.958454 kernel: acpiphp: Slot [7] registered
Mar 17 18:19:53.958476 kernel: acpiphp: Slot [8] registered
Mar 17 18:19:53.958492 kernel: acpiphp: Slot [9] registered
Mar 17 18:19:53.958509 kernel: acpiphp: Slot [10] registered
Mar 17 18:19:53.958525 kernel: acpiphp: Slot [11] registered
Mar 17 18:19:53.958541 kernel: acpiphp: Slot [12] registered
Mar 17 18:19:53.958558 kernel: acpiphp: Slot [13] registered
Mar 17 18:19:53.958574 kernel: acpiphp: Slot [14] registered
Mar 17 18:19:53.958591 kernel: acpiphp: Slot [15] registered
Mar 17 18:19:53.958607 kernel: acpiphp: Slot [16] registered
Mar 17 18:19:53.958627 kernel: acpiphp: Slot [17] registered
Mar 17 18:19:53.958643 kernel: acpiphp: Slot [18] registered
Mar 17 18:19:53.958660 kernel: acpiphp: Slot [19] registered
Mar 17 18:19:53.958676 kernel: acpiphp: Slot [20] registered
Mar 17 18:19:53.958693 kernel: acpiphp: Slot [21] registered
Mar 17 18:19:53.958709 kernel: acpiphp: Slot [22] registered
Mar 17 18:19:53.958725 kernel: acpiphp: Slot [23] registered
Mar 17 18:19:53.958742 kernel: acpiphp: Slot [24] registered
Mar 17 18:19:53.958758 kernel: acpiphp: Slot [25] registered
Mar 17 18:19:53.958774 kernel: acpiphp: Slot [26] registered
Mar 17 18:19:53.958795 kernel: acpiphp: Slot [27] registered
Mar 17 18:19:53.958811 kernel: acpiphp: Slot [28] registered
Mar 17 18:19:53.958827 kernel: acpiphp: Slot [29] registered
Mar 17 18:19:53.958844 kernel: acpiphp: Slot [30] registered
Mar 17 18:19:53.958860 kernel: acpiphp: Slot [31] registered
Mar 17 18:19:53.958877 kernel: PCI host bridge to bus 0000:00
Mar 17 18:19:53.959070 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 17 18:19:53.959280 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 18:19:53.959464 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 17 18:19:53.959647 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Mar 17 18:19:53.959859 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 17 18:19:53.960086 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 17 18:19:53.960315 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 17 18:19:53.960531 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 17 18:19:53.960742 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 17 18:19:53.960942 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 18:19:53.965211 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 17 18:19:53.965476 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 17 18:19:53.965681 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 17 18:19:53.965908 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 17 18:19:53.966112 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 18:19:53.969968 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Mar 17 18:19:53.970240 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Mar 17 18:19:53.970453 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Mar 17 18:19:53.970652 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Mar 17 18:19:53.978542 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Mar 17 18:19:53.978746 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 17 18:19:53.978921 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 18:19:53.979104 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 17 18:19:53.979128 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 18:19:53.979146 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 18:19:53.979186 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 18:19:53.979204 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 18:19:53.979222 kernel: iommu: Default domain type: Translated
Mar 17 18:19:53.979239 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 18:19:53.979256 kernel: vgaarb: loaded
Mar 17 18:19:53.979273 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:19:53.979296 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:19:53.979313 kernel: PTP clock support registered
Mar 17 18:19:53.979329 kernel: Registered efivars operations
Mar 17 18:19:53.979346 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 18:19:53.979363 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:19:53.979380 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:19:53.979397 kernel: pnp: PnP ACPI init
Mar 17 18:19:53.979600 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 17 18:19:53.979630 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 18:19:53.979648 kernel: NET: Registered PF_INET protocol family
Mar 17 18:19:53.979665 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:19:53.979683 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:19:53.979700 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:19:53.979717 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:19:53.979734 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:19:53.979751 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:19:53.979768 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:19:53.979789 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:19:53.979806 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:19:53.979822 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:19:53.979839 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 17 18:19:53.979856 kernel: kvm [1]: HYP mode not available
Mar 17 18:19:53.979873 kernel: Initialise system trusted keyrings
Mar 17 18:19:53.979891 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:19:53.979907 kernel: Key type asymmetric registered
Mar 17 18:19:53.979924 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:19:53.979946 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:19:53.979963 kernel: io scheduler mq-deadline registered
Mar 17 18:19:53.979981 kernel: io scheduler kyber registered
Mar 17 18:19:53.979997 kernel: io scheduler bfq registered
Mar 17 18:19:53.980348 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 17 18:19:53.980378 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 18:19:53.980395 kernel: ACPI: button: Power Button [PWRB]
Mar 17 18:19:53.980413 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 17 18:19:53.980435 kernel: ACPI: button: Sleep Button [SLPB]
Mar 17 18:19:53.980452 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:19:53.980469 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 18:19:53.980665 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 17 18:19:53.980688 kernel: printk: console [ttyS0] disabled
Mar 17 18:19:53.980705 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 17 18:19:53.980722 kernel: printk: console [ttyS0] enabled
Mar 17 18:19:53.980738 kernel: printk: bootconsole [uart0] disabled
Mar 17 18:19:53.980755 kernel: thunder_xcv, ver 1.0
Mar 17 18:19:53.980776 kernel: thunder_bgx, ver 1.0
Mar 17 18:19:53.980793 kernel: nicpf, ver 1.0
Mar 17 18:19:53.980809 kernel: nicvf, ver 1.0
Mar 17 18:19:53.981025 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 18:19:53.981276 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:19:53 UTC (1742235593)
Mar 17 18:19:53.981302 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 18:19:53.981319 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:19:53.981336 kernel: Segment Routing with IPv6
Mar 17 18:19:53.981352 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:19:53.981375 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:19:53.981392 kernel: Key type dns_resolver registered
Mar 17 18:19:53.981409 kernel: registered taskstats version 1
Mar 17 18:19:53.981425 kernel: Loading compiled-in X.509 certificates
Mar 17 18:19:53.981442 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4'
Mar 17 18:19:53.981459 kernel: Key type .fscrypt registered
Mar 17 18:19:53.981475 kernel: Key type fscrypt-provisioning registered
Mar 17 18:19:53.981492 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:19:53.981509 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:19:53.981529 kernel: ima: No architecture policies found
Mar 17 18:19:53.981546 kernel: clk: Disabling unused clocks
Mar 17 18:19:53.981563 kernel: Freeing unused kernel memory: 36416K
Mar 17 18:19:53.981579 kernel: Run /init as init process
Mar 17 18:19:53.981595 kernel: with arguments:
Mar 17 18:19:53.981612 kernel: /init
Mar 17 18:19:53.981628 kernel: with environment:
Mar 17 18:19:53.981644 kernel: HOME=/
Mar 17 18:19:53.981660 kernel: TERM=linux
Mar 17 18:19:53.981680 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:19:53.981702 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:19:53.981724 systemd[1]: Detected virtualization amazon.
Mar 17 18:19:53.981743 systemd[1]: Detected architecture arm64.
Mar 17 18:19:53.981761 systemd[1]: Running in initrd.
Mar 17 18:19:53.981778 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:19:53.981816 systemd[1]: Hostname set to .
Mar 17 18:19:53.981840 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:19:53.981859 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:19:53.981877 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:19:53.981895 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:19:53.981913 systemd[1]: Reached target paths.target.
Mar 17 18:19:53.981934 systemd[1]: Reached target slices.target.
Mar 17 18:19:53.981956 systemd[1]: Reached target swap.target.
Mar 17 18:19:53.981998 systemd[1]: Reached target timers.target.
Mar 17 18:19:53.982027 systemd[1]: Listening on iscsid.socket.
Mar 17 18:19:53.982046 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:19:53.982063 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:19:53.982082 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:19:53.982099 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:19:53.982117 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:19:53.982135 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:19:53.982170 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:19:53.982222 systemd[1]: Reached target sockets.target.
Mar 17 18:19:53.982241 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:19:53.982260 systemd[1]: Finished network-cleanup.service.
Mar 17 18:19:53.982278 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:19:53.982296 systemd[1]: Starting systemd-journald.service...
Mar 17 18:19:53.982314 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:19:53.982332 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:19:53.982351 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:19:53.982369 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:19:53.982393 kernel: audit: type=1130 audit(1742235593.943:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:53.982412 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:19:53.982431 kernel: audit: type=1130 audit(1742235593.953:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:53.982450 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:19:53.982468 kernel: audit: type=1130 audit(1742235593.968:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:53.982486 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:19:53.982508 systemd-journald[309]: Journal started
Mar 17 18:19:53.982598 systemd-journald[309]: Runtime Journal (/run/log/journal/ec2001f1db2210a5455c8cc583ed437c) is 8.0M, max 75.4M, 67.4M free.
Mar 17 18:19:53.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:53.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:53.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:53.963711 systemd-modules-load[310]: Inserted module 'overlay'
Mar 17 18:19:53.994201 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:19:53.999839 systemd[1]: Started systemd-journald.service.
Mar 17 18:19:54.014212 kernel: audit: type=1130 audit(1742235594.005:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:54.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:54.021299 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:19:54.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:54.036188 kernel: audit: type=1130 audit(1742235594.020:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:54.047028 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:19:54.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:54.057471 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:19:54.068332 kernel: audit: type=1130 audit(1742235594.047:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:54.068368 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:19:54.069277 systemd-resolved[311]: Positive Trust Anchors:
Mar 17 18:19:54.069303 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:19:54.069357 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:19:54.109200 kernel: Bridge firewalling registered
Mar 17 18:19:54.105829 systemd-modules-load[310]: Inserted module 'br_netfilter'
Mar 17 18:19:54.129196 kernel: SCSI subsystem initialized
Mar 17 18:19:54.150654 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:19:54.150723 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:19:54.153925 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:19:54.154893 dracut-cmdline[326]: dracut-dracut-053
Mar 17 18:19:54.160012 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:19:54.174966 systemd-modules-load[310]: Inserted module 'dm_multipath'
Mar 17 18:19:54.176865 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:19:54.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:54.183737 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:19:54.199210 kernel: audit: type=1130 audit(1742235594.181:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:54.216830 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:19:54.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:54.230190 kernel: audit: type=1130 audit(1742235594.217:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:54.318190 kernel: Loading iSCSI transport class v2.0-870. Mar 17 18:19:54.339200 kernel: iscsi: registered transport (tcp) Mar 17 18:19:54.366361 kernel: iscsi: registered transport (qla4xxx) Mar 17 18:19:54.366444 kernel: QLogic iSCSI HBA Driver Mar 17 18:19:54.530733 systemd-resolved[311]: Defaulting to hostname 'linux'. Mar 17 18:19:54.532990 kernel: random: crng init done Mar 17 18:19:54.534389 systemd[1]: Started systemd-resolved.service. Mar 17 18:19:54.537251 systemd[1]: Reached target nss-lookup.target. Mar 17 18:19:54.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:19:54.546963 kernel: audit: type=1130 audit(1742235594.535:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:54.564064 systemd[1]: Finished dracut-cmdline.service. Mar 17 18:19:54.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:54.568517 systemd[1]: Starting dracut-pre-udev.service... Mar 17 18:19:54.634199 kernel: raid6: neonx8 gen() 6364 MB/s Mar 17 18:19:54.652183 kernel: raid6: neonx8 xor() 4749 MB/s Mar 17 18:19:54.670194 kernel: raid6: neonx4 gen() 6495 MB/s Mar 17 18:19:54.688196 kernel: raid6: neonx4 xor() 4936 MB/s Mar 17 18:19:54.706194 kernel: raid6: neonx2 gen() 5729 MB/s Mar 17 18:19:54.724194 kernel: raid6: neonx2 xor() 4529 MB/s Mar 17 18:19:54.742193 kernel: raid6: neonx1 gen() 4457 MB/s Mar 17 18:19:54.760194 kernel: raid6: neonx1 xor() 3687 MB/s Mar 17 18:19:54.778193 kernel: raid6: int64x8 gen() 3411 MB/s Mar 17 18:19:54.796193 kernel: raid6: int64x8 xor() 2099 MB/s Mar 17 18:19:54.814190 kernel: raid6: int64x4 gen() 3777 MB/s Mar 17 18:19:54.832195 kernel: raid6: int64x4 xor() 2201 MB/s Mar 17 18:19:54.850192 kernel: raid6: int64x2 gen() 3555 MB/s Mar 17 18:19:54.868195 kernel: raid6: int64x2 xor() 1954 MB/s Mar 17 18:19:54.886185 kernel: raid6: int64x1 gen() 2761 MB/s Mar 17 18:19:54.905270 kernel: raid6: int64x1 xor() 1456 MB/s Mar 17 18:19:54.905304 kernel: raid6: using algorithm neonx4 gen() 6495 MB/s Mar 17 18:19:54.905328 kernel: raid6: .... 
xor() 4936 MB/s, rmw enabled Mar 17 18:19:54.906863 kernel: raid6: using neon recovery algorithm Mar 17 18:19:54.926638 kernel: xor: measuring software checksum speed Mar 17 18:19:54.926704 kernel: 8regs : 9351 MB/sec Mar 17 18:19:54.928415 kernel: 32regs : 11138 MB/sec Mar 17 18:19:54.930246 kernel: arm64_neon : 9622 MB/sec Mar 17 18:19:54.930276 kernel: xor: using function: 32regs (11138 MB/sec) Mar 17 18:19:55.022195 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Mar 17 18:19:55.039108 systemd[1]: Finished dracut-pre-udev.service. Mar 17 18:19:55.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:55.043000 audit: BPF prog-id=7 op=LOAD Mar 17 18:19:55.043000 audit: BPF prog-id=8 op=LOAD Mar 17 18:19:55.045802 systemd[1]: Starting systemd-udevd.service... Mar 17 18:19:55.072748 systemd-udevd[509]: Using default interface naming scheme 'v252'. Mar 17 18:19:55.082496 systemd[1]: Started systemd-udevd.service. Mar 17 18:19:55.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:55.091302 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 18:19:55.121797 dracut-pre-trigger[521]: rd.md=0: removing MD RAID activation Mar 17 18:19:55.180124 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 18:19:55.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:55.184439 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:19:55.288622 systemd[1]: Finished systemd-udev-trigger.service. 
Mar 17 18:19:55.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:55.411056 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 17 18:19:55.411128 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Mar 17 18:19:55.428960 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 17 18:19:55.429199 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 17 18:19:55.429419 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 17 18:19:55.429445 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 17 18:19:55.429769 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:fb:f3:f0:e0:c3 Mar 17 18:19:55.434186 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 17 18:19:55.443450 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 18:19:55.443498 kernel: GPT:9289727 != 16777215 Mar 17 18:19:55.443522 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 18:19:55.445503 kernel: GPT:9289727 != 16777215 Mar 17 18:19:55.446710 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 18:19:55.449907 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 17 18:19:55.454382 (udev-worker)[556]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:19:55.525205 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (555) Mar 17 18:19:55.574009 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:19:55.623103 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:19:55.628302 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:19:55.642340 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Mar 17 18:19:55.656971 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:19:55.678827 systemd[1]: Starting disk-uuid.service... Mar 17 18:19:55.689881 disk-uuid[669]: Primary Header is updated. Mar 17 18:19:55.689881 disk-uuid[669]: Secondary Entries is updated. Mar 17 18:19:55.689881 disk-uuid[669]: Secondary Header is updated. Mar 17 18:19:55.699190 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 17 18:19:55.709195 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 17 18:19:55.716204 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 17 18:19:56.716600 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 17 18:19:56.716669 disk-uuid[670]: The operation has completed successfully. Mar 17 18:19:56.891795 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:19:56.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:56.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:56.891999 systemd[1]: Finished disk-uuid.service. Mar 17 18:19:56.897735 systemd[1]: Starting verity-setup.service... Mar 17 18:19:56.930189 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 18:19:57.025712 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:19:57.030881 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:19:57.033848 systemd[1]: Finished verity-setup.service. Mar 17 18:19:57.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:57.124191 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:19:57.125505 systemd[1]: Mounted sysusr-usr.mount. 
Mar 17 18:19:57.128307 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:19:57.131049 systemd[1]: Starting ignition-setup.service... Mar 17 18:19:57.150459 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:19:57.180470 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:19:57.180550 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 17 18:19:57.180574 kernel: BTRFS info (device nvme0n1p6): has skinny extents Mar 17 18:19:57.192195 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 17 18:19:57.210026 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:19:57.227612 systemd[1]: Finished ignition-setup.service. Mar 17 18:19:57.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:57.231974 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:19:57.304887 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:19:57.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:57.308000 audit: BPF prog-id=9 op=LOAD Mar 17 18:19:57.310328 systemd[1]: Starting systemd-networkd.service... Mar 17 18:19:57.357982 systemd-networkd[1182]: lo: Link UP Mar 17 18:19:57.358006 systemd-networkd[1182]: lo: Gained carrier Mar 17 18:19:57.362097 systemd-networkd[1182]: Enumeration completed Mar 17 18:19:57.362299 systemd[1]: Started systemd-networkd.service. Mar 17 18:19:57.364042 systemd-networkd[1182]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 17 18:19:57.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:57.372288 systemd-networkd[1182]: eth0: Link UP Mar 17 18:19:57.372308 systemd-networkd[1182]: eth0: Gained carrier Mar 17 18:19:57.372627 systemd[1]: Reached target network.target. Mar 17 18:19:57.378930 systemd[1]: Starting iscsiuio.service... Mar 17 18:19:57.387322 systemd-networkd[1182]: eth0: DHCPv4 address 172.31.18.98/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 17 18:19:57.393037 systemd[1]: Started iscsiuio.service. Mar 17 18:19:57.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:57.397562 systemd[1]: Starting iscsid.service... Mar 17 18:19:57.405835 iscsid[1187]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:19:57.405835 iscsid[1187]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Mar 17 18:19:57.405835 iscsid[1187]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:19:57.405835 iscsid[1187]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:19:57.405835 iscsid[1187]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:19:57.424557 iscsid[1187]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:19:57.429965 systemd[1]: Started iscsid.service. 
Mar 17 18:19:57.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:57.446760 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:19:57.469567 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:19:57.472708 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:19:57.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:57.475736 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:19:57.479043 systemd[1]: Reached target remote-fs.target. Mar 17 18:19:57.483278 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:19:57.502131 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:19:57.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:57.940710 ignition[1118]: Ignition 2.14.0 Mar 17 18:19:57.940738 ignition[1118]: Stage: fetch-offline Mar 17 18:19:57.941089 ignition[1118]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:19:57.941211 ignition[1118]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Mar 17 18:19:57.965757 ignition[1118]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 18:19:57.968134 ignition[1118]: Ignition finished successfully Mar 17 18:19:57.971530 systemd[1]: Finished ignition-fetch-offline.service. 
Mar 17 18:19:57.986324 kernel: kauditd_printk_skb: 18 callbacks suppressed Mar 17 18:19:57.986362 kernel: audit: type=1130 audit(1742235597.972:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:57.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:57.976689 systemd[1]: Starting ignition-fetch.service... Mar 17 18:19:57.994890 ignition[1206]: Ignition 2.14.0 Mar 17 18:19:57.994920 ignition[1206]: Stage: fetch Mar 17 18:19:57.995256 ignition[1206]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:19:57.995316 ignition[1206]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Mar 17 18:19:58.014954 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 18:19:58.017178 ignition[1206]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 18:19:58.029220 ignition[1206]: INFO : PUT result: OK Mar 17 18:19:58.032809 ignition[1206]: DEBUG : parsed url from cmdline: "" Mar 17 18:19:58.032809 ignition[1206]: INFO : no config URL provided Mar 17 18:19:58.032809 ignition[1206]: INFO : reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:19:58.032809 ignition[1206]: INFO : no config at "/usr/lib/ignition/user.ign" Mar 17 18:19:58.040359 ignition[1206]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 18:19:58.040359 ignition[1206]: INFO : PUT result: OK Mar 17 18:19:58.040359 ignition[1206]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Mar 17 18:19:58.046273 ignition[1206]: INFO : GET result: OK Mar 17 18:19:58.047902 ignition[1206]: DEBUG : 
parsing config with SHA512: 4992241d4247cc36787169448df070175665c348362e1bd1d2cab71ed665bb44ec0d2ff80b1e98c10e058769d108ce9cfeee8cb84f87eac9a9a276b0a760bf6a Mar 17 18:19:58.058427 unknown[1206]: fetched base config from "system" Mar 17 18:19:58.058456 unknown[1206]: fetched base config from "system" Mar 17 18:19:58.058470 unknown[1206]: fetched user config from "aws" Mar 17 18:19:58.065629 ignition[1206]: fetch: fetch complete Mar 17 18:19:58.066522 ignition[1206]: fetch: fetch passed Mar 17 18:19:58.068458 ignition[1206]: Ignition finished successfully Mar 17 18:19:58.072448 systemd[1]: Finished ignition-fetch.service. Mar 17 18:19:58.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:58.076650 systemd[1]: Starting ignition-kargs.service... Mar 17 18:19:58.085273 kernel: audit: type=1130 audit(1742235598.074:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:19:58.098810 ignition[1212]: Ignition 2.14.0 Mar 17 18:19:58.098837 ignition[1212]: Stage: kargs Mar 17 18:19:58.099136 ignition[1212]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:19:58.099220 ignition[1212]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Mar 17 18:19:58.112964 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 18:19:58.115446 ignition[1212]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 18:19:58.118510 ignition[1212]: INFO : PUT result: OK Mar 17 18:19:58.123843 ignition[1212]: kargs: kargs passed Mar 17 18:19:58.125307 ignition[1212]: Ignition finished successfully Mar 17 18:19:58.128526 systemd[1]: Finished ignition-kargs.service. Mar 17 18:19:58.138299 kernel: audit: type=1130 audit(1742235598.129:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:58.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:58.131656 systemd[1]: Starting ignition-disks.service... 
Mar 17 18:19:58.148125 ignition[1218]: Ignition 2.14.0 Mar 17 18:19:58.148169 ignition[1218]: Stage: disks Mar 17 18:19:58.148487 ignition[1218]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:19:58.148546 ignition[1218]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Mar 17 18:19:58.167547 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 18:19:58.169796 ignition[1218]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 18:19:58.172732 ignition[1218]: INFO : PUT result: OK Mar 17 18:19:58.177811 ignition[1218]: disks: disks passed Mar 17 18:19:58.177954 ignition[1218]: Ignition finished successfully Mar 17 18:19:58.181231 systemd[1]: Finished ignition-disks.service. Mar 17 18:19:58.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:58.185389 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:19:58.197035 kernel: audit: type=1130 audit(1742235598.182:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:58.192338 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:19:58.193930 systemd[1]: Reached target local-fs.target. Mar 17 18:19:58.195452 systemd[1]: Reached target sysinit.target. Mar 17 18:19:58.196945 systemd[1]: Reached target basic.target. Mar 17 18:19:58.206866 systemd[1]: Starting systemd-fsck-root.service... Mar 17 18:19:58.255533 systemd-fsck[1226]: ROOT: clean, 623/553520 files, 56021/553472 blocks Mar 17 18:19:58.264093 systemd[1]: Finished systemd-fsck-root.service. 
Mar 17 18:19:58.275366 kernel: audit: type=1130 audit(1742235598.264:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:58.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:58.267241 systemd[1]: Mounting sysroot.mount... Mar 17 18:19:58.297196 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:19:58.299674 systemd[1]: Mounted sysroot.mount. Mar 17 18:19:58.301350 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:19:58.315722 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:19:58.317921 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 18:19:58.318004 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:19:58.318056 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:19:58.333301 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:19:58.357126 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:19:58.364993 systemd[1]: Starting initrd-setup-root.service... 
Mar 17 18:19:58.384908 initrd-setup-root[1248]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:19:58.388194 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1243) Mar 17 18:19:58.395410 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:19:58.395488 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 17 18:19:58.395513 kernel: BTRFS info (device nvme0n1p6): has skinny extents Mar 17 18:19:58.404549 initrd-setup-root[1272]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:19:58.412858 initrd-setup-root[1280]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:19:58.422295 initrd-setup-root[1288]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:19:58.431203 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 17 18:19:58.442279 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:19:58.640682 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:19:58.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:58.645085 systemd[1]: Starting ignition-mount.service... Mar 17 18:19:58.654200 kernel: audit: type=1130 audit(1742235598.642:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:58.655512 systemd[1]: Starting sysroot-boot.service... Mar 17 18:19:58.667491 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Mar 17 18:19:58.667660 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Mar 17 18:19:58.704005 systemd[1]: Finished sysroot-boot.service. 
Mar 17 18:19:58.713570 kernel: audit: type=1130 audit(1742235598.703:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:58.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:58.745380 ignition[1311]: INFO : Ignition 2.14.0 Mar 17 18:19:58.745380 ignition[1311]: INFO : Stage: mount Mar 17 18:19:58.748737 ignition[1311]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:19:58.748737 ignition[1311]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Mar 17 18:19:58.765465 ignition[1311]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 18:19:58.767924 ignition[1311]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 18:19:58.771253 ignition[1311]: INFO : PUT result: OK Mar 17 18:19:58.775900 ignition[1311]: INFO : mount: mount passed Mar 17 18:19:58.775900 ignition[1311]: INFO : Ignition finished successfully Mar 17 18:19:58.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:58.778187 systemd[1]: Finished ignition-mount.service. Mar 17 18:19:58.792354 kernel: audit: type=1130 audit(1742235598.779:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:58.792138 systemd[1]: Starting ignition-files.service... Mar 17 18:19:58.809965 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Mar 17 18:19:58.835200 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1318) Mar 17 18:19:58.841212 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:19:58.841288 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 17 18:19:58.841313 kernel: BTRFS info (device nvme0n1p6): has skinny extents Mar 17 18:19:58.858196 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 17 18:19:58.863605 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:19:58.883667 ignition[1337]: INFO : Ignition 2.14.0 Mar 17 18:19:58.883667 ignition[1337]: INFO : Stage: files Mar 17 18:19:58.887413 ignition[1337]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:19:58.887413 ignition[1337]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Mar 17 18:19:58.901633 ignition[1337]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 18:19:58.904388 ignition[1337]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 18:19:58.907733 ignition[1337]: INFO : PUT result: OK Mar 17 18:19:58.914006 ignition[1337]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:19:58.919514 ignition[1337]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:19:58.919514 ignition[1337]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:19:58.965133 ignition[1337]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:19:58.967950 ignition[1337]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:19:58.973094 unknown[1337]: wrote ssh authorized keys file for user: core Mar 17 18:19:58.975404 ignition[1337]: INFO : files: ensureUsers: op(2): [finished] adding ssh 
keys to user "core" Mar 17 18:19:58.979116 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:19:58.982769 ignition[1337]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 17 18:19:59.029507 systemd-networkd[1182]: eth0: Gained IPv6LL Mar 17 18:19:59.094805 ignition[1337]: INFO : GET result: OK Mar 17 18:19:59.251650 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:19:59.256127 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:19:59.256127 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:19:59.256127 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Mar 17 18:19:59.256127 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:19:59.275563 ignition[1337]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2036433748" Mar 17 18:19:59.278455 ignition[1337]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2036433748": device or resource busy Mar 17 18:19:59.278455 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2036433748", trying btrfs: device or resource busy Mar 17 18:19:59.278455 ignition[1337]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2036433748" Mar 17 18:19:59.288547 ignition[1337]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2036433748" Mar 17 18:19:59.288547 ignition[1337]: INFO : op(3): [started] unmounting "/mnt/oem2036433748" Mar 17 18:19:59.288547 ignition[1337]: INFO : op(3): 
[finished] unmounting "/mnt/oem2036433748"
Mar 17 18:19:59.288547 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Mar 17 18:19:59.288547 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:19:59.288547 ignition[1337]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 18:19:59.784236 ignition[1337]: INFO : GET result: OK
Mar 17 18:19:59.957730 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:19:59.961203 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 18:19:59.964523 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:19:59.967780 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:19:59.972821 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:19:59.972821 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:19:59.980071 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:19:59.980071 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:19:59.986818 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:19:59.986818 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 18:19:59.995038 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 18:19:59.999819 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Mar 17 18:20:00.003379 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:20:00.015795 ignition[1337]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2390836016"
Mar 17 18:20:00.015795 ignition[1337]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2390836016": device or resource busy
Mar 17 18:20:00.015795 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2390836016", trying btrfs: device or resource busy
Mar 17 18:20:00.015795 ignition[1337]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2390836016"
Mar 17 18:20:00.015795 ignition[1337]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2390836016"
Mar 17 18:20:00.015795 ignition[1337]: INFO : op(6): [started] unmounting "/mnt/oem2390836016"
Mar 17 18:20:00.015795 ignition[1337]: INFO : op(6): [finished] unmounting "/mnt/oem2390836016"
Mar 17 18:20:00.015795 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Mar 17 18:20:00.015795 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Mar 17 18:20:00.015795 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:20:00.050484 systemd[1]: mnt-oem2390836016.mount: Deactivated successfully.
Mar 17 18:20:00.066440 ignition[1337]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3363834919"
Mar 17 18:20:00.069257 ignition[1337]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3363834919": device or resource busy
Mar 17 18:20:00.069257 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3363834919", trying btrfs: device or resource busy
Mar 17 18:20:00.069257 ignition[1337]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3363834919"
Mar 17 18:20:00.078500 ignition[1337]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3363834919"
Mar 17 18:20:00.078500 ignition[1337]: INFO : op(9): [started] unmounting "/mnt/oem3363834919"
Mar 17 18:20:00.078500 ignition[1337]: INFO : op(9): [finished] unmounting "/mnt/oem3363834919"
Mar 17 18:20:00.085324 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Mar 17 18:20:00.085324 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 18:20:00.085324 ignition[1337]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Mar 17 18:20:00.352252 ignition[1337]: INFO : GET result: OK
Mar 17 18:20:00.845378 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 18:20:00.852254 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Mar 17 18:20:00.852254 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:20:00.869061 ignition[1337]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3582337178"
Mar 17 18:20:00.871970 ignition[1337]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3582337178": device or resource busy
Mar 17 18:20:00.875114 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3582337178", trying btrfs: device or resource busy
Mar 17 18:20:00.878618 ignition[1337]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3582337178"
Mar 17 18:20:00.884207 ignition[1337]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3582337178"
Mar 17 18:20:00.884207 ignition[1337]: INFO : op(c): [started] unmounting "/mnt/oem3582337178"
Mar 17 18:20:00.884207 ignition[1337]: INFO : op(c): [finished] unmounting "/mnt/oem3582337178"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(13): [started] processing unit "nvidia.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(17): [started] setting preset to enabled for "amazon-ssm-agent.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Mar 17 18:20:00.884207 ignition[1337]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service"
Mar 17 18:20:00.983744 ignition[1337]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service"
Mar 17 18:20:00.983744 ignition[1337]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 18:20:00.983744 ignition[1337]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 18:20:00.983744 ignition[1337]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:20:00.983744 ignition[1337]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:20:00.983744 ignition[1337]: INFO : files: files passed
Mar 17 18:20:00.983744 ignition[1337]: INFO : Ignition finished successfully
Mar 17 18:20:00.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:00.918257 systemd[1]: mnt-oem3582337178.mount: Deactivated successfully.
Mar 17 18:20:00.995755 systemd[1]: Finished ignition-files.service.
Mar 17 18:20:01.010143 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Mar 17 18:20:01.027344 kernel: audit: type=1130 audit(1742235600.997:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.027498 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Mar 17 18:20:01.032502 systemd[1]: Starting ignition-quench.service...
Mar 17 18:20:01.039599 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 18:20:01.041647 systemd[1]: Finished ignition-quench.service.
Mar 17 18:20:01.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.051259 kernel: audit: type=1130 audit(1742235601.040:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.055696 initrd-setup-root-after-ignition[1363]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:20:01.060113 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Mar 17 18:20:01.063072 systemd[1]: Reached target ignition-complete.target.
Mar 17 18:20:01.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.067222 systemd[1]: Starting initrd-parse-etc.service...
Mar 17 18:20:01.098258 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 18:20:01.099721 systemd[1]: Finished initrd-parse-etc.service.
Mar 17 18:20:01.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.103601 systemd[1]: Reached target initrd-fs.target.
Mar 17 18:20:01.105348 systemd[1]: Reached target initrd.target.
Mar 17 18:20:01.106645 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Mar 17 18:20:01.108526 systemd[1]: Starting dracut-pre-pivot.service...
Mar 17 18:20:01.141214 systemd[1]: Finished dracut-pre-pivot.service.
Mar 17 18:20:01.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.145855 systemd[1]: Starting initrd-cleanup.service...
Mar 17 18:20:01.167782 systemd[1]: Stopped target nss-lookup.target.
Mar 17 18:20:01.171426 systemd[1]: Stopped target remote-cryptsetup.target.
Mar 17 18:20:01.175385 systemd[1]: Stopped target timers.target.
Mar 17 18:20:01.178733 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 18:20:01.181028 systemd[1]: Stopped dracut-pre-pivot.service.
Mar 17 18:20:01.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.184854 systemd[1]: Stopped target initrd.target.
Mar 17 18:20:01.187961 systemd[1]: Stopped target basic.target.
Mar 17 18:20:01.190924 systemd[1]: Stopped target ignition-complete.target.
Mar 17 18:20:01.194414 systemd[1]: Stopped target ignition-diskful.target.
Mar 17 18:20:01.197789 systemd[1]: Stopped target initrd-root-device.target.
Mar 17 18:20:01.201282 systemd[1]: Stopped target remote-fs.target.
Mar 17 18:20:01.204442 systemd[1]: Stopped target remote-fs-pre.target.
Mar 17 18:20:01.207805 systemd[1]: Stopped target sysinit.target.
Mar 17 18:20:01.211003 systemd[1]: Stopped target local-fs.target.
Mar 17 18:20:01.214176 systemd[1]: Stopped target local-fs-pre.target.
Mar 17 18:20:01.217492 systemd[1]: Stopped target swap.target.
Mar 17 18:20:01.220450 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 18:20:01.222627 systemd[1]: Stopped dracut-pre-mount.service.
Mar 17 18:20:01.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.226080 systemd[1]: Stopped target cryptsetup.target.
Mar 17 18:20:01.229356 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 18:20:01.231484 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 18:20:01.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.234935 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 18:20:01.237469 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 18:20:01.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.241505 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 18:20:01.241823 systemd[1]: Stopped ignition-files.service.
Mar 17 18:20:01.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.248639 systemd[1]: Stopping ignition-mount.service...
Mar 17 18:20:01.262057 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 18:20:01.263825 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 18:20:01.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.275915 ignition[1376]: INFO : Ignition 2.14.0
Mar 17 18:20:01.275915 ignition[1376]: INFO : Stage: umount
Mar 17 18:20:01.275915 ignition[1376]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:20:01.275915 ignition[1376]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:20:01.270448 systemd[1]: Stopping sysroot-boot.service...
Mar 17 18:20:01.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.271925 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 18:20:01.272281 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 18:20:01.277109 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 18:20:01.277712 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 18:20:01.305286 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 18:20:01.305893 systemd[1]: Finished initrd-cleanup.service.
Mar 17 18:20:01.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.319399 ignition[1376]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:20:01.322148 ignition[1376]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:20:01.325326 ignition[1376]: INFO : PUT result: OK
Mar 17 18:20:01.331761 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 18:20:01.334640 ignition[1376]: INFO : umount: umount passed
Mar 17 18:20:01.334640 ignition[1376]: INFO : Ignition finished successfully
Mar 17 18:20:01.339942 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 18:20:01.340300 systemd[1]: Stopped ignition-mount.service.
Mar 17 18:20:01.345369 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 18:20:01.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.345484 systemd[1]: Stopped ignition-disks.service.
Mar 17 18:20:01.347253 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 18:20:01.347356 systemd[1]: Stopped ignition-kargs.service.
Mar 17 18:20:01.349275 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 18:20:01.349383 systemd[1]: Stopped ignition-fetch.service.
Mar 17 18:20:01.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.359929 systemd[1]: Stopped target network.target.
Mar 17 18:20:01.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.361471 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 18:20:01.361591 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 18:20:01.363824 systemd[1]: Stopped target paths.target.
Mar 17 18:20:01.366229 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 18:20:01.373138 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 18:20:01.376512 systemd[1]: Stopped target slices.target.
Mar 17 18:20:01.379334 systemd[1]: Stopped target sockets.target.
Mar 17 18:20:01.382360 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 18:20:01.382448 systemd[1]: Closed iscsid.socket.
Mar 17 18:20:01.386489 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 18:20:01.386577 systemd[1]: Closed iscsiuio.socket.
Mar 17 18:20:01.389685 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 18:20:01.393086 systemd[1]: Stopped ignition-setup.service.
Mar 17 18:20:01.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.396366 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:20:01.399818 systemd[1]: Stopping systemd-resolved.service...
Mar 17 18:20:01.403272 systemd-networkd[1182]: eth0: DHCPv6 lease lost
Mar 17 18:20:01.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.403456 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 18:20:01.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.403697 systemd[1]: Stopped sysroot-boot.service.
Mar 17 18:20:01.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.415000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 18:20:01.409387 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 18:20:01.419000 audit: BPF prog-id=9 op=UNLOAD
Mar 17 18:20:01.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.409683 systemd[1]: Stopped systemd-resolved.service.
Mar 17 18:20:01.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.413652 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:20:01.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.414005 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:20:01.416882 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 18:20:01.416960 systemd[1]: Closed systemd-networkd.socket.
Mar 17 18:20:01.420138 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 18:20:01.421359 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 18:20:01.425232 systemd[1]: Stopping network-cleanup.service...
Mar 17 18:20:01.428660 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 18:20:01.428803 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 18:20:01.430657 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:20:01.430774 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:20:01.433605 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 18:20:01.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.433723 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 18:20:01.444102 systemd[1]: Stopping systemd-udevd.service...
Mar 17 18:20:01.447732 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 18:20:01.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.470002 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 18:20:01.471763 systemd[1]: Stopped network-cleanup.service.
Mar 17 18:20:01.478084 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 18:20:01.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.478810 systemd[1]: Stopped systemd-udevd.service.
Mar 17 18:20:01.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.491558 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 18:20:01.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.491656 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 18:20:01.495089 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 18:20:01.495372 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 18:20:01.498144 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 18:20:01.498283 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 18:20:01.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:20:01.500605 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 18:20:01.500714 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 18:20:01.503496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:20:01.503605 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 18:20:01.507768 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 18:20:01.515321 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:20:01.515447 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 18:20:01.527668 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 18:20:01.527890 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 18:20:01.530541 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 18:20:01.535246 systemd[1]: Starting initrd-switch-root.service...
Mar 17 18:20:01.596390 systemd-journald[309]: Received SIGTERM from PID 1 (n/a).
Mar 17 18:20:01.596451 iscsid[1187]: iscsid shutting down.
Mar 17 18:20:01.554311 systemd[1]: Switching root.
Mar 17 18:20:01.599390 systemd-journald[309]: Journal stopped
Mar 17 18:20:07.952629 kernel: SELinux:  Class mctp_socket not defined in policy.
Mar 17 18:20:07.952764 kernel: SELinux:  Class anon_inode not defined in policy.
Mar 17 18:20:07.952798 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 18:20:07.952829 kernel: SELinux:  policy capability network_peer_controls=1
Mar 17 18:20:07.952860 kernel: SELinux:  policy capability open_perms=1
Mar 17 18:20:07.952892 kernel: SELinux:  policy capability extended_socket_class=1
Mar 17 18:20:07.952925 kernel: SELinux:  policy capability always_check_network=0
Mar 17 18:20:07.952955 kernel: SELinux:  policy capability cgroup_seclabel=1
Mar 17 18:20:07.952990 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Mar 17 18:20:07.953022 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Mar 17 18:20:07.953055 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Mar 17 18:20:07.953089 systemd[1]: Successfully loaded SELinux policy in 128.735ms.
Mar 17 18:20:07.953176 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.519ms.
Mar 17 18:20:07.953219 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:20:07.953282 systemd[1]: Detected virtualization amazon.
Mar 17 18:20:07.953319 systemd[1]: Detected architecture arm64.
Mar 17 18:20:07.953351 systemd[1]: Detected first boot.
Mar 17 18:20:07.953387 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:20:07.953420 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:20:07.953451 kernel: kauditd_printk_skb: 45 callbacks suppressed
Mar 17 18:20:07.953488 kernel: audit: type=1400 audit(1742235603.056:84): avc:  denied  { associate } for  pid=1409 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Mar 17 18:20:07.953524 kernel: audit: type=1300 audit(1742235603.056:84): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458b2 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1392 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:20:07.953558 kernel: audit: type=1327 audit(1742235603.056:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:20:07.953592 kernel: audit: type=1400 audit(1742235603.072:85): avc:  denied  { associate } for  pid=1409 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Mar 17 18:20:07.953626 kernel: audit: type=1300 audit(1742235603.072:85): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145989 a2=1ed a3=0 items=2 ppid=1392 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:20:07.953657 kernel: audit: type=1307 audit(1742235603.072:85): cwd="/"
Mar 17 18:20:07.953689 kernel: audit: type=1302 audit(1742235603.072:85): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:20:07.953741 kernel: audit: type=1302 audit(1742235603.072:85): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:20:07.953774 kernel: audit: type=1327 audit(1742235603.072:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:20:07.953809 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:20:07.953844 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:20:07.953877 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:20:07.953919 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:20:07.953951 kernel: audit: type=1334 audit(1742235607.511:86): prog-id=12 op=LOAD
Mar 17 18:20:07.953989 systemd[1]: iscsiuio.service: Deactivated successfully.
Mar 17 18:20:07.954021 systemd[1]: Stopped iscsiuio.service.
Mar 17 18:20:07.954050 systemd[1]: iscsid.service: Deactivated successfully.
Mar 17 18:20:07.954090 systemd[1]: Stopped iscsid.service.
Mar 17 18:20:07.954124 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 18:20:07.954210 systemd[1]: Stopped initrd-switch-root.service.
Mar 17 18:20:07.954251 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:20:07.954287 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:20:07.954319 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:20:07.954352 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Mar 17 18:20:07.954384 systemd[1]: Created slice system-getty.slice.
Mar 17 18:20:07.954418 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:20:07.954449 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:20:07.954482 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 18:20:07.954514 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 18:20:07.954543 systemd[1]: Created slice user.slice.
Mar 17 18:20:07.954575 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:20:07.954604 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 18:20:07.954636 systemd[1]: Set up automount boot.automount.
Mar 17 18:20:07.954677 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 18:20:07.954711 systemd[1]: Stopped target initrd-switch-root.target.
Mar 17 18:20:07.954741 systemd[1]: Stopped target initrd-fs.target.
Mar 17 18:20:07.954771 systemd[1]: Stopped target initrd-root-fs.target.
Mar 17 18:20:07.954800 systemd[1]: Reached target integritysetup.target.
Mar 17 18:20:07.954831 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:20:07.954862 systemd[1]: Reached target remote-fs.target.
Mar 17 18:20:07.954894 systemd[1]: Reached target slices.target.
Mar 17 18:20:07.954924 systemd[1]: Reached target swap.target.
Mar 17 18:20:07.954954 systemd[1]: Reached target torcx.target.
Mar 17 18:20:07.954989 systemd[1]: Reached target veritysetup.target.
Mar 17 18:20:07.955020 systemd[1]: Listening on systemd-coredump.socket.
Mar 17 18:20:07.955049 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 18:20:07.955080 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:20:07.955109 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:20:07.955141 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:20:07.955228 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 18:20:07.955264 systemd[1]: Mounting dev-hugepages.mount...
Mar 17 18:20:07.955296 systemd[1]: Mounting dev-mqueue.mount...
Mar 17 18:20:07.955329 systemd[1]: Mounting media.mount...
Mar 17 18:20:07.955367 systemd[1]: Mounting sys-kernel-debug.mount...
Mar 17 18:20:07.955396 systemd[1]: Mounting sys-kernel-tracing.mount...
Mar 17 18:20:07.955425 systemd[1]: Mounting tmp.mount...
Mar 17 18:20:07.955459 systemd[1]: Starting flatcar-tmpfiles.service...
Mar 17 18:20:07.955488 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:20:07.955519 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:20:07.955550 systemd[1]: Starting modprobe@configfs.service...
Mar 17 18:20:07.955582 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:20:07.955612 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:20:07.955647 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:20:07.955680 systemd[1]: Starting modprobe@fuse.service...
Mar 17 18:20:07.955710 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:20:07.960709 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 18:20:07.960755 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 18:20:07.960787 systemd[1]: Stopped systemd-fsck-root.service.
Mar 17 18:20:07.960818 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 18:20:07.960848 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 18:20:07.960881 systemd[1]: Stopped systemd-journald.service.
Mar 17 18:20:07.960919 systemd[1]: Starting systemd-journald.service...
Mar 17 18:20:07.960948 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:20:07.960978 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:20:07.961007 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:20:07.961038 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:20:07.961070 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:20:07.961101 kernel: loop: module loaded Mar 17 18:20:07.961130 systemd[1]: Stopped verity-setup.service. Mar 17 18:20:07.961184 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:20:07.961221 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:20:07.961252 systemd[1]: Mounted media.mount. Mar 17 18:20:07.961280 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:20:07.961312 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:20:07.961344 systemd[1]: Mounted tmp.mount. Mar 17 18:20:07.961374 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:20:07.961408 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:20:07.961438 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:20:07.961466 kernel: fuse: init (API version 7.34) Mar 17 18:20:07.961496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:20:07.961526 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:20:07.961555 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:20:07.961585 systemd[1]: Finished modprobe@drm.service. Mar 17 18:20:07.961617 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:20:07.961650 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:20:07.961680 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:20:07.961723 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:20:07.961758 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:20:07.961788 systemd[1]: Finished modprobe@loop.service. 
Mar 17 18:20:07.961825 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:20:07.961854 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:20:07.961885 systemd[1]: Reached target network-pre.target. Mar 17 18:20:07.961917 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:20:07.961946 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:20:07.961976 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:20:07.962009 systemd-journald[1486]: Journal started Mar 17 18:20:07.962108 systemd-journald[1486]: Runtime Journal (/run/log/journal/ec2001f1db2210a5455c8cc583ed437c) is 8.0M, max 75.4M, 67.4M free. Mar 17 18:20:02.571000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:20:02.796000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:20:02.796000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:20:02.797000 audit: BPF prog-id=10 op=LOAD Mar 17 18:20:02.797000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:20:02.797000 audit: BPF prog-id=11 op=LOAD Mar 17 18:20:02.797000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:20:03.056000 audit[1409]: AVC avc: denied { associate } for pid=1409 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:20:03.056000 audit[1409]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458b2 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1392 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:20:03.056000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:20:03.072000 audit[1409]: AVC avc: denied { associate } for pid=1409 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:20:03.072000 audit[1409]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145989 a2=1ed a3=0 items=2 ppid=1392 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:20:03.072000 audit: CWD cwd="/" Mar 17 18:20:03.072000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:20:03.072000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:20:03.072000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:20:07.511000 audit: BPF prog-id=12 op=LOAD Mar 17 18:20:07.513000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:20:07.513000 audit: BPF prog-id=13 op=LOAD
Mar 17 18:20:07.513000 audit: BPF prog-id=14 op=LOAD Mar 17 18:20:07.513000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:20:07.513000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:20:07.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.527000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:20:07.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Mar 17 18:20:07.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.779000 audit: BPF prog-id=15 op=LOAD Mar 17 18:20:07.780000 audit: BPF prog-id=16 op=LOAD Mar 17 18:20:07.780000 audit: BPF prog-id=17 op=LOAD Mar 17 18:20:07.780000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:20:07.780000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:20:07.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:07.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:07.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:07.941000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:20:07.941000 audit[1486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffff8e5cfd0 a2=4000 a3=1 items=0 ppid=1 pid=1486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:20:07.941000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:20:03.044851 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:20:07.508199 systemd[1]: Queued start job for default target multi-user.target. 
Mar 17 18:20:03.054420 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:20:07.508221 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Mar 17 18:20:03.054484 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:20:07.518982 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:20:03.054574 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:20:03.054602 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:20:03.054681 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:20:03.054717 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:20:03.055247 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:20:03.055351 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:20:03.055388 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:20:03.056545 
/usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:20:03.056630 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:20:03.056677 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:20:03.056720 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:20:03.056770 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:20:03.056822 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:20:06.591453 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:06Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:20:06.591978 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:06Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc 
/bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:20:06.592247 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:06Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:20:06.592686 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:06Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:20:06.592796 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:06Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:20:06.592931 /usr/lib/systemd/system-generators/torcx-generator[1409]: time="2025-03-17T18:20:06Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:20:07.981261 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:20:07.988215 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:20:07.995642 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:20:07.999210 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:20:08.007246 systemd[1]: Started systemd-journald.service. 
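[Annotation] The "system state sealed" entry above shows torcx-generator writing its result to /run/metadata/torcx, rendered in the log as bracketed KEY="value" pairs. A rough Python sketch of parsing that rendering exactly as it appears above (shlex strips the shell-style quoting):

```python
import shlex

# The content= field from the "system state sealed" log entry above.
content = ('[TORCX_LOWER_PROFILES="vendor" TORCX_UPPER_PROFILE="" '
           'TORCX_PROFILE_PATH="/run/torcx/profile.json" '
           'TORCX_BINDIR="/run/torcx/bin" TORCX_UNPACKDIR="/run/torcx/unpack"]')
pairs = dict(tok.split("=", 1) for tok in shlex.split(content.strip("[]")))
print(pairs["TORCX_BINDIR"])        # → /run/torcx/bin
print(pairs["TORCX_UPPER_PROFILE"]) # → (empty: no user profile was selected)
```

This matches the earlier generator messages: only the vendor profile was applied ("no next profile" / "upper profile (user)=" empty), so TORCX_UPPER_PROFILE is sealed as the empty string.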
Mar 17 18:20:08.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.009416 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:20:08.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.011782 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:20:08.013684 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:20:08.018057 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:20:08.024080 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:20:08.034032 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:20:08.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.036578 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:20:08.046732 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:20:08.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.050804 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:20:08.075551 systemd-journald[1486]: Time spent on flushing to /var/log/journal/ec2001f1db2210a5455c8cc583ed437c is 75.561ms for 1137 entries. Mar 17 18:20:08.075551 systemd-journald[1486]: System Journal (/var/log/journal/ec2001f1db2210a5455c8cc583ed437c) is 8.0M, max 195.6M, 187.6M free. 
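[Annotation] For scale, the journald flush statistics reported above (75.561 ms spent on 1137 entries) average out to roughly 66 µs per entry:

```python
# Figures from systemd-journald's flush report above.
ms, entries = 75.561, 1137
print(round(ms / entries * 1000, 1), "us per entry")  # → 66.5 us per entry
```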
Mar 17 18:20:08.167876 systemd-journald[1486]: Received client request to flush runtime journal. Mar 17 18:20:08.167991 kernel: kauditd_printk_skb: 43 callbacks suppressed Mar 17 18:20:08.168053 kernel: audit: type=1130 audit(1742235608.107:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.106434 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:20:08.170320 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:20:08.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.181195 kernel: audit: type=1130 audit(1742235608.171:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.181611 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:20:08.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.185642 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:20:08.196285 kernel: audit: type=1130 audit(1742235608.182:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:08.204444 udevadm[1529]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:20:08.306030 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:20:08.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.316190 kernel: audit: type=1130 audit(1742235608.306:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.863000 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:20:08.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.878441 kernel: audit: type=1130 audit(1742235608.863:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:08.878549 kernel: audit: type=1334 audit(1742235608.871:133): prog-id=18 op=LOAD Mar 17 18:20:08.878616 kernel: audit: type=1334 audit(1742235608.873:134): prog-id=19 op=LOAD Mar 17 18:20:08.878664 kernel: audit: type=1334 audit(1742235608.873:135): prog-id=7 op=UNLOAD Mar 17 18:20:08.878710 kernel: audit: type=1334 audit(1742235608.873:136): prog-id=8 op=UNLOAD Mar 17 18:20:08.871000 audit: BPF prog-id=18 op=LOAD Mar 17 18:20:08.873000 audit: BPF prog-id=19 op=LOAD Mar 17 18:20:08.873000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:20:08.873000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:20:08.875853 systemd[1]: Starting systemd-udevd.service... Mar 17 18:20:08.917922 systemd-udevd[1530]: Using default interface naming scheme 'v252'. Mar 17 18:20:08.984995 systemd[1]: Started systemd-udevd.service. Mar 17 18:20:08.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.994180 kernel: audit: type=1130 audit(1742235608.985:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:08.994000 audit: BPF prog-id=20 op=LOAD Mar 17 18:20:08.996868 systemd[1]: Starting systemd-networkd.service... Mar 17 18:20:09.005000 audit: BPF prog-id=21 op=LOAD Mar 17 18:20:09.005000 audit: BPF prog-id=22 op=LOAD Mar 17 18:20:09.005000 audit: BPF prog-id=23 op=LOAD Mar 17 18:20:09.007908 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:20:09.075913 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Mar 17 18:20:09.086192 systemd[1]: Started systemd-userdbd.service. 
Mar 17 18:20:09.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:09.104257 (udev-worker)[1532]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:20:09.259457 systemd-networkd[1534]: lo: Link UP Mar 17 18:20:09.259483 systemd-networkd[1534]: lo: Gained carrier Mar 17 18:20:09.260429 systemd-networkd[1534]: Enumeration completed Mar 17 18:20:09.260597 systemd[1]: Started systemd-networkd.service. Mar 17 18:20:09.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:09.264568 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:20:09.267835 systemd-networkd[1534]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:20:09.273198 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:20:09.274429 systemd-networkd[1534]: eth0: Link UP Mar 17 18:20:09.274771 systemd-networkd[1534]: eth0: Gained carrier Mar 17 18:20:09.281407 systemd-networkd[1534]: eth0: DHCPv4 address 172.31.18.98/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 17 18:20:09.470635 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:20:09.473319 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:20:09.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:09.477567 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:20:09.546090 lvm[1641]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
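[Annotation] The DHCPv4 lease logged above is internally consistent: the gateway 172.31.16.1 lies inside the /20 that contains eth0's address 172.31.18.98, which Python's ipaddress module can confirm:

```python
import ipaddress

# eth0's DHCPv4 lease from the systemd-networkd message above.
iface = ipaddress.ip_interface("172.31.18.98/20")
print(iface.network)                                         # → 172.31.16.0/20
print(ipaddress.ip_address("172.31.16.1") in iface.network)  # → True
```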
Mar 17 18:20:09.587821 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:20:09.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:09.589824 systemd[1]: Reached target cryptsetup.target. Mar 17 18:20:09.593748 systemd[1]: Starting lvm2-activation.service... Mar 17 18:20:09.602455 lvm[1642]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:20:09.643010 systemd[1]: Finished lvm2-activation.service. Mar 17 18:20:09.644839 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:20:09.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:09.646506 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:20:09.646552 systemd[1]: Reached target local-fs.target. Mar 17 18:20:09.648302 systemd[1]: Reached target machines.target. Mar 17 18:20:09.652050 systemd[1]: Starting ldconfig.service... Mar 17 18:20:09.654288 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:20:09.654431 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:20:09.656763 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:20:09.661342 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:20:09.665639 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:20:09.669907 systemd[1]: Starting systemd-sysext.service... 
Mar 17 18:20:09.679149 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1644 (bootctl) Mar 17 18:20:09.681877 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:20:09.698053 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:20:09.722100 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:20:09.722578 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:20:09.748231 kernel: loop0: detected capacity change from 0 to 194096 Mar 17 18:20:09.754370 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:20:09.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:09.849242 systemd-fsck[1655]: fsck.fat 4.2 (2021-01-31) Mar 17 18:20:09.849242 systemd-fsck[1655]: /dev/nvme0n1p1: 236 files, 117179/258078 clusters Mar 17 18:20:09.854787 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:20:09.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:09.859514 systemd[1]: Mounting boot.mount... Mar 17 18:20:09.880815 systemd[1]: Mounted boot.mount. Mar 17 18:20:09.923728 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:20:09.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:10.025191 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:20:10.053182 kernel: loop1: detected capacity change from 0 to 194096 Mar 17 18:20:10.067954 (sd-sysext)[1670]: Using extensions 'kubernetes'. Mar 17 18:20:10.069497 (sd-sysext)[1670]: Merged extensions into '/usr'. Mar 17 18:20:10.094145 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:20:10.095360 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 18:20:10.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.107007 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:20:10.112736 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:20:10.116539 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:20:10.121513 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:20:10.125604 systemd[1]: Starting modprobe@loop.service... Mar 17 18:20:10.127411 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:20:10.127721 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:20:10.134451 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:20:10.136926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:20:10.137297 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:20:10.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:10.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.140084 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:20:10.140573 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:20:10.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.143314 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:20:10.143622 systemd[1]: Finished modprobe@loop.service. Mar 17 18:20:10.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.146516 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:20:10.146709 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:20:10.148574 systemd[1]: Finished systemd-sysext.service. 
Mar 17 18:20:10.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.153645 systemd[1]: Starting ensure-sysext.service... Mar 17 18:20:10.157537 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:20:10.170594 systemd[1]: Reloading. Mar 17 18:20:10.241603 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:20:10.278909 /usr/lib/systemd/system-generators/torcx-generator[1697]: time="2025-03-17T18:20:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:20:10.278977 /usr/lib/systemd/system-generators/torcx-generator[1697]: time="2025-03-17T18:20:10Z" level=info msg="torcx already run" Mar 17 18:20:10.300706 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:20:10.332239 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:20:10.495517 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:20:10.495555 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:20:10.536098 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 17 18:20:10.682000 audit: BPF prog-id=24 op=LOAD Mar 17 18:20:10.682000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:20:10.685000 audit: BPF prog-id=25 op=LOAD Mar 17 18:20:10.685000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:20:10.685000 audit: BPF prog-id=26 op=LOAD Mar 17 18:20:10.685000 audit: BPF prog-id=27 op=LOAD Mar 17 18:20:10.686000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:20:10.686000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:20:10.691000 audit: BPF prog-id=28 op=LOAD Mar 17 18:20:10.691000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:20:10.691000 audit: BPF prog-id=29 op=LOAD Mar 17 18:20:10.691000 audit: BPF prog-id=30 op=LOAD Mar 17 18:20:10.691000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:20:10.691000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:20:10.693000 audit: BPF prog-id=31 op=LOAD Mar 17 18:20:10.693000 audit: BPF prog-id=32 op=LOAD Mar 17 18:20:10.693000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:20:10.693000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:20:10.728051 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:20:10.732289 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:20:10.736329 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:20:10.741389 systemd[1]: Starting modprobe@loop.service... Mar 17 18:20:10.744012 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:20:10.744395 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:20:10.746433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:20:10.746783 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 18:20:10.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.749741 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:20:10.750057 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:20:10.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.754190 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:20:10.754505 systemd[1]: Finished modprobe@loop.service. Mar 17 18:20:10.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.757413 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Mar 17 18:20:10.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.774319 systemd[1]: Finished ensure-sysext.service. Mar 17 18:20:10.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.780639 systemd[1]: Starting audit-rules.service... Mar 17 18:20:10.785094 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:20:10.787838 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:20:10.790410 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:20:10.795082 systemd[1]: Starting modprobe@drm.service... Mar 17 18:20:10.800962 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:20:10.815000 audit: BPF prog-id=33 op=LOAD Mar 17 18:20:10.807542 systemd[1]: Starting modprobe@loop.service... Mar 17 18:20:10.809622 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:20:10.809775 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:20:10.812482 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:20:10.820120 systemd[1]: Starting systemd-resolved.service... Mar 17 18:20:10.822000 audit: BPF prog-id=34 op=LOAD Mar 17 18:20:10.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:20:10.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.825826 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:20:10.832489 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:20:10.856000 audit[1768]: SYSTEM_BOOT pid=1768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.836013 systemd[1]: Finished clean-ca-certificates.service. Mar 17 18:20:10.838605 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Mar 17 18:20:10.838937 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:20:10.843375 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:20:10.843715 systemd[1]: Finished modprobe@drm.service. Mar 17 18:20:10.846087 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:20:10.846421 systemd[1]: Finished modprobe@loop.service. Mar 17 18:20:10.849057 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:20:10.849112 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:20:10.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.872033 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:20:10.875608 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:20:10.875931 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:20:10.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.878242 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:20:10.924658 systemd[1]: Finished systemd-journal-catalog-update.service. 
Mar 17 18:20:10.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:20:10.998275 systemd-resolved[1765]: Positive Trust Anchors: Mar 17 18:20:10.998855 systemd-resolved[1765]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:20:10.999027 systemd-resolved[1765]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:20:11.008000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:20:11.008000 audit[1780]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc567ef70 a2=420 a3=0 items=0 ppid=1756 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:20:11.008000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:20:11.010865 augenrules[1780]: No rules Mar 17 18:20:11.013354 systemd[1]: Finished audit-rules.service. Mar 17 18:20:11.024248 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:20:11.026087 systemd[1]: Reached target time-set.target. Mar 17 18:20:11.059818 systemd-resolved[1765]: Defaulting to hostname 'linux'. Mar 17 18:20:11.063363 systemd[1]: Started systemd-resolved.service. 
Mar 17 18:20:11.065288 systemd[1]: Reached target network.target. Mar 17 18:20:11.066900 systemd[1]: Reached target nss-lookup.target. Mar 17 18:20:11.126202 systemd-timesyncd[1767]: Contacted time server 208.67.72.50:123 (0.flatcar.pool.ntp.org). Mar 17 18:20:11.126332 systemd-timesyncd[1767]: Initial clock synchronization to Mon 2025-03-17 18:20:10.850534 UTC. Mar 17 18:20:11.189345 systemd-networkd[1534]: eth0: Gained IPv6LL Mar 17 18:20:11.192122 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:20:11.194286 systemd[1]: Reached target network-online.target. Mar 17 18:20:11.249428 ldconfig[1643]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:20:11.259235 systemd[1]: Finished ldconfig.service. Mar 17 18:20:11.263355 systemd[1]: Starting systemd-update-done.service... Mar 17 18:20:11.279125 systemd[1]: Finished systemd-update-done.service. Mar 17 18:20:11.281186 systemd[1]: Reached target sysinit.target. Mar 17 18:20:11.282973 systemd[1]: Started motdgen.path. Mar 17 18:20:11.284448 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:20:11.286912 systemd[1]: Started logrotate.timer. Mar 17 18:20:11.288683 systemd[1]: Started mdadm.timer. Mar 17 18:20:11.290103 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:20:11.291816 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:20:11.291877 systemd[1]: Reached target paths.target. Mar 17 18:20:11.293302 systemd[1]: Reached target timers.target. Mar 17 18:20:11.295963 systemd[1]: Listening on dbus.socket. Mar 17 18:20:11.299583 systemd[1]: Starting docker.socket... Mar 17 18:20:11.306545 systemd[1]: Listening on sshd.socket. 
Mar 17 18:20:11.308345 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:20:11.309261 systemd[1]: Listening on docker.socket. Mar 17 18:20:11.310946 systemd[1]: Reached target sockets.target. Mar 17 18:20:11.312502 systemd[1]: Reached target basic.target. Mar 17 18:20:11.314100 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:20:11.314203 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:20:11.316438 systemd[1]: Started amazon-ssm-agent.service. Mar 17 18:20:11.320684 systemd[1]: Starting containerd.service... Mar 17 18:20:11.326109 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Mar 17 18:20:11.332338 systemd[1]: Starting dbus.service... Mar 17 18:20:11.343775 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:20:11.348834 systemd[1]: Starting extend-filesystems.service... Mar 17 18:20:11.358892 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:20:11.365777 systemd[1]: Starting kubelet.service... Mar 17 18:20:11.370596 systemd[1]: Starting motdgen.service... Mar 17 18:20:11.379143 systemd[1]: Started nvidia.service. Mar 17 18:20:11.394566 systemd[1]: Starting prepare-helm.service... Mar 17 18:20:11.401370 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:20:11.406049 systemd[1]: Starting sshd-keygen.service... Mar 17 18:20:11.434993 systemd[1]: Starting systemd-logind.service... Mar 17 18:20:11.438403 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 18:20:11.438536 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:20:11.440077 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 18:20:11.455037 systemd[1]: Starting update-engine.service... Mar 17 18:20:11.462287 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:20:11.481724 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:20:11.485575 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 18:20:11.520680 jq[1806]: true Mar 17 18:20:11.532634 jq[1794]: false Mar 17 18:20:11.536386 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:20:11.536775 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:20:11.561801 tar[1811]: linux-arm64/helm Mar 17 18:20:11.568533 jq[1819]: true Mar 17 18:20:11.668105 dbus-daemon[1792]: [system] SELinux support is enabled Mar 17 18:20:11.671657 extend-filesystems[1795]: Found loop1 Mar 17 18:20:11.674061 systemd[1]: Started dbus.service. Mar 17 18:20:11.679068 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:20:11.679144 systemd[1]: Reached target system-config.target. Mar 17 18:20:11.681010 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:20:11.681071 systemd[1]: Reached target user-config.target. 
Mar 17 18:20:11.686962 extend-filesystems[1795]: Found nvme0n1 Mar 17 18:20:11.690360 extend-filesystems[1795]: Found nvme0n1p1 Mar 17 18:20:11.693781 extend-filesystems[1795]: Found nvme0n1p2 Mar 17 18:20:11.701430 extend-filesystems[1795]: Found nvme0n1p3 Mar 17 18:20:11.706061 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:20:11.706597 systemd[1]: Finished motdgen.service. Mar 17 18:20:11.709241 extend-filesystems[1795]: Found usr Mar 17 18:20:11.711054 extend-filesystems[1795]: Found nvme0n1p4 Mar 17 18:20:11.713904 extend-filesystems[1795]: Found nvme0n1p6 Mar 17 18:20:11.721972 dbus-daemon[1792]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1534 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 18:20:11.726195 extend-filesystems[1795]: Found nvme0n1p7 Mar 17 18:20:11.733585 dbus-daemon[1792]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 18:20:11.736211 extend-filesystems[1795]: Found nvme0n1p9 Mar 17 18:20:11.749298 extend-filesystems[1795]: Checking size of /dev/nvme0n1p9 Mar 17 18:20:11.777969 systemd[1]: Starting systemd-hostnamed.service... Mar 17 18:20:11.798465 bash[1847]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:20:11.799984 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 18:20:11.827374 extend-filesystems[1795]: Resized partition /dev/nvme0n1p9 Mar 17 18:20:11.835955 amazon-ssm-agent[1789]: 2025/03/17 18:20:11 Failed to load instance info from vault. RegistrationKey does not exist. 
Mar 17 18:20:11.839752 extend-filesystems[1861]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 18:20:11.865885 amazon-ssm-agent[1789]: Initializing new seelog logger Mar 17 18:20:11.866215 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Mar 17 18:20:11.867201 amazon-ssm-agent[1789]: New Seelog Logger Creation Complete Mar 17 18:20:11.869969 amazon-ssm-agent[1789]: 2025/03/17 18:20:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 18:20:11.870189 amazon-ssm-agent[1789]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 18:20:11.871821 amazon-ssm-agent[1789]: 2025/03/17 18:20:11 processing appconfig overrides Mar 17 18:20:11.900851 update_engine[1805]: I0317 18:20:11.900307 1805 main.cc:92] Flatcar Update Engine starting Mar 17 18:20:11.917355 systemd[1]: Started update-engine.service. Mar 17 18:20:11.919372 update_engine[1805]: I0317 18:20:11.917477 1805 update_check_scheduler.cc:74] Next update check in 7m35s Mar 17 18:20:11.922621 systemd[1]: Started locksmithd.service. Mar 17 18:20:11.926326 env[1818]: time="2025-03-17T18:20:11.924930771Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:20:11.965063 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Mar 17 18:20:11.980370 extend-filesystems[1861]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 17 18:20:11.980370 extend-filesystems[1861]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 18:20:11.980370 extend-filesystems[1861]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Mar 17 18:20:11.990030 extend-filesystems[1795]: Resized filesystem in /dev/nvme0n1p9 Mar 17 18:20:11.986756 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:20:11.987141 systemd[1]: Finished extend-filesystems.service. Mar 17 18:20:12.027204 systemd[1]: nvidia.service: Deactivated successfully. 
Mar 17 18:20:12.117499 systemd-logind[1804]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 18:20:12.123234 systemd-logind[1804]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 17 18:20:12.124178 systemd-logind[1804]: New seat seat0. Mar 17 18:20:12.141637 systemd[1]: Started systemd-logind.service. Mar 17 18:20:12.148834 dbus-daemon[1792]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 18:20:12.149062 systemd[1]: Started systemd-hostnamed.service. Mar 17 18:20:12.151461 dbus-daemon[1792]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1854 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 18:20:12.156499 systemd[1]: Starting polkit.service... Mar 17 18:20:12.200556 env[1818]: time="2025-03-17T18:20:12.200476600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:20:12.200805 env[1818]: time="2025-03-17T18:20:12.200756467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:20:12.213681 env[1818]: time="2025-03-17T18:20:12.213584537Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:20:12.213681 env[1818]: time="2025-03-17T18:20:12.213663049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:20:12.215172 env[1818]: time="2025-03-17T18:20:12.215038820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:20:12.216672 env[1818]: time="2025-03-17T18:20:12.215137261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:20:12.216815 env[1818]: time="2025-03-17T18:20:12.216673068Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:20:12.216815 env[1818]: time="2025-03-17T18:20:12.216708222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:20:12.218652 polkitd[1877]: Started polkitd version 121 Mar 17 18:20:12.224798 env[1818]: time="2025-03-17T18:20:12.220095090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:20:12.225485 env[1818]: time="2025-03-17T18:20:12.225397348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:20:12.230954 env[1818]: time="2025-03-17T18:20:12.230867928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:20:12.230954 env[1818]: time="2025-03-17T18:20:12.230944355Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 17 18:20:12.231178 env[1818]: time="2025-03-17T18:20:12.231106500Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:20:12.231178 env[1818]: time="2025-03-17T18:20:12.231170216Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:20:12.246324 env[1818]: time="2025-03-17T18:20:12.246251710Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:20:12.246473 env[1818]: time="2025-03-17T18:20:12.246325124Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:20:12.246473 env[1818]: time="2025-03-17T18:20:12.246358262Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:20:12.246473 env[1818]: time="2025-03-17T18:20:12.246433136Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:20:12.246667 env[1818]: time="2025-03-17T18:20:12.246468348Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:20:12.246667 env[1818]: time="2025-03-17T18:20:12.246501116Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:20:12.246667 env[1818]: time="2025-03-17T18:20:12.246534555Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:20:12.247146 env[1818]: time="2025-03-17T18:20:12.247084766Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:20:12.247276 env[1818]: time="2025-03-17T18:20:12.247172883Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Mar 17 18:20:12.247276 env[1818]: time="2025-03-17T18:20:12.247207829Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:20:12.247276 env[1818]: time="2025-03-17T18:20:12.247238187Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:20:12.247448 env[1818]: time="2025-03-17T18:20:12.247273874Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:20:12.247543 env[1818]: time="2025-03-17T18:20:12.247497291Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:20:12.247743 env[1818]: time="2025-03-17T18:20:12.247691717Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:20:12.248360 env[1818]: time="2025-03-17T18:20:12.248299688Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:20:12.248504 env[1818]: time="2025-03-17T18:20:12.248370078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:20:12.248504 env[1818]: time="2025-03-17T18:20:12.248407144Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:20:12.248613 env[1818]: time="2025-03-17T18:20:12.248522050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 17 18:20:12.249911 polkitd[1877]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 18:20:12.250260 polkitd[1877]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 18:20:12.253011 polkitd[1877]: Finished loading, compiling and executing 2 rules Mar 17 18:20:12.257428 env[1818]: time="2025-03-17T18:20:12.248554273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:20:12.257428 env[1818]: time="2025-03-17T18:20:12.257435381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:20:12.257631 env[1818]: time="2025-03-17T18:20:12.257472876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:20:12.257631 env[1818]: time="2025-03-17T18:20:12.257506223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:20:12.257631 env[1818]: time="2025-03-17T18:20:12.257536302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:20:12.257631 env[1818]: time="2025-03-17T18:20:12.257566822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:20:12.257631 env[1818]: time="2025-03-17T18:20:12.257601524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:20:12.257903 env[1818]: time="2025-03-17T18:20:12.257638034Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:20:12.257961 env[1818]: time="2025-03-17T18:20:12.257928028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:20:12.258024 env[1818]: time="2025-03-17T18:20:12.257966103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 17 18:20:12.258024 env[1818]: time="2025-03-17T18:20:12.258010538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:20:12.258138 env[1818]: time="2025-03-17T18:20:12.258041046Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:20:12.258138 env[1818]: time="2025-03-17T18:20:12.258073686Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:20:12.258138 env[1818]: time="2025-03-17T18:20:12.258101043Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:20:12.258307 env[1818]: time="2025-03-17T18:20:12.258171328Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:20:12.258307 env[1818]: time="2025-03-17T18:20:12.258238821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 18:20:12.258716 env[1818]: time="2025-03-17T18:20:12.258592126Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:20:12.263148 env[1818]: time="2025-03-17T18:20:12.258711748Z" level=info msg="Connect containerd service" Mar 17 18:20:12.263148 env[1818]: time="2025-03-17T18:20:12.258776275Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:20:12.259910 systemd[1]: Started polkit.service. Mar 17 18:20:12.259663 dbus-daemon[1792]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 18:20:12.265061 polkitd[1877]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 18:20:12.267947 env[1818]: time="2025-03-17T18:20:12.267397259Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:20:12.267947 env[1818]: time="2025-03-17T18:20:12.267880405Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:20:12.268111 env[1818]: time="2025-03-17T18:20:12.267987745Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:20:12.268253 systemd[1]: Started containerd.service. 
Mar 17 18:20:12.277176 env[1818]: time="2025-03-17T18:20:12.274128065Z" level=info msg="Start subscribing containerd event" Mar 17 18:20:12.277176 env[1818]: time="2025-03-17T18:20:12.274281556Z" level=info msg="Start recovering state" Mar 17 18:20:12.277176 env[1818]: time="2025-03-17T18:20:12.274457501Z" level=info msg="Start event monitor" Mar 17 18:20:12.277176 env[1818]: time="2025-03-17T18:20:12.274523604Z" level=info msg="Start snapshots syncer" Mar 17 18:20:12.277176 env[1818]: time="2025-03-17T18:20:12.274550010Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:20:12.277176 env[1818]: time="2025-03-17T18:20:12.274570322Z" level=info msg="Start streaming server" Mar 17 18:20:12.279340 env[1818]: time="2025-03-17T18:20:12.279279371Z" level=info msg="containerd successfully booted in 0.374175s" Mar 17 18:20:12.334546 systemd-hostnamed[1854]: Hostname set to (transient) Mar 17 18:20:12.334720 systemd-resolved[1765]: System hostname changed to 'ip-172-31-18-98'. Mar 17 18:20:12.583720 coreos-metadata[1791]: Mar 17 18:20:12.583 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 18:20:12.585486 coreos-metadata[1791]: Mar 17 18:20:12.585 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Mar 17 18:20:12.586537 coreos-metadata[1791]: Mar 17 18:20:12.586 INFO Fetch successful Mar 17 18:20:12.586537 coreos-metadata[1791]: Mar 17 18:20:12.586 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 18:20:12.587917 coreos-metadata[1791]: Mar 17 18:20:12.587 INFO Fetch successful Mar 17 18:20:12.593582 unknown[1791]: wrote ssh authorized keys file for user: core Mar 17 18:20:12.620970 update-ssh-keys[1923]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:20:12.622138 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Mar 17 18:20:12.747529 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO Create new startup processor Mar 17 18:20:12.749282 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [LongRunningPluginsManager] registered plugins: {} Mar 17 18:20:12.755969 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO Initializing bookkeeping folders Mar 17 18:20:12.756197 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO removing the completed state files Mar 17 18:20:12.756332 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO Initializing bookkeeping folders for long running plugins Mar 17 18:20:12.756452 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Mar 17 18:20:12.758338 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO Initializing healthcheck folders for long running plugins Mar 17 18:20:12.758577 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO Initializing locations for inventory plugin Mar 17 18:20:12.758738 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO Initializing default location for custom inventory Mar 17 18:20:12.758855 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO Initializing default location for file inventory Mar 17 18:20:12.759014 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO Initializing default location for role inventory Mar 17 18:20:12.759143 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO Init the cloudwatchlogs publisher Mar 17 18:20:12.759291 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [instanceID=i-0073c7ecb6df6a007] Successfully loaded platform independent plugin aws:softwareInventory Mar 17 18:20:12.759412 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [instanceID=i-0073c7ecb6df6a007] Successfully loaded platform independent plugin aws:runPowerShellScript Mar 17 18:20:12.759527 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [instanceID=i-0073c7ecb6df6a007] Successfully loaded platform independent plugin aws:configureDocker Mar 17 18:20:12.759662 
amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [instanceID=i-0073c7ecb6df6a007] Successfully loaded platform independent plugin aws:runDockerAction Mar 17 18:20:12.759774 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [instanceID=i-0073c7ecb6df6a007] Successfully loaded platform independent plugin aws:downloadContent Mar 17 18:20:12.760502 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [instanceID=i-0073c7ecb6df6a007] Successfully loaded platform independent plugin aws:updateSsmAgent Mar 17 18:20:12.760672 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [instanceID=i-0073c7ecb6df6a007] Successfully loaded platform independent plugin aws:refreshAssociation Mar 17 18:20:12.760788 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [instanceID=i-0073c7ecb6df6a007] Successfully loaded platform independent plugin aws:configurePackage Mar 17 18:20:12.760899 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [instanceID=i-0073c7ecb6df6a007] Successfully loaded platform independent plugin aws:runDocument Mar 17 18:20:12.761039 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [instanceID=i-0073c7ecb6df6a007] Successfully loaded platform dependent plugin aws:runShellScript Mar 17 18:20:12.761201 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Mar 17 18:20:12.761326 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO OS: linux, Arch: arm64 Mar 17 18:20:12.769324 amazon-ssm-agent[1789]: datastore file /var/lib/amazon/ssm/i-0073c7ecb6df6a007/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Mar 17 18:20:12.849421 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessageGatewayService] Starting session document processing engine... 
Mar 17 18:20:12.944105 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessageGatewayService] [EngineProcessor] Starting Mar 17 18:20:13.038507 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Mar 17 18:20:13.133079 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0073c7ecb6df6a007, requestId: 897199e7-a313-4194-91f1-e75f826b83b3 Mar 17 18:20:13.227915 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessagingDeliveryService] Starting document processing engine... Mar 17 18:20:13.322728 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessagingDeliveryService] [EngineProcessor] Starting Mar 17 18:20:13.374206 tar[1811]: linux-arm64/LICENSE Mar 17 18:20:13.374206 tar[1811]: linux-arm64/README.md Mar 17 18:20:13.384601 systemd[1]: Finished prepare-helm.service. Mar 17 18:20:13.417976 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Mar 17 18:20:13.513284 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessagingDeliveryService] Starting message polling Mar 17 18:20:13.608778 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessagingDeliveryService] Starting send replies to MDS Mar 17 18:20:13.643141 locksmithd[1870]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:20:13.704461 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [instanceID=i-0073c7ecb6df6a007] Starting association polling Mar 17 18:20:13.800453 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Mar 17 18:20:13.896537 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessagingDeliveryService] [Association] Launching response handler Mar 17 18:20:13.992830 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessagingDeliveryService] [Association] 
[EngineProcessor] Initial processing Mar 17 18:20:14.089286 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Mar 17 18:20:14.186078 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Mar 17 18:20:14.253087 systemd[1]: Started kubelet.service. Mar 17 18:20:14.282941 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessageGatewayService] listening reply. Mar 17 18:20:14.380173 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [HealthCheck] HealthCheck reporting agent health. Mar 17 18:20:14.477473 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [OfflineService] Starting document processing engine... Mar 17 18:20:14.574944 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [OfflineService] [EngineProcessor] Starting Mar 17 18:20:14.672595 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [OfflineService] [EngineProcessor] Initial processing Mar 17 18:20:14.770553 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [OfflineService] Starting message polling Mar 17 18:20:14.868588 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [OfflineService] Starting send replies to MDS Mar 17 18:20:14.966868 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [LongRunningPluginsManager] starting long running plugin manager Mar 17 18:20:15.065411 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Mar 17 18:20:15.164064 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Mar 17 18:20:15.262947 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [StartupProcessor] Executing startup processor tasks Mar 17 18:20:15.362100 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running 
Mar 17 18:20:15.461286 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Mar 17 18:20:15.560719 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7 Mar 17 18:20:15.660485 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0073c7ecb6df6a007?role=subscribe&stream=input Mar 17 18:20:15.760296 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0073c7ecb6df6a007?role=subscribe&stream=input Mar 17 18:20:15.860314 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessageGatewayService] Starting receiving message from control channel Mar 17 18:20:15.960557 amazon-ssm-agent[1789]: 2025-03-17 18:20:12 INFO [MessageGatewayService] [EngineProcessor] Initial processing Mar 17 18:20:16.357344 kubelet[2003]: E0317 18:20:16.357200 2003 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:20:16.362697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:20:16.363027 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:20:16.363491 systemd[1]: kubelet.service: Consumed 1.552s CPU time. Mar 17 18:20:16.593048 sshd_keygen[1826]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:20:16.629648 systemd[1]: Finished sshd-keygen.service. Mar 17 18:20:16.634046 systemd[1]: Starting issuegen.service... 
Mar 17 18:20:16.644855 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:20:16.645278 systemd[1]: Finished issuegen.service. Mar 17 18:20:16.649872 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:20:16.663903 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:20:16.668743 systemd[1]: Started getty@tty1.service. Mar 17 18:20:16.673078 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:20:16.675229 systemd[1]: Reached target getty.target. Mar 17 18:20:16.676982 systemd[1]: Reached target multi-user.target. Mar 17 18:20:16.681278 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:20:16.697583 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:20:16.697930 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:20:16.699966 systemd[1]: Startup finished in 1.119s (kernel) + 8.835s (initrd) + 14.273s (userspace) = 24.229s. Mar 17 18:20:19.908287 systemd[1]: Created slice system-sshd.slice. Mar 17 18:20:19.911535 systemd[1]: Started sshd@0-172.31.18.98:22-139.178.89.65:45114.service. Mar 17 18:20:20.210231 sshd[2024]: Accepted publickey for core from 139.178.89.65 port 45114 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:20.215503 sshd[2024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:20.234532 systemd[1]: Created slice user-500.slice. Mar 17 18:20:20.237583 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:20:20.247294 systemd-logind[1804]: New session 1 of user core. Mar 17 18:20:20.259235 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:20:20.262804 systemd[1]: Starting user@500.service... Mar 17 18:20:20.270734 (systemd)[2027]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:20.452640 systemd[2027]: Queued start job for default target default.target. 
Mar 17 18:20:20.453723 systemd[2027]: Reached target paths.target. Mar 17 18:20:20.453775 systemd[2027]: Reached target sockets.target. Mar 17 18:20:20.453807 systemd[2027]: Reached target timers.target. Mar 17 18:20:20.453836 systemd[2027]: Reached target basic.target. Mar 17 18:20:20.453929 systemd[2027]: Reached target default.target. Mar 17 18:20:20.453999 systemd[2027]: Startup finished in 171ms. Mar 17 18:20:20.454179 systemd[1]: Started user@500.service. Mar 17 18:20:20.456248 systemd[1]: Started session-1.scope. Mar 17 18:20:20.603478 systemd[1]: Started sshd@1-172.31.18.98:22-139.178.89.65:45126.service. Mar 17 18:20:20.770091 sshd[2036]: Accepted publickey for core from 139.178.89.65 port 45126 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:20.773443 sshd[2036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:20.780815 systemd-logind[1804]: New session 2 of user core. Mar 17 18:20:20.782985 systemd[1]: Started session-2.scope. Mar 17 18:20:20.911816 sshd[2036]: pam_unix(sshd:session): session closed for user core Mar 17 18:20:20.917571 systemd-logind[1804]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:20:20.920064 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:20:20.921573 systemd[1]: sshd@1-172.31.18.98:22-139.178.89.65:45126.service: Deactivated successfully. Mar 17 18:20:20.923801 systemd-logind[1804]: Removed session 2. Mar 17 18:20:20.940277 systemd[1]: Started sshd@2-172.31.18.98:22-139.178.89.65:45134.service. Mar 17 18:20:21.110655 sshd[2042]: Accepted publickey for core from 139.178.89.65 port 45134 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:21.113595 sshd[2042]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:21.121942 systemd[1]: Started session-3.scope. Mar 17 18:20:21.123285 systemd-logind[1804]: New session 3 of user core. 
Mar 17 18:20:21.244802 sshd[2042]: pam_unix(sshd:session): session closed for user core Mar 17 18:20:21.249113 systemd[1]: sshd@2-172.31.18.98:22-139.178.89.65:45134.service: Deactivated successfully. Mar 17 18:20:21.250451 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:20:21.251604 systemd-logind[1804]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:20:21.254279 systemd-logind[1804]: Removed session 3. Mar 17 18:20:21.272899 systemd[1]: Started sshd@3-172.31.18.98:22-139.178.89.65:54886.service. Mar 17 18:20:21.443999 sshd[2048]: Accepted publickey for core from 139.178.89.65 port 54886 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:21.446475 sshd[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:21.454811 systemd-logind[1804]: New session 4 of user core. Mar 17 18:20:21.455843 systemd[1]: Started session-4.scope. Mar 17 18:20:21.585318 sshd[2048]: pam_unix(sshd:session): session closed for user core Mar 17 18:20:21.591411 systemd[1]: sshd@3-172.31.18.98:22-139.178.89.65:54886.service: Deactivated successfully. Mar 17 18:20:21.591737 systemd-logind[1804]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:20:21.592596 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:20:21.594404 systemd-logind[1804]: Removed session 4. Mar 17 18:20:21.613314 systemd[1]: Started sshd@4-172.31.18.98:22-139.178.89.65:54900.service. Mar 17 18:20:21.787645 sshd[2054]: Accepted publickey for core from 139.178.89.65 port 54900 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:21.790651 sshd[2054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:21.798213 systemd-logind[1804]: New session 5 of user core. Mar 17 18:20:21.799122 systemd[1]: Started session-5.scope. 
Mar 17 18:20:21.939898 sudo[2057]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:20:21.940968 sudo[2057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:20:21.987237 systemd[1]: Starting docker.service... Mar 17 18:20:22.064600 env[2067]: time="2025-03-17T18:20:22.064510937Z" level=info msg="Starting up" Mar 17 18:20:22.066660 env[2067]: time="2025-03-17T18:20:22.066614662Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:20:22.066855 env[2067]: time="2025-03-17T18:20:22.066827040Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:20:22.066998 env[2067]: time="2025-03-17T18:20:22.066967393Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:20:22.067113 env[2067]: time="2025-03-17T18:20:22.067085985Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:20:22.070331 env[2067]: time="2025-03-17T18:20:22.070288299Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:20:22.070498 env[2067]: time="2025-03-17T18:20:22.070469713Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:20:22.070652 env[2067]: time="2025-03-17T18:20:22.070621838Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:20:22.070778 env[2067]: time="2025-03-17T18:20:22.070738849Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:20:22.121442 env[2067]: time="2025-03-17T18:20:22.121363751Z" level=info msg="Loading containers: start." 
Mar 17 18:20:22.405204 kernel: Initializing XFRM netlink socket Mar 17 18:20:22.483661 env[2067]: time="2025-03-17T18:20:22.483579554Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:20:22.486418 (udev-worker)[2078]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:20:22.593750 systemd-networkd[1534]: docker0: Link UP Mar 17 18:20:22.618068 env[2067]: time="2025-03-17T18:20:22.618023758Z" level=info msg="Loading containers: done." Mar 17 18:20:22.638479 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck465547477-merged.mount: Deactivated successfully. Mar 17 18:20:22.648840 env[2067]: time="2025-03-17T18:20:22.648768647Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:20:22.649267 env[2067]: time="2025-03-17T18:20:22.649217972Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:20:22.649525 env[2067]: time="2025-03-17T18:20:22.649481352Z" level=info msg="Daemon has completed initialization" Mar 17 18:20:22.672004 systemd[1]: Started docker.service. Mar 17 18:20:22.682120 env[2067]: time="2025-03-17T18:20:22.682018902Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:20:24.725755 env[1818]: time="2025-03-17T18:20:24.725700657Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 18:20:25.454129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281794564.mount: Deactivated successfully. Mar 17 18:20:26.426001 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:20:26.426816 systemd[1]: Stopped kubelet.service. Mar 17 18:20:26.427014 systemd[1]: kubelet.service: Consumed 1.552s CPU time. Mar 17 18:20:26.431924 systemd[1]: Starting kubelet.service... 
Mar 17 18:20:26.754012 systemd[1]: Started kubelet.service.
Mar 17 18:20:26.869961 kubelet[2200]: E0317 18:20:26.869865 2200 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:20:26.877228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:20:26.877547 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:20:27.916806 env[1818]: time="2025-03-17T18:20:27.916727537Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:27.921291 env[1818]: time="2025-03-17T18:20:27.921242688Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:27.927279 env[1818]: time="2025-03-17T18:20:27.927231557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:27.932291 env[1818]: time="2025-03-17T18:20:27.932243111Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:27.933886 env[1818]: time="2025-03-17T18:20:27.933839806Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\""
Mar 17 18:20:27.952753 env[1818]: time="2025-03-17T18:20:27.952691420Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 18:20:30.477569 env[1818]: time="2025-03-17T18:20:30.477510282Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:30.483307 env[1818]: time="2025-03-17T18:20:30.483254889Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:30.488386 env[1818]: time="2025-03-17T18:20:30.488320329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:30.493746 env[1818]: time="2025-03-17T18:20:30.493671031Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:30.498009 env[1818]: time="2025-03-17T18:20:30.496373405Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\""
Mar 17 18:20:30.516525 env[1818]: time="2025-03-17T18:20:30.516469399Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 18:20:32.320607 env[1818]: time="2025-03-17T18:20:32.320548923Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:32.326566 env[1818]: time="2025-03-17T18:20:32.326512824Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:32.331801 env[1818]: time="2025-03-17T18:20:32.331735397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:32.337205 env[1818]: time="2025-03-17T18:20:32.337121893Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:32.339076 env[1818]: time="2025-03-17T18:20:32.339013423Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\""
Mar 17 18:20:32.368540 env[1818]: time="2025-03-17T18:20:32.368473082Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 18:20:33.738512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3984556973.mount: Deactivated successfully.
Mar 17 18:20:34.547823 env[1818]: time="2025-03-17T18:20:34.547739529Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:34.555950 env[1818]: time="2025-03-17T18:20:34.555874375Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:34.558380 env[1818]: time="2025-03-17T18:20:34.558317549Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:34.560918 env[1818]: time="2025-03-17T18:20:34.560847193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:34.562011 env[1818]: time="2025-03-17T18:20:34.561963385Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\""
Mar 17 18:20:34.577806 env[1818]: time="2025-03-17T18:20:34.577746551Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 18:20:35.169726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1566271144.mount: Deactivated successfully.
Mar 17 18:20:36.607346 env[1818]: time="2025-03-17T18:20:36.607285690Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:36.611970 env[1818]: time="2025-03-17T18:20:36.611913808Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:36.617449 env[1818]: time="2025-03-17T18:20:36.617383532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:36.620597 env[1818]: time="2025-03-17T18:20:36.620522625Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:36.622415 env[1818]: time="2025-03-17T18:20:36.622367408Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Mar 17 18:20:36.638778 env[1818]: time="2025-03-17T18:20:36.638729416Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 18:20:36.926073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 18:20:36.926448 systemd[1]: Stopped kubelet.service.
Mar 17 18:20:36.929192 systemd[1]: Starting kubelet.service...
Mar 17 18:20:37.225371 systemd[1]: Started kubelet.service.
Mar 17 18:20:37.248123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322560629.mount: Deactivated successfully.
Mar 17 18:20:37.266904 env[1818]: time="2025-03-17T18:20:37.265493362Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:37.270228 env[1818]: time="2025-03-17T18:20:37.270114426Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:37.274734 env[1818]: time="2025-03-17T18:20:37.273787559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:37.277516 env[1818]: time="2025-03-17T18:20:37.277443337Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:37.279008 env[1818]: time="2025-03-17T18:20:37.278950121Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Mar 17 18:20:37.297854 env[1818]: time="2025-03-17T18:20:37.297452909Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 18:20:37.341891 kubelet[2236]: E0317 18:20:37.341801 2236 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:20:37.345582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:20:37.345911 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:20:37.888184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1349684792.mount: Deactivated successfully.
Mar 17 18:20:41.023008 amazon-ssm-agent[1789]: 2025-03-17 18:20:41 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Mar 17 18:20:41.208034 env[1818]: time="2025-03-17T18:20:41.207970891Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:41.214143 env[1818]: time="2025-03-17T18:20:41.214081937Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:41.219042 env[1818]: time="2025-03-17T18:20:41.218982476Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:41.223130 env[1818]: time="2025-03-17T18:20:41.223069661Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:20:41.225021 env[1818]: time="2025-03-17T18:20:41.224961796Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Mar 17 18:20:42.367398 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 17 18:20:47.426112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 18:20:47.426514 systemd[1]: Stopped kubelet.service.
Mar 17 18:20:47.432607 systemd[1]: Starting kubelet.service...
Mar 17 18:20:47.746248 systemd[1]: Started kubelet.service.
Mar 17 18:20:47.858782 kubelet[2313]: E0317 18:20:47.858697 2313 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:20:47.862363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:20:47.862691 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:20:49.043559 systemd[1]: Stopped kubelet.service.
Mar 17 18:20:49.050858 systemd[1]: Starting kubelet.service...
Mar 17 18:20:49.101601 systemd[1]: Reloading.
Mar 17 18:20:49.280551 /usr/lib/systemd/system-generators/torcx-generator[2347]: time="2025-03-17T18:20:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:20:49.280636 /usr/lib/systemd/system-generators/torcx-generator[2347]: time="2025-03-17T18:20:49Z" level=info msg="torcx already run"
Mar 17 18:20:49.471431 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:20:49.471470 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:20:49.509634 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:20:49.735064 systemd[1]: Started kubelet.service.
Mar 17 18:20:49.749266 systemd[1]: Stopping kubelet.service...
Mar 17 18:20:49.750417 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:20:49.750808 systemd[1]: Stopped kubelet.service.
Mar 17 18:20:49.754559 systemd[1]: Starting kubelet.service...
Mar 17 18:20:50.021624 systemd[1]: Started kubelet.service.
Mar 17 18:20:50.109974 kubelet[2412]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:20:50.109974 kubelet[2412]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:20:50.110672 kubelet[2412]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:20:50.112128 kubelet[2412]: I0317 18:20:50.112041 2412 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:20:51.247238 kubelet[2412]: I0317 18:20:51.247175 2412 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:20:51.247238 kubelet[2412]: I0317 18:20:51.247239 2412 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:20:51.247876 kubelet[2412]: I0317 18:20:51.247581 2412 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:20:51.291963 kubelet[2412]: E0317 18:20:51.291922 2412 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.98:6443: connect: connection refused
Mar 17 18:20:51.292466 kubelet[2412]: I0317 18:20:51.292434 2412 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:20:51.313591 kubelet[2412]: I0317 18:20:51.313547 2412 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:20:51.314054 kubelet[2412]: I0317 18:20:51.314010 2412 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:20:51.314375 kubelet[2412]: I0317 18:20:51.314058 2412 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 18:20:51.314548 kubelet[2412]: I0317 18:20:51.314406 2412 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:20:51.314548 kubelet[2412]: I0317 18:20:51.314428 2412 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 18:20:51.314672 kubelet[2412]: I0317 18:20:51.314647 2412 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:20:51.316432 kubelet[2412]: I0317 18:20:51.316388 2412 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 18:20:51.316432 kubelet[2412]: I0317 18:20:51.316432 2412 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:20:51.316620 kubelet[2412]: I0317 18:20:51.316509 2412 kubelet.go:312] "Adding apiserver pod source"
Mar 17 18:20:51.316620 kubelet[2412]: I0317 18:20:51.316578 2412 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:20:51.320353 kubelet[2412]: I0317 18:20:51.320296 2412 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:20:51.320700 kubelet[2412]: I0317 18:20:51.320663 2412 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:20:51.320784 kubelet[2412]: W0317 18:20:51.320747 2412 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 18:20:51.321871 kubelet[2412]: I0317 18:20:51.321823 2412 server.go:1264] "Started kubelet"
Mar 17 18:20:51.322124 kubelet[2412]: W0317 18:20:51.322049 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused
Mar 17 18:20:51.322240 kubelet[2412]: E0317 18:20:51.322179 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused
Mar 17 18:20:51.322369 kubelet[2412]: W0317 18:20:51.322305 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-98&limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused
Mar 17 18:20:51.322448 kubelet[2412]: E0317 18:20:51.322380 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-98&limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused
Mar 17 18:20:51.334618 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Mar 17 18:20:51.334947 kubelet[2412]: I0317 18:20:51.334895 2412 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:20:51.342109 kubelet[2412]: I0317 18:20:51.342036 2412 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:20:51.343950 kubelet[2412]: I0317 18:20:51.343907 2412 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 18:20:51.344914 kubelet[2412]: I0317 18:20:51.344867 2412 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 18:20:51.345723 kubelet[2412]: I0317 18:20:51.345662 2412 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:20:51.346270 kubelet[2412]: I0317 18:20:51.346190 2412 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:20:51.351335 kubelet[2412]: I0317 18:20:51.351297 2412 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:20:51.352918 kubelet[2412]: I0317 18:20:51.347868 2412 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:20:51.355729 kubelet[2412]: E0317 18:20:51.355384 2412 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.98:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.98:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-98.182daa18180e7ffb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-98,UID:ip-172-31-18-98,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-98,},FirstTimestamp:2025-03-17 18:20:51.321790459 +0000 UTC m=+1.292652355,LastTimestamp:2025-03-17 18:20:51.321790459 +0000 UTC m=+1.292652355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-98,}"
Mar 17 18:20:51.356843 kubelet[2412]: I0317 18:20:51.356788 2412 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:20:51.357023 kubelet[2412]: I0317 18:20:51.357005 2412 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:20:51.358677 kubelet[2412]: E0317 18:20:51.356727 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-98?timeout=10s\": dial tcp 172.31.18.98:6443: connect: connection refused" interval="200ms"
Mar 17 18:20:51.359338 kubelet[2412]: W0317 18:20:51.357901 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused
Mar 17 18:20:51.359485 kubelet[2412]: E0317 18:20:51.359366 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused
Mar 17 18:20:51.359668 kubelet[2412]: I0317 18:20:51.359622 2412 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:20:51.401679 kubelet[2412]: I0317 18:20:51.401638 2412 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:20:51.401887 kubelet[2412]: I0317 18:20:51.401864 2412 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:20:51.402015 kubelet[2412]: I0317 18:20:51.401995 2412 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:20:51.405617 kubelet[2412]: I0317 18:20:51.405580 2412 policy_none.go:49] "None policy: Start"
Mar 17 18:20:51.407040 kubelet[2412]: I0317 18:20:51.407008 2412 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:20:51.407291 kubelet[2412]: I0317 18:20:51.407270 2412 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:20:51.409133 kubelet[2412]: I0317 18:20:51.409056 2412 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:20:51.412077 kubelet[2412]: I0317 18:20:51.412015 2412 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:20:51.412284 kubelet[2412]: I0317 18:20:51.412098 2412 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:20:51.412284 kubelet[2412]: I0317 18:20:51.412132 2412 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 18:20:51.412284 kubelet[2412]: E0317 18:20:51.412225 2412 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:20:51.417566 systemd[1]: Created slice kubepods.slice.
Mar 17 18:20:51.424893 kubelet[2412]: W0317 18:20:51.424822 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused
Mar 17 18:20:51.424893 kubelet[2412]: E0317 18:20:51.424898 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused
Mar 17 18:20:51.433233 systemd[1]: Created slice kubepods-burstable.slice.
Mar 17 18:20:51.440356 systemd[1]: Created slice kubepods-besteffort.slice.
Mar 17 18:20:51.447546 kubelet[2412]: I0317 18:20:51.447506 2412 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-98"
Mar 17 18:20:51.448741 kubelet[2412]: E0317 18:20:51.448688 2412 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.98:6443/api/v1/nodes\": dial tcp 172.31.18.98:6443: connect: connection refused" node="ip-172-31-18-98"
Mar 17 18:20:51.449374 kubelet[2412]: I0317 18:20:51.449341 2412 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:20:51.449652 kubelet[2412]: I0317 18:20:51.449590 2412 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:20:51.449803 kubelet[2412]: I0317 18:20:51.449778 2412 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:20:51.455281 kubelet[2412]: E0317 18:20:51.455241 2412 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-98\" not found"
Mar 17 18:20:51.514199 kubelet[2412]: I0317 18:20:51.512507 2412 topology_manager.go:215] "Topology Admit Handler" podUID="e91f5fb4badc0320642baf7861bac381" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-98"
Mar 17 18:20:51.515470 kubelet[2412]: I0317 18:20:51.515427 2412 topology_manager.go:215] "Topology Admit Handler" podUID="cef8f49194701b50c629121cc76f4256" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-98"
Mar 17 18:20:51.517858 kubelet[2412]: I0317 18:20:51.517797 2412 topology_manager.go:215] "Topology Admit Handler" podUID="5b1cbf05becac15ee26c21254f7db074" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-98"
Mar 17 18:20:51.528228 systemd[1]: Created slice kubepods-burstable-pode91f5fb4badc0320642baf7861bac381.slice.
Mar 17 18:20:51.548609 systemd[1]: Created slice kubepods-burstable-podcef8f49194701b50c629121cc76f4256.slice.
Mar 17 18:20:51.554221 kubelet[2412]: I0317 18:20:51.554139 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e91f5fb4badc0320642baf7861bac381-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-98\" (UID: \"e91f5fb4badc0320642baf7861bac381\") " pod="kube-system/kube-apiserver-ip-172-31-18-98"
Mar 17 18:20:51.554489 kubelet[2412]: I0317 18:20:51.554460 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e91f5fb4badc0320642baf7861bac381-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-98\" (UID: \"e91f5fb4badc0320642baf7861bac381\") " pod="kube-system/kube-apiserver-ip-172-31-18-98"
Mar 17 18:20:51.554696 kubelet[2412]: I0317 18:20:51.554669 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cef8f49194701b50c629121cc76f4256-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-98\" (UID: \"cef8f49194701b50c629121cc76f4256\") " pod="kube-system/kube-controller-manager-ip-172-31-18-98"
Mar 17 18:20:51.554851 kubelet[2412]: I0317 18:20:51.554818 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cef8f49194701b50c629121cc76f4256-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-98\" (UID: \"cef8f49194701b50c629121cc76f4256\") " pod="kube-system/kube-controller-manager-ip-172-31-18-98"
Mar 17 18:20:51.555025 kubelet[2412]: I0317 18:20:51.555000 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b1cbf05becac15ee26c21254f7db074-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-98\" (UID: \"5b1cbf05becac15ee26c21254f7db074\") " pod="kube-system/kube-scheduler-ip-172-31-18-98"
Mar 17 18:20:51.555689 kubelet[2412]: I0317 18:20:51.555657 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e91f5fb4badc0320642baf7861bac381-ca-certs\") pod \"kube-apiserver-ip-172-31-18-98\" (UID: \"e91f5fb4badc0320642baf7861bac381\") " pod="kube-system/kube-apiserver-ip-172-31-18-98"
Mar 17 18:20:51.555869 kubelet[2412]: I0317 18:20:51.555840 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cef8f49194701b50c629121cc76f4256-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-98\" (UID: \"cef8f49194701b50c629121cc76f4256\") " pod="kube-system/kube-controller-manager-ip-172-31-18-98"
Mar 17 18:20:51.556054 kubelet[2412]: I0317 18:20:51.556028 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cef8f49194701b50c629121cc76f4256-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-98\" (UID: \"cef8f49194701b50c629121cc76f4256\") " pod="kube-system/kube-controller-manager-ip-172-31-18-98"
Mar 17 18:20:51.556221 kubelet[2412]: I0317 18:20:51.556193 2412 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cef8f49194701b50c629121cc76f4256-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-98\" (UID: \"cef8f49194701b50c629121cc76f4256\") " pod="kube-system/kube-controller-manager-ip-172-31-18-98"
Mar 17 18:20:51.562613 systemd[1]: Created slice kubepods-burstable-pod5b1cbf05becac15ee26c21254f7db074.slice.
Mar 17 18:20:51.564350 kubelet[2412]: E0317 18:20:51.564278 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-98?timeout=10s\": dial tcp 172.31.18.98:6443: connect: connection refused" interval="400ms"
Mar 17 18:20:51.651333 kubelet[2412]: I0317 18:20:51.651298 2412 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-98"
Mar 17 18:20:51.651971 kubelet[2412]: E0317 18:20:51.651930 2412 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.98:6443/api/v1/nodes\": dial tcp 172.31.18.98:6443: connect: connection refused" node="ip-172-31-18-98"
Mar 17 18:20:51.848646 env[1818]: time="2025-03-17T18:20:51.847875119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-98,Uid:e91f5fb4badc0320642baf7861bac381,Namespace:kube-system,Attempt:0,}"
Mar 17 18:20:51.855067 env[1818]: time="2025-03-17T18:20:51.854967126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-98,Uid:cef8f49194701b50c629121cc76f4256,Namespace:kube-system,Attempt:0,}"
Mar 17 18:20:51.869045 env[1818]: time="2025-03-17T18:20:51.868959387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-98,Uid:5b1cbf05becac15ee26c21254f7db074,Namespace:kube-system,Attempt:0,}"
Mar 17 18:20:51.965116 kubelet[2412]: E0317 18:20:51.965018 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-98?timeout=10s\": dial tcp 172.31.18.98:6443: connect: connection refused" interval="800ms"
Mar 17 18:20:52.055113 kubelet[2412]: I0317 18:20:52.054758 2412 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-98"
Mar 17 18:20:52.055287 kubelet[2412]: E0317 18:20:52.055243 2412 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.98:6443/api/v1/nodes\": dial tcp 172.31.18.98:6443: connect: connection refused" node="ip-172-31-18-98"
Mar 17 18:20:52.251892 kubelet[2412]: W0317 18:20:52.251796 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-98&limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused
Mar 17 18:20:52.252583 kubelet[2412]: E0317 18:20:52.252545 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-98&limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused
Mar 17 18:20:52.361666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958795777.mount: Deactivated successfully.
Mar 17 18:20:52.374996 env[1818]: time="2025-03-17T18:20:52.374918258Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:52.384677 env[1818]: time="2025-03-17T18:20:52.384623080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:52.387596 env[1818]: time="2025-03-17T18:20:52.387500584Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:52.389559 env[1818]: time="2025-03-17T18:20:52.389494710Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:52.395773 env[1818]: time="2025-03-17T18:20:52.395725494Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:52.397849 env[1818]: time="2025-03-17T18:20:52.397805074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:52.400244 env[1818]: time="2025-03-17T18:20:52.400200021Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:52.404373 env[1818]: time="2025-03-17T18:20:52.404325880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:20:52.408519 env[1818]: time="2025-03-17T18:20:52.408471996Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:52.413196 env[1818]: time="2025-03-17T18:20:52.413103500Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:52.415420 env[1818]: time="2025-03-17T18:20:52.415370176Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:52.417443 env[1818]: time="2025-03-17T18:20:52.417395983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:52.460695 env[1818]: time="2025-03-17T18:20:52.460589022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:20:52.460918 env[1818]: time="2025-03-17T18:20:52.460665092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:20:52.460918 env[1818]: time="2025-03-17T18:20:52.460691816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:20:52.461562 env[1818]: time="2025-03-17T18:20:52.461143592Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/865b94eb316f33965339db4d2fe3a5f33c8575af88892b12a2337491aa172bf3 pid=2452 runtime=io.containerd.runc.v2 Mar 17 18:20:52.484300 kubelet[2412]: W0317 18:20:52.484090 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused Mar 17 18:20:52.484300 kubelet[2412]: E0317 18:20:52.484231 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused Mar 17 18:20:52.507813 env[1818]: time="2025-03-17T18:20:52.507625696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:20:52.508074 env[1818]: time="2025-03-17T18:20:52.508010102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:20:52.508304 env[1818]: time="2025-03-17T18:20:52.508246172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:20:52.509475 env[1818]: time="2025-03-17T18:20:52.509400469Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8da8ee8b03eb9e6472266d9163643a9b6d339bf0d9b28e195ac9db38b814d54 pid=2478 runtime=io.containerd.runc.v2 Mar 17 18:20:52.516528 env[1818]: time="2025-03-17T18:20:52.516208911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:20:52.516528 env[1818]: time="2025-03-17T18:20:52.516285425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:20:52.516528 env[1818]: time="2025-03-17T18:20:52.516311561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:20:52.517788 env[1818]: time="2025-03-17T18:20:52.517626218Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5bd873d521580063b739efb689fa8ce43e3f324a068c13ca53cb7d7b062ae0 pid=2493 runtime=io.containerd.runc.v2 Mar 17 18:20:52.522542 systemd[1]: Started cri-containerd-865b94eb316f33965339db4d2fe3a5f33c8575af88892b12a2337491aa172bf3.scope. Mar 17 18:20:52.536117 kubelet[2412]: W0317 18:20:52.535941 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused Mar 17 18:20:52.536117 kubelet[2412]: E0317 18:20:52.536056 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused Mar 17 18:20:52.563078 systemd[1]: Started cri-containerd-c8da8ee8b03eb9e6472266d9163643a9b6d339bf0d9b28e195ac9db38b814d54.scope. Mar 17 18:20:52.580146 systemd[1]: Started cri-containerd-3a5bd873d521580063b739efb689fa8ce43e3f324a068c13ca53cb7d7b062ae0.scope. 
Mar 17 18:20:52.671721 env[1818]: time="2025-03-17T18:20:52.671664586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-98,Uid:e91f5fb4badc0320642baf7861bac381,Namespace:kube-system,Attempt:0,} returns sandbox id \"865b94eb316f33965339db4d2fe3a5f33c8575af88892b12a2337491aa172bf3\"" Mar 17 18:20:52.686067 env[1818]: time="2025-03-17T18:20:52.685989643Z" level=info msg="CreateContainer within sandbox \"865b94eb316f33965339db4d2fe3a5f33c8575af88892b12a2337491aa172bf3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:20:52.714843 env[1818]: time="2025-03-17T18:20:52.714677716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-98,Uid:5b1cbf05becac15ee26c21254f7db074,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a5bd873d521580063b739efb689fa8ce43e3f324a068c13ca53cb7d7b062ae0\"" Mar 17 18:20:52.720333 env[1818]: time="2025-03-17T18:20:52.720248639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-98,Uid:cef8f49194701b50c629121cc76f4256,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8da8ee8b03eb9e6472266d9163643a9b6d339bf0d9b28e195ac9db38b814d54\"" Mar 17 18:20:52.721239 env[1818]: time="2025-03-17T18:20:52.721102436Z" level=info msg="CreateContainer within sandbox \"3a5bd873d521580063b739efb689fa8ce43e3f324a068c13ca53cb7d7b062ae0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:20:52.727986 env[1818]: time="2025-03-17T18:20:52.727927103Z" level=info msg="CreateContainer within sandbox \"865b94eb316f33965339db4d2fe3a5f33c8575af88892b12a2337491aa172bf3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"963e57385e88b13f6cb300b8ca81635498df18dd4946270b3917c338a205151b\"" Mar 17 18:20:52.730191 env[1818]: time="2025-03-17T18:20:52.730100345Z" level=info msg="CreateContainer within sandbox 
\"c8da8ee8b03eb9e6472266d9163643a9b6d339bf0d9b28e195ac9db38b814d54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:20:52.730849 env[1818]: time="2025-03-17T18:20:52.730789606Z" level=info msg="StartContainer for \"963e57385e88b13f6cb300b8ca81635498df18dd4946270b3917c338a205151b\"" Mar 17 18:20:52.759406 env[1818]: time="2025-03-17T18:20:52.759250177Z" level=info msg="CreateContainer within sandbox \"3a5bd873d521580063b739efb689fa8ce43e3f324a068c13ca53cb7d7b062ae0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"03e532b76a143d5d35f01234dc268dcc63602df5dec83b61fa34e80796c6f2a6\"" Mar 17 18:20:52.761613 env[1818]: time="2025-03-17T18:20:52.761558771Z" level=info msg="StartContainer for \"03e532b76a143d5d35f01234dc268dcc63602df5dec83b61fa34e80796c6f2a6\"" Mar 17 18:20:52.765885 kubelet[2412]: E0317 18:20:52.765819 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-98?timeout=10s\": dial tcp 172.31.18.98:6443: connect: connection refused" interval="1.6s" Mar 17 18:20:52.776805 systemd[1]: Started cri-containerd-963e57385e88b13f6cb300b8ca81635498df18dd4946270b3917c338a205151b.scope. Mar 17 18:20:52.794103 env[1818]: time="2025-03-17T18:20:52.793983569Z" level=info msg="CreateContainer within sandbox \"c8da8ee8b03eb9e6472266d9163643a9b6d339bf0d9b28e195ac9db38b814d54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9fba03af6d19e44a6806cbf30539dae0b6acfb92b0aa7ab4fbfd58a16d7e9189\"" Mar 17 18:20:52.794976 env[1818]: time="2025-03-17T18:20:52.794920576Z" level=info msg="StartContainer for \"9fba03af6d19e44a6806cbf30539dae0b6acfb92b0aa7ab4fbfd58a16d7e9189\"" Mar 17 18:20:52.820100 systemd[1]: Started cri-containerd-03e532b76a143d5d35f01234dc268dcc63602df5dec83b61fa34e80796c6f2a6.scope. 
Mar 17 18:20:52.858804 systemd[1]: Started cri-containerd-9fba03af6d19e44a6806cbf30539dae0b6acfb92b0aa7ab4fbfd58a16d7e9189.scope. Mar 17 18:20:52.864178 kubelet[2412]: I0317 18:20:52.863426 2412 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-98" Mar 17 18:20:52.864178 kubelet[2412]: E0317 18:20:52.863906 2412 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.98:6443/api/v1/nodes\": dial tcp 172.31.18.98:6443: connect: connection refused" node="ip-172-31-18-98" Mar 17 18:20:52.931289 env[1818]: time="2025-03-17T18:20:52.930704615Z" level=info msg="StartContainer for \"963e57385e88b13f6cb300b8ca81635498df18dd4946270b3917c338a205151b\" returns successfully" Mar 17 18:20:52.967292 env[1818]: time="2025-03-17T18:20:52.967209335Z" level=info msg="StartContainer for \"03e532b76a143d5d35f01234dc268dcc63602df5dec83b61fa34e80796c6f2a6\" returns successfully" Mar 17 18:20:52.968318 kubelet[2412]: W0317 18:20:52.968237 2412 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused Mar 17 18:20:52.968496 kubelet[2412]: E0317 18:20:52.968329 2412 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.98:6443: connect: connection refused Mar 17 18:20:53.013652 env[1818]: time="2025-03-17T18:20:53.013503498Z" level=info msg="StartContainer for \"9fba03af6d19e44a6806cbf30539dae0b6acfb92b0aa7ab4fbfd58a16d7e9189\" returns successfully" Mar 17 18:20:54.466480 kubelet[2412]: I0317 18:20:54.466444 2412 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-98" Mar 17 18:20:57.476088 update_engine[1805]: I0317 
18:20:57.475234 1805 update_attempter.cc:509] Updating boot flags... Mar 17 18:20:58.566371 kubelet[2412]: E0317 18:20:58.566319 2412 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-98\" not found" node="ip-172-31-18-98" Mar 17 18:20:58.635595 kubelet[2412]: E0317 18:20:58.635387 2412 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-98.182daa18180e7ffb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-98,UID:ip-172-31-18-98,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-98,},FirstTimestamp:2025-03-17 18:20:51.321790459 +0000 UTC m=+1.292652355,LastTimestamp:2025-03-17 18:20:51.321790459 +0000 UTC m=+1.292652355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-98,}" Mar 17 18:20:58.677809 kubelet[2412]: I0317 18:20:58.677756 2412 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-98" Mar 17 18:20:58.693120 kubelet[2412]: E0317 18:20:58.692900 2412 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-98.182daa181ca017d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-98,UID:ip-172-31-18-98,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-18-98 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-18-98,},FirstTimestamp:2025-03-17 18:20:51.398440919 +0000 UTC m=+1.369302803,LastTimestamp:2025-03-17 18:20:51.398440919 +0000 UTC m=+1.369302803,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-98,}" Mar 17 18:20:58.769081 kubelet[2412]: E0317 18:20:58.768939 2412 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-98.182daa181ca03d63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-98,UID:ip-172-31-18-98,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-172-31-18-98 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-172-31-18-98,},FirstTimestamp:2025-03-17 18:20:51.398450531 +0000 UTC m=+1.369312415,LastTimestamp:2025-03-17 18:20:51.398450531 +0000 UTC m=+1.369312415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-98,}" Mar 17 18:20:59.335844 kubelet[2412]: I0317 18:20:59.335746 2412 apiserver.go:52] "Watching apiserver" Mar 17 18:20:59.346898 kubelet[2412]: I0317 18:20:59.346834 2412 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:21:00.997065 systemd[1]: Reloading. Mar 17 18:21:01.149428 /usr/lib/systemd/system-generators/torcx-generator[2885]: time="2025-03-17T18:21:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:21:01.149490 /usr/lib/systemd/system-generators/torcx-generator[2885]: time="2025-03-17T18:21:01Z" level=info msg="torcx already run" Mar 17 18:21:01.327059 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Mar 17 18:21:01.327099 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:21:01.367551 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:21:01.648794 systemd[1]: Stopping kubelet.service... Mar 17 18:21:01.669428 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:21:01.669815 systemd[1]: Stopped kubelet.service. Mar 17 18:21:01.669900 systemd[1]: kubelet.service: Consumed 2.059s CPU time. Mar 17 18:21:01.674453 systemd[1]: Starting kubelet.service... Mar 17 18:21:01.956110 systemd[1]: Started kubelet.service. Mar 17 18:21:02.065721 kubelet[2945]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:21:02.065721 kubelet[2945]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:21:02.065721 kubelet[2945]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:21:02.066386 kubelet[2945]: I0317 18:21:02.065828 2945 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:21:02.088440 kubelet[2945]: I0317 18:21:02.088329 2945 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:21:02.088440 kubelet[2945]: I0317 18:21:02.088429 2945 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:21:02.088817 kubelet[2945]: I0317 18:21:02.088788 2945 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:21:02.092541 kubelet[2945]: I0317 18:21:02.092491 2945 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:21:02.095587 kubelet[2945]: I0317 18:21:02.095523 2945 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:21:02.105083 sudo[2958]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:21:02.106259 sudo[2958]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 18:21:02.108395 kubelet[2945]: I0317 18:21:02.108290 2945 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:21:02.109938 kubelet[2945]: I0317 18:21:02.109862 2945 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:21:02.110402 kubelet[2945]: I0317 18:21:02.109926 2945 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:21:02.110945 kubelet[2945]: I0317 18:21:02.110666 2945 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 
18:21:02.110945 kubelet[2945]: I0317 18:21:02.110704 2945 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:21:02.110945 kubelet[2945]: I0317 18:21:02.110771 2945 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:21:02.111198 kubelet[2945]: I0317 18:21:02.110989 2945 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:21:02.113268 kubelet[2945]: I0317 18:21:02.113230 2945 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:21:02.113494 kubelet[2945]: I0317 18:21:02.113472 2945 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:21:02.113658 kubelet[2945]: I0317 18:21:02.113637 2945 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:21:02.121261 kubelet[2945]: I0317 18:21:02.120228 2945 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:21:02.121261 kubelet[2945]: I0317 18:21:02.120580 2945 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:21:02.121496 kubelet[2945]: I0317 18:21:02.121286 2945 server.go:1264] "Started kubelet" Mar 17 18:21:02.142051 kubelet[2945]: I0317 18:21:02.141999 2945 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:21:02.147359 kubelet[2945]: I0317 18:21:02.147287 2945 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:21:02.155924 kubelet[2945]: I0317 18:21:02.155868 2945 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:21:02.164541 kubelet[2945]: I0317 18:21:02.164484 2945 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:21:02.165773 kubelet[2945]: I0317 18:21:02.165664 2945 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:21:02.166085 kubelet[2945]: I0317 18:21:02.166046 2945 server.go:227] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:21:02.167929 kubelet[2945]: I0317 18:21:02.167880 2945 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:21:02.169799 kubelet[2945]: I0317 18:21:02.169750 2945 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:21:02.196823 kubelet[2945]: E0317 18:21:02.196760 2945 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:21:02.202005 kubelet[2945]: I0317 18:21:02.201952 2945 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:21:02.203981 kubelet[2945]: I0317 18:21:02.202140 2945 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:21:02.217377 kubelet[2945]: I0317 18:21:02.217248 2945 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:21:02.236381 kubelet[2945]: I0317 18:21:02.235862 2945 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:21:02.237876 kubelet[2945]: I0317 18:21:02.237820 2945 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:21:02.238027 kubelet[2945]: I0317 18:21:02.237889 2945 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:21:02.238027 kubelet[2945]: I0317 18:21:02.237924 2945 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:21:02.238027 kubelet[2945]: E0317 18:21:02.237995 2945 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:21:02.297370 kubelet[2945]: I0317 18:21:02.296208 2945 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-98" Mar 17 18:21:02.310722 kubelet[2945]: I0317 18:21:02.310683 2945 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-18-98" Mar 17 18:21:02.312985 kubelet[2945]: I0317 18:21:02.312939 2945 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-98" Mar 17 18:21:02.342457 kubelet[2945]: E0317 18:21:02.342423 2945 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:21:02.403079 kubelet[2945]: I0317 18:21:02.403047 2945 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:21:02.403340 kubelet[2945]: I0317 18:21:02.403315 2945 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:21:02.403478 kubelet[2945]: I0317 18:21:02.403459 2945 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:21:02.403817 kubelet[2945]: I0317 18:21:02.403791 2945 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:21:02.403964 kubelet[2945]: I0317 18:21:02.403921 2945 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:21:02.404076 kubelet[2945]: I0317 18:21:02.404057 2945 policy_none.go:49] "None policy: Start" Mar 17 18:21:02.405660 kubelet[2945]: I0317 18:21:02.405629 2945 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:21:02.405866 kubelet[2945]: I0317 
18:21:02.405845 2945 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:21:02.406287 kubelet[2945]: I0317 18:21:02.406265 2945 state_mem.go:75] "Updated machine memory state" Mar 17 18:21:02.414204 kubelet[2945]: I0317 18:21:02.414127 2945 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:21:02.414654 kubelet[2945]: I0317 18:21:02.414602 2945 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:21:02.418061 kubelet[2945]: I0317 18:21:02.418029 2945 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:21:02.543910 kubelet[2945]: I0317 18:21:02.543772 2945 topology_manager.go:215] "Topology Admit Handler" podUID="e91f5fb4badc0320642baf7861bac381" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-98" Mar 17 18:21:02.544322 kubelet[2945]: I0317 18:21:02.544289 2945 topology_manager.go:215] "Topology Admit Handler" podUID="cef8f49194701b50c629121cc76f4256" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-98" Mar 17 18:21:02.545336 kubelet[2945]: I0317 18:21:02.545287 2945 topology_manager.go:215] "Topology Admit Handler" podUID="5b1cbf05becac15ee26c21254f7db074" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-98" Mar 17 18:21:02.561674 kubelet[2945]: E0317 18:21:02.561631 2945 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-18-98\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-98" Mar 17 18:21:02.578218 kubelet[2945]: I0317 18:21:02.578174 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cef8f49194701b50c629121cc76f4256-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-98\" (UID: \"cef8f49194701b50c629121cc76f4256\") " 
pod="kube-system/kube-controller-manager-ip-172-31-18-98" Mar 17 18:21:02.578531 kubelet[2945]: I0317 18:21:02.578497 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cef8f49194701b50c629121cc76f4256-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-98\" (UID: \"cef8f49194701b50c629121cc76f4256\") " pod="kube-system/kube-controller-manager-ip-172-31-18-98" Mar 17 18:21:02.578754 kubelet[2945]: I0317 18:21:02.578722 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b1cbf05becac15ee26c21254f7db074-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-98\" (UID: \"5b1cbf05becac15ee26c21254f7db074\") " pod="kube-system/kube-scheduler-ip-172-31-18-98" Mar 17 18:21:02.578936 kubelet[2945]: I0317 18:21:02.578911 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cef8f49194701b50c629121cc76f4256-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-98\" (UID: \"cef8f49194701b50c629121cc76f4256\") " pod="kube-system/kube-controller-manager-ip-172-31-18-98" Mar 17 18:21:02.579105 kubelet[2945]: I0317 18:21:02.579080 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cef8f49194701b50c629121cc76f4256-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-98\" (UID: \"cef8f49194701b50c629121cc76f4256\") " pod="kube-system/kube-controller-manager-ip-172-31-18-98" Mar 17 18:21:02.579318 kubelet[2945]: I0317 18:21:02.579285 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cef8f49194701b50c629121cc76f4256-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-18-98\" (UID: \"cef8f49194701b50c629121cc76f4256\") " pod="kube-system/kube-controller-manager-ip-172-31-18-98" Mar 17 18:21:02.579488 kubelet[2945]: I0317 18:21:02.579463 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e91f5fb4badc0320642baf7861bac381-ca-certs\") pod \"kube-apiserver-ip-172-31-18-98\" (UID: \"e91f5fb4badc0320642baf7861bac381\") " pod="kube-system/kube-apiserver-ip-172-31-18-98" Mar 17 18:21:02.579666 kubelet[2945]: I0317 18:21:02.579642 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e91f5fb4badc0320642baf7861bac381-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-98\" (UID: \"e91f5fb4badc0320642baf7861bac381\") " pod="kube-system/kube-apiserver-ip-172-31-18-98" Mar 17 18:21:02.579830 kubelet[2945]: I0317 18:21:02.579805 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e91f5fb4badc0320642baf7861bac381-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-98\" (UID: \"e91f5fb4badc0320642baf7861bac381\") " pod="kube-system/kube-apiserver-ip-172-31-18-98" Mar 17 18:21:03.096691 sudo[2958]: pam_unix(sudo:session): session closed for user root Mar 17 18:21:03.136339 kubelet[2945]: I0317 18:21:03.136272 2945 apiserver.go:52] "Watching apiserver" Mar 17 18:21:03.168987 kubelet[2945]: I0317 18:21:03.168927 2945 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:21:03.310955 kubelet[2945]: E0317 18:21:03.307916 2945 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-18-98\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-98" Mar 17 18:21:03.336635 kubelet[2945]: I0317 18:21:03.336525 2945 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-98" podStartSLOduration=1.33650374 podStartE2EDuration="1.33650374s" podCreationTimestamp="2025-03-17 18:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:21:03.335578551 +0000 UTC m=+1.372017211" watchObservedRunningTime="2025-03-17 18:21:03.33650374 +0000 UTC m=+1.372942388" Mar 17 18:21:03.379508 kubelet[2945]: I0317 18:21:03.379285 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-98" podStartSLOduration=1.379263755 podStartE2EDuration="1.379263755s" podCreationTimestamp="2025-03-17 18:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:21:03.352045585 +0000 UTC m=+1.388484233" watchObservedRunningTime="2025-03-17 18:21:03.379263755 +0000 UTC m=+1.415702427" Mar 17 18:21:07.055027 sudo[2057]: pam_unix(sudo:session): session closed for user root Mar 17 18:21:07.078696 sshd[2054]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:07.085718 systemd[1]: sshd@4-172.31.18.98:22-139.178.89.65:54900.service: Deactivated successfully. Mar 17 18:21:07.085750 systemd-logind[1804]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:21:07.086988 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:21:07.087321 systemd[1]: session-5.scope: Consumed 12.325s CPU time. Mar 17 18:21:07.089524 systemd-logind[1804]: Removed session 5. 
Mar 17 18:21:11.052383 amazon-ssm-agent[1789]: 2025-03-17 18:21:11 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Mar 17 18:21:16.707843 kubelet[2945]: I0317 18:21:16.707794 2945 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:21:16.708581 env[1818]: time="2025-03-17T18:21:16.708516829Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 18:21:16.709345 kubelet[2945]: I0317 18:21:16.709315 2945 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:21:17.568545 kubelet[2945]: I0317 18:21:17.568463 2945 topology_manager.go:215] "Topology Admit Handler" podUID="fddb9ec7-b60d-4a4c-8c70-27629c574d0e" podNamespace="kube-system" podName="kube-proxy-wppw2" Mar 17 18:21:17.581235 systemd[1]: Created slice kubepods-besteffort-podfddb9ec7_b60d_4a4c_8c70_27629c574d0e.slice. 
Mar 17 18:21:17.592689 kubelet[2945]: W0317 18:21:17.592588 2945 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-18-98" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-98' and this object Mar 17 18:21:17.592689 kubelet[2945]: E0317 18:21:17.592650 2945 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-18-98" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-98' and this object Mar 17 18:21:17.600353 kubelet[2945]: I0317 18:21:17.600290 2945 topology_manager.go:215] "Topology Admit Handler" podUID="6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" podNamespace="kube-system" podName="cilium-2jgvb" Mar 17 18:21:17.611179 systemd[1]: Created slice kubepods-burstable-pod6e6a83be_a0d3_4ef1_b7a9_4a08c0f0bb70.slice. 
Mar 17 18:21:17.674914 kubelet[2945]: I0317 18:21:17.674852 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-bpf-maps\") pod \"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.675119 kubelet[2945]: I0317 18:21:17.674926 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-clustermesh-secrets\") pod \"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.675119 kubelet[2945]: I0317 18:21:17.674991 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tzm7\" (UniqueName: \"kubernetes.io/projected/fddb9ec7-b60d-4a4c-8c70-27629c574d0e-kube-api-access-6tzm7\") pod \"kube-proxy-wppw2\" (UID: \"fddb9ec7-b60d-4a4c-8c70-27629c574d0e\") " pod="kube-system/kube-proxy-wppw2" Mar 17 18:21:17.675119 kubelet[2945]: I0317 18:21:17.675031 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-hostproc\") pod \"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.675119 kubelet[2945]: I0317 18:21:17.675066 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cni-path\") pod \"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.675119 kubelet[2945]: I0317 18:21:17.675103 2945 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-hubble-tls\") pod \"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.675471 kubelet[2945]: I0317 18:21:17.675139 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fddb9ec7-b60d-4a4c-8c70-27629c574d0e-kube-proxy\") pod \"kube-proxy-wppw2\" (UID: \"fddb9ec7-b60d-4a4c-8c70-27629c574d0e\") " pod="kube-system/kube-proxy-wppw2" Mar 17 18:21:17.675471 kubelet[2945]: I0317 18:21:17.675208 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-host-proc-sys-kernel\") pod \"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.675471 kubelet[2945]: I0317 18:21:17.675246 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cilium-run\") pod \"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.675471 kubelet[2945]: I0317 18:21:17.675282 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cilium-cgroup\") pod \"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.675471 kubelet[2945]: I0317 18:21:17.675316 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-xtables-lock\") pod \"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.675471 kubelet[2945]: I0317 18:21:17.675353 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cilium-config-path\") pod \"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.675812 kubelet[2945]: I0317 18:21:17.675407 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-etc-cni-netd\") pod \"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.675812 kubelet[2945]: I0317 18:21:17.675441 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-host-proc-sys-net\") pod \"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.675812 kubelet[2945]: I0317 18:21:17.675476 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fddb9ec7-b60d-4a4c-8c70-27629c574d0e-lib-modules\") pod \"kube-proxy-wppw2\" (UID: \"fddb9ec7-b60d-4a4c-8c70-27629c574d0e\") " pod="kube-system/kube-proxy-wppw2" Mar 17 18:21:17.675812 kubelet[2945]: I0317 18:21:17.675512 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkttz\" (UniqueName: \"kubernetes.io/projected/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-kube-api-access-nkttz\") pod 
\"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.675812 kubelet[2945]: I0317 18:21:17.675546 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fddb9ec7-b60d-4a4c-8c70-27629c574d0e-xtables-lock\") pod \"kube-proxy-wppw2\" (UID: \"fddb9ec7-b60d-4a4c-8c70-27629c574d0e\") " pod="kube-system/kube-proxy-wppw2" Mar 17 18:21:17.676179 kubelet[2945]: I0317 18:21:17.675591 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-lib-modules\") pod \"cilium-2jgvb\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " pod="kube-system/cilium-2jgvb" Mar 17 18:21:17.873110 kubelet[2945]: I0317 18:21:17.872904 2945 topology_manager.go:215] "Topology Admit Handler" podUID="dae35ac2-b690-4fc6-a65e-fba2be1899e6" podNamespace="kube-system" podName="cilium-operator-599987898-rlr9d" Mar 17 18:21:17.900740 systemd[1]: Created slice kubepods-besteffort-poddae35ac2_b690_4fc6_a65e_fba2be1899e6.slice. Mar 17 18:21:17.921434 env[1818]: time="2025-03-17T18:21:17.921331388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2jgvb,Uid:6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70,Namespace:kube-system,Attempt:0,}" Mar 17 18:21:17.974662 env[1818]: time="2025-03-17T18:21:17.974440249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:21:17.974662 env[1818]: time="2025-03-17T18:21:17.974551850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:21:17.975098 env[1818]: time="2025-03-17T18:21:17.975025806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:21:17.975856 env[1818]: time="2025-03-17T18:21:17.975773592Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42 pid=3031 runtime=io.containerd.runc.v2 Mar 17 18:21:17.985286 kubelet[2945]: I0317 18:21:17.984876 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dae35ac2-b690-4fc6-a65e-fba2be1899e6-cilium-config-path\") pod \"cilium-operator-599987898-rlr9d\" (UID: \"dae35ac2-b690-4fc6-a65e-fba2be1899e6\") " pod="kube-system/cilium-operator-599987898-rlr9d" Mar 17 18:21:17.985286 kubelet[2945]: I0317 18:21:17.984966 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg7h4\" (UniqueName: \"kubernetes.io/projected/dae35ac2-b690-4fc6-a65e-fba2be1899e6-kube-api-access-wg7h4\") pod \"cilium-operator-599987898-rlr9d\" (UID: \"dae35ac2-b690-4fc6-a65e-fba2be1899e6\") " pod="kube-system/cilium-operator-599987898-rlr9d" Mar 17 18:21:18.002563 systemd[1]: Started cri-containerd-0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42.scope. 
Mar 17 18:21:18.058635 env[1818]: time="2025-03-17T18:21:18.058581152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2jgvb,Uid:6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70,Namespace:kube-system,Attempt:0,} returns sandbox id \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\"" Mar 17 18:21:18.064951 env[1818]: time="2025-03-17T18:21:18.064897500Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:21:18.208868 env[1818]: time="2025-03-17T18:21:18.208755181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rlr9d,Uid:dae35ac2-b690-4fc6-a65e-fba2be1899e6,Namespace:kube-system,Attempt:0,}" Mar 17 18:21:18.246890 env[1818]: time="2025-03-17T18:21:18.246322113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:21:18.246890 env[1818]: time="2025-03-17T18:21:18.246512051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:21:18.246890 env[1818]: time="2025-03-17T18:21:18.246595739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:21:18.247673 env[1818]: time="2025-03-17T18:21:18.247551211Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53 pid=3073 runtime=io.containerd.runc.v2 Mar 17 18:21:18.273204 systemd[1]: Started cri-containerd-d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53.scope. 
Mar 17 18:21:18.355872 env[1818]: time="2025-03-17T18:21:18.355811249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rlr9d,Uid:dae35ac2-b690-4fc6-a65e-fba2be1899e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53\"" Mar 17 18:21:18.777961 kubelet[2945]: E0317 18:21:18.777843 2945 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:21:18.778200 kubelet[2945]: E0317 18:21:18.777997 2945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fddb9ec7-b60d-4a4c-8c70-27629c574d0e-kube-proxy podName:fddb9ec7-b60d-4a4c-8c70-27629c574d0e nodeName:}" failed. No retries permitted until 2025-03-17 18:21:19.277965321 +0000 UTC m=+17.314403969 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/fddb9ec7-b60d-4a4c-8c70-27629c574d0e-kube-proxy") pod "kube-proxy-wppw2" (UID: "fddb9ec7-b60d-4a4c-8c70-27629c574d0e") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:21:19.392220 env[1818]: time="2025-03-17T18:21:19.391526853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wppw2,Uid:fddb9ec7-b60d-4a4c-8c70-27629c574d0e,Namespace:kube-system,Attempt:0,}" Mar 17 18:21:19.430871 env[1818]: time="2025-03-17T18:21:19.430731309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:21:19.431199 env[1818]: time="2025-03-17T18:21:19.431095860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:21:19.431394 env[1818]: time="2025-03-17T18:21:19.431313613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:21:19.431973 env[1818]: time="2025-03-17T18:21:19.431864370Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb83097f0b62ca6002fd906a45158033e6cf3435e4b2e7a38601742eb88461d4 pid=3115 runtime=io.containerd.runc.v2 Mar 17 18:21:19.469718 systemd[1]: run-containerd-runc-k8s.io-bb83097f0b62ca6002fd906a45158033e6cf3435e4b2e7a38601742eb88461d4-runc.4AaVEh.mount: Deactivated successfully. Mar 17 18:21:19.478806 systemd[1]: Started cri-containerd-bb83097f0b62ca6002fd906a45158033e6cf3435e4b2e7a38601742eb88461d4.scope. Mar 17 18:21:19.529893 env[1818]: time="2025-03-17T18:21:19.529838392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wppw2,Uid:fddb9ec7-b60d-4a4c-8c70-27629c574d0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb83097f0b62ca6002fd906a45158033e6cf3435e4b2e7a38601742eb88461d4\"" Mar 17 18:21:19.537177 env[1818]: time="2025-03-17T18:21:19.536603134Z" level=info msg="CreateContainer within sandbox \"bb83097f0b62ca6002fd906a45158033e6cf3435e4b2e7a38601742eb88461d4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:21:19.580783 env[1818]: time="2025-03-17T18:21:19.580713432Z" level=info msg="CreateContainer within sandbox \"bb83097f0b62ca6002fd906a45158033e6cf3435e4b2e7a38601742eb88461d4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e7e01e517beef68a9dd3dab757a6bceab1b3b5ddbe94297ac2b1370634c8194b\"" Mar 17 18:21:19.583230 env[1818]: time="2025-03-17T18:21:19.581801133Z" level=info msg="StartContainer for \"e7e01e517beef68a9dd3dab757a6bceab1b3b5ddbe94297ac2b1370634c8194b\"" Mar 17 18:21:19.616114 systemd[1]: Started cri-containerd-e7e01e517beef68a9dd3dab757a6bceab1b3b5ddbe94297ac2b1370634c8194b.scope. 
Mar 17 18:21:19.686820 env[1818]: time="2025-03-17T18:21:19.686724182Z" level=info msg="StartContainer for \"e7e01e517beef68a9dd3dab757a6bceab1b3b5ddbe94297ac2b1370634c8194b\" returns successfully" Mar 17 18:21:26.871294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1891576389.mount: Deactivated successfully. Mar 17 18:21:30.907942 env[1818]: time="2025-03-17T18:21:30.907846814Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:30.915672 env[1818]: time="2025-03-17T18:21:30.915604886Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:30.920823 env[1818]: time="2025-03-17T18:21:30.920746017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:30.922038 env[1818]: time="2025-03-17T18:21:30.921968308Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 18:21:30.928145 env[1818]: time="2025-03-17T18:21:30.928073706Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:21:30.931633 env[1818]: time="2025-03-17T18:21:30.931419110Z" level=info msg="CreateContainer within sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 
18:21:30.964459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount294670106.mount: Deactivated successfully. Mar 17 18:21:30.979543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671972360.mount: Deactivated successfully. Mar 17 18:21:30.984464 env[1818]: time="2025-03-17T18:21:30.984342277Z" level=info msg="CreateContainer within sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7\"" Mar 17 18:21:30.986732 env[1818]: time="2025-03-17T18:21:30.986651044Z" level=info msg="StartContainer for \"5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7\"" Mar 17 18:21:31.027988 systemd[1]: Started cri-containerd-5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7.scope. Mar 17 18:21:31.095526 env[1818]: time="2025-03-17T18:21:31.095438705Z" level=info msg="StartContainer for \"5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7\" returns successfully" Mar 17 18:21:31.112998 systemd[1]: cri-containerd-5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7.scope: Deactivated successfully. Mar 17 18:21:31.409578 kubelet[2945]: I0317 18:21:31.409483 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wppw2" podStartSLOduration=14.409463104 podStartE2EDuration="14.409463104s" podCreationTimestamp="2025-03-17 18:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:21:20.361712856 +0000 UTC m=+18.398151528" watchObservedRunningTime="2025-03-17 18:21:31.409463104 +0000 UTC m=+29.445901764" Mar 17 18:21:31.957770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7-rootfs.mount: Deactivated successfully. 
Mar 17 18:21:32.079733 env[1818]: time="2025-03-17T18:21:32.079345657Z" level=info msg="shim disconnected" id=5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7 Mar 17 18:21:32.079733 env[1818]: time="2025-03-17T18:21:32.079437806Z" level=warning msg="cleaning up after shim disconnected" id=5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7 namespace=k8s.io Mar 17 18:21:32.079733 env[1818]: time="2025-03-17T18:21:32.079462742Z" level=info msg="cleaning up dead shim" Mar 17 18:21:32.095091 env[1818]: time="2025-03-17T18:21:32.094987689Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:21:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3355 runtime=io.containerd.runc.v2\n" Mar 17 18:21:32.394735 env[1818]: time="2025-03-17T18:21:32.394091840Z" level=info msg="CreateContainer within sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:21:32.426789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1309421031.mount: Deactivated successfully. Mar 17 18:21:32.451061 env[1818]: time="2025-03-17T18:21:32.450963931Z" level=info msg="CreateContainer within sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a\"" Mar 17 18:21:32.453219 env[1818]: time="2025-03-17T18:21:32.452059453Z" level=info msg="StartContainer for \"5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a\"" Mar 17 18:21:32.485436 systemd[1]: Started cri-containerd-5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a.scope. 
Mar 17 18:21:32.549686 env[1818]: time="2025-03-17T18:21:32.549604404Z" level=info msg="StartContainer for \"5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a\" returns successfully" Mar 17 18:21:32.574912 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:21:32.575925 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:21:32.576925 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:21:32.581106 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:21:32.588290 systemd[1]: cri-containerd-5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a.scope: Deactivated successfully. Mar 17 18:21:32.605454 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:21:32.671987 env[1818]: time="2025-03-17T18:21:32.671814852Z" level=info msg="shim disconnected" id=5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a Mar 17 18:21:32.671987 env[1818]: time="2025-03-17T18:21:32.671887296Z" level=warning msg="cleaning up after shim disconnected" id=5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a namespace=k8s.io Mar 17 18:21:32.671987 env[1818]: time="2025-03-17T18:21:32.671911380Z" level=info msg="cleaning up dead shim" Mar 17 18:21:32.687462 env[1818]: time="2025-03-17T18:21:32.687378356Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:21:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3419 runtime=io.containerd.runc.v2\n" Mar 17 18:21:33.154969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154694846.mount: Deactivated successfully. Mar 17 18:21:33.395199 env[1818]: time="2025-03-17T18:21:33.395110171Z" level=info msg="CreateContainer within sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:21:33.435913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount668956526.mount: Deactivated successfully. 
Mar 17 18:21:33.452240 env[1818]: time="2025-03-17T18:21:33.452136590Z" level=info msg="CreateContainer within sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67\"" Mar 17 18:21:33.454066 env[1818]: time="2025-03-17T18:21:33.453617410Z" level=info msg="StartContainer for \"5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67\"" Mar 17 18:21:33.502407 systemd[1]: Started cri-containerd-5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67.scope. Mar 17 18:21:33.589204 env[1818]: time="2025-03-17T18:21:33.589077843Z" level=info msg="StartContainer for \"5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67\" returns successfully" Mar 17 18:21:33.596329 systemd[1]: cri-containerd-5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67.scope: Deactivated successfully. Mar 17 18:21:33.686182 env[1818]: time="2025-03-17T18:21:33.686006101Z" level=info msg="shim disconnected" id=5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67 Mar 17 18:21:33.686182 env[1818]: time="2025-03-17T18:21:33.686076673Z" level=warning msg="cleaning up after shim disconnected" id=5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67 namespace=k8s.io Mar 17 18:21:33.686182 env[1818]: time="2025-03-17T18:21:33.686099774Z" level=info msg="cleaning up dead shim" Mar 17 18:21:33.717849 env[1818]: time="2025-03-17T18:21:33.717778217Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:21:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3481 runtime=io.containerd.runc.v2\n" Mar 17 18:21:34.205026 env[1818]: time="2025-03-17T18:21:34.204960537Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:21:34.207391 env[1818]: time="2025-03-17T18:21:34.207327563Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:34.211602 env[1818]: time="2025-03-17T18:21:34.211519295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:21:34.212990 env[1818]: time="2025-03-17T18:21:34.212932039Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 18:21:34.220927 env[1818]: time="2025-03-17T18:21:34.220860940Z" level=info msg="CreateContainer within sandbox \"d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:21:34.248416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2488113921.mount: Deactivated successfully. Mar 17 18:21:34.265699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873458937.mount: Deactivated successfully. 
Mar 17 18:21:34.273140 env[1818]: time="2025-03-17T18:21:34.273072878Z" level=info msg="CreateContainer within sandbox \"d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746\""
Mar 17 18:21:34.275981 env[1818]: time="2025-03-17T18:21:34.275923110Z" level=info msg="StartContainer for \"68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746\""
Mar 17 18:21:34.309441 systemd[1]: Started cri-containerd-68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746.scope.
Mar 17 18:21:34.376705 env[1818]: time="2025-03-17T18:21:34.376627804Z" level=info msg="StartContainer for \"68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746\" returns successfully"
Mar 17 18:21:34.406483 env[1818]: time="2025-03-17T18:21:34.405810051Z" level=info msg="CreateContainer within sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:21:34.441781 env[1818]: time="2025-03-17T18:21:34.441700732Z" level=info msg="CreateContainer within sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f\""
Mar 17 18:21:34.442926 env[1818]: time="2025-03-17T18:21:34.442873522Z" level=info msg="StartContainer for \"ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f\""
Mar 17 18:21:34.467939 kubelet[2945]: I0317 18:21:34.467617 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-rlr9d" podStartSLOduration=1.6108437310000001 podStartE2EDuration="17.467594323s" podCreationTimestamp="2025-03-17 18:21:17 +0000 UTC" firstStartedPulling="2025-03-17 18:21:18.358724465 +0000 UTC m=+16.395163101" lastFinishedPulling="2025-03-17 18:21:34.215475069 +0000 UTC m=+32.251913693" observedRunningTime="2025-03-17 18:21:34.467049424 +0000 UTC m=+32.503488084" watchObservedRunningTime="2025-03-17 18:21:34.467594323 +0000 UTC m=+32.504032995"
Mar 17 18:21:34.498335 systemd[1]: Started cri-containerd-ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f.scope.
Mar 17 18:21:34.576946 systemd[1]: cri-containerd-ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f.scope: Deactivated successfully.
Mar 17 18:21:34.580303 env[1818]: time="2025-03-17T18:21:34.580112473Z" level=info msg="StartContainer for \"ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f\" returns successfully"
Mar 17 18:21:34.684990 env[1818]: time="2025-03-17T18:21:34.684925607Z" level=info msg="shim disconnected" id=ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f
Mar 17 18:21:34.685435 env[1818]: time="2025-03-17T18:21:34.685376905Z" level=warning msg="cleaning up after shim disconnected" id=ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f namespace=k8s.io
Mar 17 18:21:34.685598 env[1818]: time="2025-03-17T18:21:34.685568342Z" level=info msg="cleaning up dead shim"
Mar 17 18:21:34.713072 env[1818]: time="2025-03-17T18:21:34.713013795Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:21:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3577 runtime=io.containerd.runc.v2\n"
Mar 17 18:21:35.433628 env[1818]: time="2025-03-17T18:21:35.433568499Z" level=info msg="CreateContainer within sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:21:35.464236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330438210.mount: Deactivated successfully.
Mar 17 18:21:35.466539 env[1818]: time="2025-03-17T18:21:35.466462508Z" level=info msg="CreateContainer within sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\""
Mar 17 18:21:35.467846 env[1818]: time="2025-03-17T18:21:35.467795847Z" level=info msg="StartContainer for \"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\""
Mar 17 18:21:35.538911 systemd[1]: Started cri-containerd-fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48.scope.
Mar 17 18:21:35.725777 env[1818]: time="2025-03-17T18:21:35.725611979Z" level=info msg="StartContainer for \"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\" returns successfully"
Mar 17 18:21:35.959548 systemd[1]: run-containerd-runc-k8s.io-fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48-runc.Of6kNR.mount: Deactivated successfully.
Mar 17 18:21:36.048301 kubelet[2945]: I0317 18:21:36.047484 2945 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 17 18:21:36.143983 kubelet[2945]: I0317 18:21:36.143933 2945 topology_manager.go:215] "Topology Admit Handler" podUID="a79872fb-9fc5-4b0e-9fff-f3302d65cd4f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2tj84"
Mar 17 18:21:36.154878 systemd[1]: Created slice kubepods-burstable-poda79872fb_9fc5_4b0e_9fff_f3302d65cd4f.slice.
Mar 17 18:21:36.163256 kubelet[2945]: I0317 18:21:36.163184 2945 topology_manager.go:215] "Topology Admit Handler" podUID="be26682c-505b-4f49-aaa4-9ff781b122ea" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cfq8l"
Mar 17 18:21:36.173607 systemd[1]: Created slice kubepods-burstable-podbe26682c_505b_4f49_aaa4_9ff781b122ea.slice.
Mar 17 18:21:36.208289 kubelet[2945]: I0317 18:21:36.208219 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be26682c-505b-4f49-aaa4-9ff781b122ea-config-volume\") pod \"coredns-7db6d8ff4d-cfq8l\" (UID: \"be26682c-505b-4f49-aaa4-9ff781b122ea\") " pod="kube-system/coredns-7db6d8ff4d-cfq8l"
Mar 17 18:21:36.208500 kubelet[2945]: I0317 18:21:36.208307 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rvxg\" (UniqueName: \"kubernetes.io/projected/be26682c-505b-4f49-aaa4-9ff781b122ea-kube-api-access-2rvxg\") pod \"coredns-7db6d8ff4d-cfq8l\" (UID: \"be26682c-505b-4f49-aaa4-9ff781b122ea\") " pod="kube-system/coredns-7db6d8ff4d-cfq8l"
Mar 17 18:21:36.208500 kubelet[2945]: I0317 18:21:36.208354 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a79872fb-9fc5-4b0e-9fff-f3302d65cd4f-config-volume\") pod \"coredns-7db6d8ff4d-2tj84\" (UID: \"a79872fb-9fc5-4b0e-9fff-f3302d65cd4f\") " pod="kube-system/coredns-7db6d8ff4d-2tj84"
Mar 17 18:21:36.208500 kubelet[2945]: I0317 18:21:36.208397 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lfpn\" (UniqueName: \"kubernetes.io/projected/a79872fb-9fc5-4b0e-9fff-f3302d65cd4f-kube-api-access-2lfpn\") pod \"coredns-7db6d8ff4d-2tj84\" (UID: \"a79872fb-9fc5-4b0e-9fff-f3302d65cd4f\") " pod="kube-system/coredns-7db6d8ff4d-2tj84"
Mar 17 18:21:36.309191 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Mar 17 18:21:36.462413 env[1818]: time="2025-03-17T18:21:36.462336197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2tj84,Uid:a79872fb-9fc5-4b0e-9fff-f3302d65cd4f,Namespace:kube-system,Attempt:0,}"
Mar 17 18:21:36.471362 kubelet[2945]: I0317 18:21:36.471264 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2jgvb" podStartSLOduration=6.608420016 podStartE2EDuration="19.471241374s" podCreationTimestamp="2025-03-17 18:21:17 +0000 UTC" firstStartedPulling="2025-03-17 18:21:18.061903343 +0000 UTC m=+16.098341991" lastFinishedPulling="2025-03-17 18:21:30.924724713 +0000 UTC m=+28.961163349" observedRunningTime="2025-03-17 18:21:36.466352895 +0000 UTC m=+34.502791555" watchObservedRunningTime="2025-03-17 18:21:36.471241374 +0000 UTC m=+34.507680058"
Mar 17 18:21:36.482611 env[1818]: time="2025-03-17T18:21:36.482545029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cfq8l,Uid:be26682c-505b-4f49-aaa4-9ff781b122ea,Namespace:kube-system,Attempt:0,}"
Mar 17 18:21:37.235209 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Mar 17 18:21:39.043007 systemd-networkd[1534]: cilium_host: Link UP
Mar 17 18:21:39.043319 systemd-networkd[1534]: cilium_net: Link UP
Mar 17 18:21:39.047921 systemd-networkd[1534]: cilium_net: Gained carrier
Mar 17 18:21:39.050036 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Mar 17 18:21:39.050140 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Mar 17 18:21:39.051679 systemd-networkd[1534]: cilium_host: Gained carrier
Mar 17 18:21:39.052003 systemd-networkd[1534]: cilium_net: Gained IPv6LL
Mar 17 18:21:39.053417 systemd-networkd[1534]: cilium_host: Gained IPv6LL
Mar 17 18:21:39.055512 (udev-worker)[3701]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:21:39.057274 (udev-worker)[3739]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:21:39.224794 systemd-networkd[1534]: cilium_vxlan: Link UP
Mar 17 18:21:39.224806 systemd-networkd[1534]: cilium_vxlan: Gained carrier
Mar 17 18:21:39.698205 kernel: NET: Registered PF_ALG protocol family
Mar 17 18:21:41.002100 (udev-worker)[3750]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:21:41.007052 systemd-networkd[1534]: lxc_health: Link UP
Mar 17 18:21:41.021222 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:21:41.020657 systemd-networkd[1534]: lxc_health: Gained carrier
Mar 17 18:21:41.173852 systemd-networkd[1534]: cilium_vxlan: Gained IPv6LL
Mar 17 18:21:41.543972 systemd-networkd[1534]: lxc497abe5d0b87: Link UP
Mar 17 18:21:41.561351 kernel: eth0: renamed from tmp9bd08
Mar 17 18:21:41.568413 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc497abe5d0b87: link becomes ready
Mar 17 18:21:41.567731 systemd-networkd[1534]: lxc497abe5d0b87: Gained carrier
Mar 17 18:21:41.614023 systemd-networkd[1534]: lxc6a7940a60aac: Link UP
Mar 17 18:21:41.631282 kernel: eth0: renamed from tmpe7352
Mar 17 18:21:41.649313 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6a7940a60aac: link becomes ready
Mar 17 18:21:41.649729 systemd-networkd[1534]: lxc6a7940a60aac: Gained carrier
Mar 17 18:21:42.710038 systemd-networkd[1534]: lxc497abe5d0b87: Gained IPv6LL
Mar 17 18:21:42.837794 systemd-networkd[1534]: lxc_health: Gained IPv6LL
Mar 17 18:21:43.542091 systemd-networkd[1534]: lxc6a7940a60aac: Gained IPv6LL
Mar 17 18:21:48.988434 systemd[1]: Started sshd@5-172.31.18.98:22-139.178.89.65:59224.service.
Mar 17 18:21:49.168425 sshd[4105]: Accepted publickey for core from 139.178.89.65 port 59224 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:21:49.170337 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:21:49.178096 systemd-logind[1804]: New session 6 of user core.
Mar 17 18:21:49.180740 systemd[1]: Started session-6.scope.
Mar 17 18:21:49.482067 sshd[4105]: pam_unix(sshd:session): session closed for user core
Mar 17 18:21:49.488542 systemd-logind[1804]: Session 6 logged out. Waiting for processes to exit.
Mar 17 18:21:49.490322 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 18:21:49.491438 systemd[1]: sshd@5-172.31.18.98:22-139.178.89.65:59224.service: Deactivated successfully.
Mar 17 18:21:49.493784 systemd-logind[1804]: Removed session 6.
Mar 17 18:21:49.901009 env[1818]: time="2025-03-17T18:21:49.898939411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:21:49.901009 env[1818]: time="2025-03-17T18:21:49.899016190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:21:49.901009 env[1818]: time="2025-03-17T18:21:49.899043443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:21:49.907241 env[1818]: time="2025-03-17T18:21:49.902084211Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bd0879d86b55bdc83f4d27e074d13f8bb34f0e44c90d50c406718cd8b9e8ffd pid=4131 runtime=io.containerd.runc.v2
Mar 17 18:21:49.924494 env[1818]: time="2025-03-17T18:21:49.924336835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:21:49.924697 env[1818]: time="2025-03-17T18:21:49.924486637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:21:49.924697 env[1818]: time="2025-03-17T18:21:49.924547707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:21:49.933270 env[1818]: time="2025-03-17T18:21:49.925258013Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e73528c2ba9f2c063e5ee61886a4654290dabdbfee3cf7cd47f7147d550e8c08 pid=4147 runtime=io.containerd.runc.v2
Mar 17 18:21:49.984811 systemd[1]: run-containerd-runc-k8s.io-e73528c2ba9f2c063e5ee61886a4654290dabdbfee3cf7cd47f7147d550e8c08-runc.hj5lwd.mount: Deactivated successfully.
Mar 17 18:21:49.991433 systemd[1]: Started cri-containerd-9bd0879d86b55bdc83f4d27e074d13f8bb34f0e44c90d50c406718cd8b9e8ffd.scope.
Mar 17 18:21:50.000403 systemd[1]: Started cri-containerd-e73528c2ba9f2c063e5ee61886a4654290dabdbfee3cf7cd47f7147d550e8c08.scope.
Mar 17 18:21:50.093388 env[1818]: time="2025-03-17T18:21:50.093330098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cfq8l,Uid:be26682c-505b-4f49-aaa4-9ff781b122ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"e73528c2ba9f2c063e5ee61886a4654290dabdbfee3cf7cd47f7147d550e8c08\""
Mar 17 18:21:50.098759 env[1818]: time="2025-03-17T18:21:50.098686059Z" level=info msg="CreateContainer within sandbox \"e73528c2ba9f2c063e5ee61886a4654290dabdbfee3cf7cd47f7147d550e8c08\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:21:50.139634 env[1818]: time="2025-03-17T18:21:50.138403274Z" level=info msg="CreateContainer within sandbox \"e73528c2ba9f2c063e5ee61886a4654290dabdbfee3cf7cd47f7147d550e8c08\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5012c71ecb7efab07161db8e03747ebd368ba805758ac15206278e9741f31071\""
Mar 17 18:21:50.139856 env[1818]: time="2025-03-17T18:21:50.139796945Z" level=info msg="StartContainer for \"5012c71ecb7efab07161db8e03747ebd368ba805758ac15206278e9741f31071\""
Mar 17 18:21:50.182025 systemd[1]: Started cri-containerd-5012c71ecb7efab07161db8e03747ebd368ba805758ac15206278e9741f31071.scope.
Mar 17 18:21:50.190685 env[1818]: time="2025-03-17T18:21:50.190618855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2tj84,Uid:a79872fb-9fc5-4b0e-9fff-f3302d65cd4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bd0879d86b55bdc83f4d27e074d13f8bb34f0e44c90d50c406718cd8b9e8ffd\""
Mar 17 18:21:50.203396 env[1818]: time="2025-03-17T18:21:50.202531804Z" level=info msg="CreateContainer within sandbox \"9bd0879d86b55bdc83f4d27e074d13f8bb34f0e44c90d50c406718cd8b9e8ffd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:21:50.255382 env[1818]: time="2025-03-17T18:21:50.255282328Z" level=info msg="CreateContainer within sandbox \"9bd0879d86b55bdc83f4d27e074d13f8bb34f0e44c90d50c406718cd8b9e8ffd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a13386503a286083665fa82e3e4cf48742adfa5a0268aa2a984a2526bc2548c\""
Mar 17 18:21:50.262196 env[1818]: time="2025-03-17T18:21:50.257204269Z" level=info msg="StartContainer for \"0a13386503a286083665fa82e3e4cf48742adfa5a0268aa2a984a2526bc2548c\""
Mar 17 18:21:50.334168 systemd[1]: Started cri-containerd-0a13386503a286083665fa82e3e4cf48742adfa5a0268aa2a984a2526bc2548c.scope.
Mar 17 18:21:50.348186 env[1818]: time="2025-03-17T18:21:50.345246849Z" level=info msg="StartContainer for \"5012c71ecb7efab07161db8e03747ebd368ba805758ac15206278e9741f31071\" returns successfully"
Mar 17 18:21:50.441918 env[1818]: time="2025-03-17T18:21:50.441742858Z" level=info msg="StartContainer for \"0a13386503a286083665fa82e3e4cf48742adfa5a0268aa2a984a2526bc2548c\" returns successfully"
Mar 17 18:21:50.584546 kubelet[2945]: I0317 18:21:50.584453 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2tj84" podStartSLOduration=33.584406313 podStartE2EDuration="33.584406313s" podCreationTimestamp="2025-03-17 18:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:21:50.583027944 +0000 UTC m=+48.619466592" watchObservedRunningTime="2025-03-17 18:21:50.584406313 +0000 UTC m=+48.620844973"
Mar 17 18:21:50.585523 kubelet[2945]: I0317 18:21:50.585447 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cfq8l" podStartSLOduration=33.585406069 podStartE2EDuration="33.585406069s" podCreationTimestamp="2025-03-17 18:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:21:50.541695434 +0000 UTC m=+48.578134094" watchObservedRunningTime="2025-03-17 18:21:50.585406069 +0000 UTC m=+48.621844777"
Mar 17 18:21:50.910840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3420021231.mount: Deactivated successfully.
Mar 17 18:21:54.512044 systemd[1]: Started sshd@6-172.31.18.98:22-139.178.89.65:37488.service.
Mar 17 18:21:54.688517 sshd[4290]: Accepted publickey for core from 139.178.89.65 port 37488 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:21:54.689654 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:21:54.698143 systemd-logind[1804]: New session 7 of user core.
Mar 17 18:21:54.698741 systemd[1]: Started session-7.scope.
Mar 17 18:21:54.945426 sshd[4290]: pam_unix(sshd:session): session closed for user core
Mar 17 18:21:54.950366 systemd[1]: sshd@6-172.31.18.98:22-139.178.89.65:37488.service: Deactivated successfully.
Mar 17 18:21:54.951634 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 18:21:54.953651 systemd-logind[1804]: Session 7 logged out. Waiting for processes to exit.
Mar 17 18:21:54.955114 systemd-logind[1804]: Removed session 7.
Mar 17 18:21:59.974495 systemd[1]: Started sshd@7-172.31.18.98:22-139.178.89.65:37504.service.
Mar 17 18:22:00.149167 sshd[4303]: Accepted publickey for core from 139.178.89.65 port 37504 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:00.153741 sshd[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:00.164034 systemd[1]: Started session-8.scope.
Mar 17 18:22:00.165236 systemd-logind[1804]: New session 8 of user core.
Mar 17 18:22:00.423533 sshd[4303]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:00.429058 systemd-logind[1804]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:22:00.429703 systemd[1]: sshd@7-172.31.18.98:22-139.178.89.65:37504.service: Deactivated successfully.
Mar 17 18:22:00.431045 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:22:00.432689 systemd-logind[1804]: Removed session 8.
Mar 17 18:22:05.453744 systemd[1]: Started sshd@8-172.31.18.98:22-139.178.89.65:57972.service.
Mar 17 18:22:05.629669 sshd[4320]: Accepted publickey for core from 139.178.89.65 port 57972 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:05.631578 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:05.640364 systemd[1]: Started session-9.scope.
Mar 17 18:22:05.642725 systemd-logind[1804]: New session 9 of user core.
Mar 17 18:22:05.879552 sshd[4320]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:05.885425 systemd-logind[1804]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:22:05.886819 systemd[1]: sshd@8-172.31.18.98:22-139.178.89.65:57972.service: Deactivated successfully.
Mar 17 18:22:05.888194 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:22:05.890266 systemd-logind[1804]: Removed session 9.
Mar 17 18:22:10.914409 systemd[1]: Started sshd@9-172.31.18.98:22-139.178.89.65:57976.service.
Mar 17 18:22:11.096017 sshd[4332]: Accepted publickey for core from 139.178.89.65 port 57976 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:11.099118 sshd[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:11.107139 systemd-logind[1804]: New session 10 of user core.
Mar 17 18:22:11.109517 systemd[1]: Started session-10.scope.
Mar 17 18:22:11.379327 sshd[4332]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:11.384555 systemd-logind[1804]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:22:11.385378 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:22:11.386611 systemd[1]: sshd@9-172.31.18.98:22-139.178.89.65:57976.service: Deactivated successfully.
Mar 17 18:22:11.388908 systemd-logind[1804]: Removed session 10.
Mar 17 18:22:11.407511 systemd[1]: Started sshd@10-172.31.18.98:22-139.178.89.65:47264.service.
Mar 17 18:22:11.581676 sshd[4345]: Accepted publickey for core from 139.178.89.65 port 47264 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:11.584231 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:11.593257 systemd-logind[1804]: New session 11 of user core.
Mar 17 18:22:11.593649 systemd[1]: Started session-11.scope.
Mar 17 18:22:11.935453 sshd[4345]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:11.942289 systemd-logind[1804]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:22:11.942815 systemd[1]: sshd@10-172.31.18.98:22-139.178.89.65:47264.service: Deactivated successfully.
Mar 17 18:22:11.944205 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:22:11.949648 systemd-logind[1804]: Removed session 11.
Mar 17 18:22:11.967283 systemd[1]: Started sshd@11-172.31.18.98:22-139.178.89.65:47266.service.
Mar 17 18:22:12.144854 sshd[4354]: Accepted publickey for core from 139.178.89.65 port 47266 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:12.147682 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:12.155718 systemd-logind[1804]: New session 12 of user core.
Mar 17 18:22:12.156662 systemd[1]: Started session-12.scope.
Mar 17 18:22:12.410139 sshd[4354]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:12.415769 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:22:12.417069 systemd-logind[1804]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:22:12.417528 systemd[1]: sshd@11-172.31.18.98:22-139.178.89.65:47266.service: Deactivated successfully.
Mar 17 18:22:12.420551 systemd-logind[1804]: Removed session 12.
Mar 17 18:22:14.765041 amazon-ssm-agent[1789]: 2025-03-17 18:22:14 INFO [HealthCheck] HealthCheck reporting agent health.
Mar 17 18:22:17.438697 systemd[1]: Started sshd@12-172.31.18.98:22-139.178.89.65:47268.service.
Mar 17 18:22:17.609597 sshd[4367]: Accepted publickey for core from 139.178.89.65 port 47268 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:17.612256 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:17.619871 systemd-logind[1804]: New session 13 of user core.
Mar 17 18:22:17.621337 systemd[1]: Started session-13.scope.
Mar 17 18:22:17.865741 sshd[4367]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:17.871526 systemd-logind[1804]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:22:17.871671 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:22:17.873116 systemd[1]: sshd@12-172.31.18.98:22-139.178.89.65:47268.service: Deactivated successfully.
Mar 17 18:22:17.875696 systemd-logind[1804]: Removed session 13.
Mar 17 18:22:22.894362 systemd[1]: Started sshd@13-172.31.18.98:22-139.178.89.65:40728.service.
Mar 17 18:22:23.066038 sshd[4381]: Accepted publickey for core from 139.178.89.65 port 40728 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:23.069287 sshd[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:23.076251 systemd-logind[1804]: New session 14 of user core.
Mar 17 18:22:23.078824 systemd[1]: Started session-14.scope.
Mar 17 18:22:23.318190 sshd[4381]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:23.322858 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:22:23.324134 systemd-logind[1804]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:22:23.324572 systemd[1]: sshd@13-172.31.18.98:22-139.178.89.65:40728.service: Deactivated successfully.
Mar 17 18:22:23.327132 systemd-logind[1804]: Removed session 14.
Mar 17 18:22:28.347693 systemd[1]: Started sshd@14-172.31.18.98:22-139.178.89.65:40744.service.
Mar 17 18:22:28.524088 sshd[4393]: Accepted publickey for core from 139.178.89.65 port 40744 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:28.526836 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:28.535282 systemd-logind[1804]: New session 15 of user core.
Mar 17 18:22:28.536043 systemd[1]: Started session-15.scope.
Mar 17 18:22:28.783527 sshd[4393]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:28.787597 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:22:28.788902 systemd-logind[1804]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:22:28.789376 systemd[1]: sshd@14-172.31.18.98:22-139.178.89.65:40744.service: Deactivated successfully.
Mar 17 18:22:28.791745 systemd-logind[1804]: Removed session 15.
Mar 17 18:22:33.813871 systemd[1]: Started sshd@15-172.31.18.98:22-139.178.89.65:36060.service.
Mar 17 18:22:33.989607 sshd[4405]: Accepted publickey for core from 139.178.89.65 port 36060 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:33.992338 sshd[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:34.000309 systemd-logind[1804]: New session 16 of user core.
Mar 17 18:22:34.001750 systemd[1]: Started session-16.scope.
Mar 17 18:22:34.245178 sshd[4405]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:34.251004 systemd[1]: sshd@15-172.31.18.98:22-139.178.89.65:36060.service: Deactivated successfully.
Mar 17 18:22:34.252718 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:22:34.254130 systemd-logind[1804]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:22:34.255807 systemd-logind[1804]: Removed session 16.
Mar 17 18:22:34.274578 systemd[1]: Started sshd@16-172.31.18.98:22-139.178.89.65:36066.service.
Mar 17 18:22:34.453382 sshd[4417]: Accepted publickey for core from 139.178.89.65 port 36066 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:34.456492 sshd[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:34.465143 systemd[1]: Started session-17.scope.
Mar 17 18:22:34.466392 systemd-logind[1804]: New session 17 of user core.
Mar 17 18:22:34.784872 sshd[4417]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:34.789777 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:22:34.789827 systemd-logind[1804]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:22:34.791759 systemd[1]: sshd@16-172.31.18.98:22-139.178.89.65:36066.service: Deactivated successfully.
Mar 17 18:22:34.794455 systemd-logind[1804]: Removed session 17.
Mar 17 18:22:34.815110 systemd[1]: Started sshd@17-172.31.18.98:22-139.178.89.65:36076.service.
Mar 17 18:22:34.993090 sshd[4426]: Accepted publickey for core from 139.178.89.65 port 36076 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:34.996198 sshd[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:35.004879 systemd[1]: Started session-18.scope.
Mar 17 18:22:35.005944 systemd-logind[1804]: New session 18 of user core.
Mar 17 18:22:37.524982 sshd[4426]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:37.531935 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:22:37.533663 systemd[1]: sshd@17-172.31.18.98:22-139.178.89.65:36076.service: Deactivated successfully.
Mar 17 18:22:37.534993 systemd-logind[1804]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:22:37.536941 systemd-logind[1804]: Removed session 18.
Mar 17 18:22:37.556626 systemd[1]: Started sshd@18-172.31.18.98:22-139.178.89.65:36090.service.
Mar 17 18:22:37.734772 sshd[4442]: Accepted publickey for core from 139.178.89.65 port 36090 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:37.737395 sshd[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:37.746315 systemd-logind[1804]: New session 19 of user core.
Mar 17 18:22:37.746624 systemd[1]: Started session-19.scope.
Mar 17 18:22:38.225794 sshd[4442]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:38.230922 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:22:38.232274 systemd-logind[1804]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:22:38.232700 systemd[1]: sshd@18-172.31.18.98:22-139.178.89.65:36090.service: Deactivated successfully.
Mar 17 18:22:38.235026 systemd-logind[1804]: Removed session 19.
Mar 17 18:22:38.254324 systemd[1]: Started sshd@19-172.31.18.98:22-139.178.89.65:36100.service.
Mar 17 18:22:38.428942 sshd[4453]: Accepted publickey for core from 139.178.89.65 port 36100 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:38.431775 sshd[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:38.440498 systemd[1]: Started session-20.scope.
Mar 17 18:22:38.441781 systemd-logind[1804]: New session 20 of user core.
Mar 17 18:22:38.682957 sshd[4453]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:38.687061 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:22:38.688598 systemd-logind[1804]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:22:38.688964 systemd[1]: sshd@19-172.31.18.98:22-139.178.89.65:36100.service: Deactivated successfully.
Mar 17 18:22:38.692647 systemd-logind[1804]: Removed session 20.
Mar 17 18:22:43.714654 systemd[1]: Started sshd@20-172.31.18.98:22-139.178.89.65:41356.service.
Mar 17 18:22:43.894194 sshd[4464]: Accepted publickey for core from 139.178.89.65 port 41356 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:43.896775 sshd[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:43.905817 systemd[1]: Started session-21.scope.
Mar 17 18:22:43.907127 systemd-logind[1804]: New session 21 of user core.
Mar 17 18:22:44.153518 sshd[4464]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:44.158456 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:22:44.159653 systemd[1]: sshd@20-172.31.18.98:22-139.178.89.65:41356.service: Deactivated successfully.
Mar 17 18:22:44.161452 systemd-logind[1804]: Session 21 logged out. Waiting for processes to exit.
Mar 17 18:22:44.163184 systemd-logind[1804]: Removed session 21.
Mar 17 18:22:49.181331 systemd[1]: Started sshd@21-172.31.18.98:22-139.178.89.65:41370.service.
Mar 17 18:22:49.355079 sshd[4479]: Accepted publickey for core from 139.178.89.65 port 41370 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:49.358252 sshd[4479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:49.367340 systemd[1]: Started session-22.scope.
Mar 17 18:22:49.368468 systemd-logind[1804]: New session 22 of user core.
Mar 17 18:22:49.616212 sshd[4479]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:49.622002 systemd-logind[1804]: Session 22 logged out. Waiting for processes to exit.
Mar 17 18:22:49.622681 systemd[1]: sshd@21-172.31.18.98:22-139.178.89.65:41370.service: Deactivated successfully.
Mar 17 18:22:49.623975 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 18:22:49.625799 systemd-logind[1804]: Removed session 22.
Mar 17 18:22:54.645610 systemd[1]: Started sshd@22-172.31.18.98:22-139.178.89.65:59164.service.
Mar 17 18:22:54.821501 sshd[4494]: Accepted publickey for core from 139.178.89.65 port 59164 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:22:54.824104 sshd[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:22:54.831958 systemd-logind[1804]: New session 23 of user core.
Mar 17 18:22:54.833374 systemd[1]: Started session-23.scope.
Mar 17 18:22:55.071381 sshd[4494]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:55.076134 systemd-logind[1804]: Session 23 logged out. Waiting for processes to exit.
Mar 17 18:22:55.076558 systemd[1]: sshd@22-172.31.18.98:22-139.178.89.65:59164.service: Deactivated successfully.
Mar 17 18:22:55.077872 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 18:22:55.079761 systemd-logind[1804]: Removed session 23.
Mar 17 18:23:00.099040 systemd[1]: Started sshd@23-172.31.18.98:22-139.178.89.65:59180.service.
Mar 17 18:23:00.274037 sshd[4506]: Accepted publickey for core from 139.178.89.65 port 59180 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac
Mar 17 18:23:00.277578 sshd[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:23:00.286210 systemd-logind[1804]: New session 24 of user core.
Mar 17 18:23:00.286971 systemd[1]: Started session-24.scope.
Mar 17 18:23:00.528271 sshd[4506]: pam_unix(sshd:session): session closed for user core
Mar 17 18:23:00.533176 systemd[1]: sshd@23-172.31.18.98:22-139.178.89.65:59180.service: Deactivated successfully.
Mar 17 18:23:00.534499 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 18:23:00.536471 systemd-logind[1804]: Session 24 logged out. Waiting for processes to exit.
Mar 17 18:23:00.538530 systemd-logind[1804]: Removed session 24.
Mar 17 18:23:00.560319 systemd[1]: Started sshd@24-172.31.18.98:22-139.178.89.65:59186.service.
Mar 17 18:23:00.736911 sshd[4518]: Accepted publickey for core from 139.178.89.65 port 59186 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:23:00.740205 sshd[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:23:00.747280 systemd-logind[1804]: New session 25 of user core. Mar 17 18:23:00.748845 systemd[1]: Started session-25.scope. Mar 17 18:23:03.456125 env[1818]: time="2025-03-17T18:23:03.456053505Z" level=info msg="StopContainer for \"68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746\" with timeout 30 (s)" Mar 17 18:23:03.457245 env[1818]: time="2025-03-17T18:23:03.457141400Z" level=info msg="Stop container \"68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746\" with signal terminated" Mar 17 18:23:03.528059 env[1818]: time="2025-03-17T18:23:03.527974469Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:23:03.536631 systemd[1]: cri-containerd-68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746.scope: Deactivated successfully. Mar 17 18:23:03.567819 env[1818]: time="2025-03-17T18:23:03.567761870Z" level=info msg="StopContainer for \"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\" with timeout 2 (s)" Mar 17 18:23:03.577522 env[1818]: time="2025-03-17T18:23:03.577461923Z" level=info msg="Stop container \"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\" with signal terminated" Mar 17 18:23:03.603722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746-rootfs.mount: Deactivated successfully. 
Mar 17 18:23:03.621701 systemd-networkd[1534]: lxc_health: Link DOWN Mar 17 18:23:03.621721 systemd-networkd[1534]: lxc_health: Lost carrier Mar 17 18:23:03.633790 env[1818]: time="2025-03-17T18:23:03.633726342Z" level=info msg="shim disconnected" id=68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746 Mar 17 18:23:03.634227 env[1818]: time="2025-03-17T18:23:03.634185143Z" level=warning msg="cleaning up after shim disconnected" id=68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746 namespace=k8s.io Mar 17 18:23:03.634415 env[1818]: time="2025-03-17T18:23:03.634383997Z" level=info msg="cleaning up dead shim" Mar 17 18:23:03.669411 env[1818]: time="2025-03-17T18:23:03.669352579Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:23:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4570 runtime=io.containerd.runc.v2\n" Mar 17 18:23:03.674800 env[1818]: time="2025-03-17T18:23:03.674736497Z" level=info msg="StopContainer for \"68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746\" returns successfully" Mar 17 18:23:03.676638 env[1818]: time="2025-03-17T18:23:03.676520148Z" level=info msg="StopPodSandbox for \"d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53\"" Mar 17 18:23:03.676821 env[1818]: time="2025-03-17T18:23:03.676692086Z" level=info msg="Container to stop \"68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:23:03.683929 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53-shm.mount: Deactivated successfully. Mar 17 18:23:03.698057 systemd[1]: cri-containerd-fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48.scope: Deactivated successfully. Mar 17 18:23:03.698728 systemd[1]: cri-containerd-fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48.scope: Consumed 13.977s CPU time. 
Mar 17 18:23:03.708919 systemd[1]: cri-containerd-d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53.scope: Deactivated successfully. Mar 17 18:23:03.757588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48-rootfs.mount: Deactivated successfully. Mar 17 18:23:03.774065 env[1818]: time="2025-03-17T18:23:03.773985008Z" level=info msg="shim disconnected" id=fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48 Mar 17 18:23:03.774065 env[1818]: time="2025-03-17T18:23:03.774060177Z" level=warning msg="cleaning up after shim disconnected" id=fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48 namespace=k8s.io Mar 17 18:23:03.774452 env[1818]: time="2025-03-17T18:23:03.774083877Z" level=info msg="cleaning up dead shim" Mar 17 18:23:03.782904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53-rootfs.mount: Deactivated successfully. 
Mar 17 18:23:03.786565 env[1818]: time="2025-03-17T18:23:03.786490835Z" level=info msg="shim disconnected" id=d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53 Mar 17 18:23:03.787347 env[1818]: time="2025-03-17T18:23:03.787287416Z" level=warning msg="cleaning up after shim disconnected" id=d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53 namespace=k8s.io Mar 17 18:23:03.787674 env[1818]: time="2025-03-17T18:23:03.787634400Z" level=info msg="cleaning up dead shim" Mar 17 18:23:03.796048 env[1818]: time="2025-03-17T18:23:03.795972042Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:23:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4619 runtime=io.containerd.runc.v2\n" Mar 17 18:23:03.800567 env[1818]: time="2025-03-17T18:23:03.800488614Z" level=info msg="StopContainer for \"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\" returns successfully" Mar 17 18:23:03.801267 env[1818]: time="2025-03-17T18:23:03.801198350Z" level=info msg="StopPodSandbox for \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\"" Mar 17 18:23:03.801410 env[1818]: time="2025-03-17T18:23:03.801355024Z" level=info msg="Container to stop \"5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:23:03.801481 env[1818]: time="2025-03-17T18:23:03.801389008Z" level=info msg="Container to stop \"ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:23:03.801481 env[1818]: time="2025-03-17T18:23:03.801435749Z" level=info msg="Container to stop \"5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:23:03.801615 env[1818]: time="2025-03-17T18:23:03.801471809Z" level=info msg="Container to stop 
\"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:23:03.801615 env[1818]: time="2025-03-17T18:23:03.801499097Z" level=info msg="Container to stop \"5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:23:03.805956 env[1818]: time="2025-03-17T18:23:03.805895129Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:23:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4628 runtime=io.containerd.runc.v2\n" Mar 17 18:23:03.806901 env[1818]: time="2025-03-17T18:23:03.806839335Z" level=info msg="TearDown network for sandbox \"d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53\" successfully" Mar 17 18:23:03.807113 env[1818]: time="2025-03-17T18:23:03.807077382Z" level=info msg="StopPodSandbox for \"d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53\" returns successfully" Mar 17 18:23:03.829554 systemd[1]: cri-containerd-0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42.scope: Deactivated successfully. 
Mar 17 18:23:03.864568 kubelet[2945]: I0317 18:23:03.863861 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dae35ac2-b690-4fc6-a65e-fba2be1899e6-cilium-config-path\") pod \"dae35ac2-b690-4fc6-a65e-fba2be1899e6\" (UID: \"dae35ac2-b690-4fc6-a65e-fba2be1899e6\") " Mar 17 18:23:03.864568 kubelet[2945]: I0317 18:23:03.863936 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg7h4\" (UniqueName: \"kubernetes.io/projected/dae35ac2-b690-4fc6-a65e-fba2be1899e6-kube-api-access-wg7h4\") pod \"dae35ac2-b690-4fc6-a65e-fba2be1899e6\" (UID: \"dae35ac2-b690-4fc6-a65e-fba2be1899e6\") " Mar 17 18:23:03.872042 kubelet[2945]: I0317 18:23:03.871944 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dae35ac2-b690-4fc6-a65e-fba2be1899e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dae35ac2-b690-4fc6-a65e-fba2be1899e6" (UID: "dae35ac2-b690-4fc6-a65e-fba2be1899e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:23:03.873899 kubelet[2945]: I0317 18:23:03.873813 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae35ac2-b690-4fc6-a65e-fba2be1899e6-kube-api-access-wg7h4" (OuterVolumeSpecName: "kube-api-access-wg7h4") pod "dae35ac2-b690-4fc6-a65e-fba2be1899e6" (UID: "dae35ac2-b690-4fc6-a65e-fba2be1899e6"). InnerVolumeSpecName "kube-api-access-wg7h4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:23:03.876759 env[1818]: time="2025-03-17T18:23:03.876685813Z" level=info msg="shim disconnected" id=0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42 Mar 17 18:23:03.876956 env[1818]: time="2025-03-17T18:23:03.876757682Z" level=warning msg="cleaning up after shim disconnected" id=0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42 namespace=k8s.io Mar 17 18:23:03.876956 env[1818]: time="2025-03-17T18:23:03.876782906Z" level=info msg="cleaning up dead shim" Mar 17 18:23:03.891944 env[1818]: time="2025-03-17T18:23:03.891869853Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:23:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4668 runtime=io.containerd.runc.v2\n" Mar 17 18:23:03.892558 env[1818]: time="2025-03-17T18:23:03.892495816Z" level=info msg="TearDown network for sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" successfully" Mar 17 18:23:03.892690 env[1818]: time="2025-03-17T18:23:03.892558132Z" level=info msg="StopPodSandbox for \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" returns successfully" Mar 17 18:23:03.964901 kubelet[2945]: I0317 18:23:03.964758 2945 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wg7h4\" (UniqueName: \"kubernetes.io/projected/dae35ac2-b690-4fc6-a65e-fba2be1899e6-kube-api-access-wg7h4\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:03.964901 kubelet[2945]: I0317 18:23:03.964811 2945 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dae35ac2-b690-4fc6-a65e-fba2be1899e6-cilium-config-path\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.065311 kubelet[2945]: I0317 18:23:04.065243 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-lib-modules" (OuterVolumeSpecName: 
"lib-modules") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:04.065633 kubelet[2945]: I0317 18:23:04.065583 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-lib-modules\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.066462 kubelet[2945]: I0317 18:23:04.065800 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-clustermesh-secrets\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.066600 kubelet[2945]: I0317 18:23:04.066519 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-bpf-maps\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.066600 kubelet[2945]: I0317 18:23:04.066562 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cilium-run\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.066600 kubelet[2945]: I0317 18:23:04.066596 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cilium-cgroup\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.066818 kubelet[2945]: I0317 18:23:04.066639 2945 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkttz\" (UniqueName: \"kubernetes.io/projected/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-kube-api-access-nkttz\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.066818 kubelet[2945]: I0317 18:23:04.066702 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-hostproc\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.066818 kubelet[2945]: I0317 18:23:04.066735 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-etc-cni-netd\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.066818 kubelet[2945]: I0317 18:23:04.066767 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-host-proc-sys-net\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.066818 kubelet[2945]: I0317 18:23:04.066808 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cilium-config-path\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.067127 kubelet[2945]: I0317 18:23:04.066870 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-hubble-tls\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" 
(UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.067127 kubelet[2945]: I0317 18:23:04.066907 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-host-proc-sys-kernel\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.067127 kubelet[2945]: I0317 18:23:04.066944 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cni-path\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.067127 kubelet[2945]: I0317 18:23:04.066979 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-xtables-lock\") pod \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\" (UID: \"6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70\") " Mar 17 18:23:04.067127 kubelet[2945]: I0317 18:23:04.067041 2945 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-lib-modules\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.067127 kubelet[2945]: I0317 18:23:04.067092 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:04.067517 kubelet[2945]: I0317 18:23:04.067143 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:04.067517 kubelet[2945]: I0317 18:23:04.067212 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:04.067517 kubelet[2945]: I0317 18:23:04.067249 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:04.068009 kubelet[2945]: I0317 18:23:04.067956 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-hostproc" (OuterVolumeSpecName: "hostproc") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:04.068123 kubelet[2945]: I0317 18:23:04.068026 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:04.068123 kubelet[2945]: I0317 18:23:04.068065 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:04.068601 kubelet[2945]: I0317 18:23:04.068556 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:04.071243 kubelet[2945]: I0317 18:23:04.071111 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cni-path" (OuterVolumeSpecName: "cni-path") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:04.074180 kubelet[2945]: I0317 18:23:04.074095 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:23:04.075876 kubelet[2945]: I0317 18:23:04.075805 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-kube-api-access-nkttz" (OuterVolumeSpecName: "kube-api-access-nkttz") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "kube-api-access-nkttz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:23:04.080196 kubelet[2945]: I0317 18:23:04.080095 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:23:04.080463 kubelet[2945]: I0317 18:23:04.080416 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" (UID: "6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:23:04.167419 kubelet[2945]: I0317 18:23:04.167363 2945 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-bpf-maps\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.167607 kubelet[2945]: I0317 18:23:04.167433 2945 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cilium-run\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.167607 kubelet[2945]: I0317 18:23:04.167461 2945 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cilium-cgroup\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.167607 kubelet[2945]: I0317 18:23:04.167510 2945 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nkttz\" (UniqueName: \"kubernetes.io/projected/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-kube-api-access-nkttz\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.167607 kubelet[2945]: I0317 18:23:04.167539 2945 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-hostproc\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.167607 kubelet[2945]: I0317 18:23:04.167564 2945 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-etc-cni-netd\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.167949 kubelet[2945]: I0317 18:23:04.167614 2945 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cilium-config-path\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 
18:23:04.167949 kubelet[2945]: I0317 18:23:04.167636 2945 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-host-proc-sys-net\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.167949 kubelet[2945]: I0317 18:23:04.167655 2945 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-cni-path\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.167949 kubelet[2945]: I0317 18:23:04.167709 2945 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-hubble-tls\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.167949 kubelet[2945]: I0317 18:23:04.167731 2945 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-host-proc-sys-kernel\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.170405 kubelet[2945]: I0317 18:23:04.168211 2945 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-xtables-lock\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.170405 kubelet[2945]: I0317 18:23:04.168241 2945 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70-clustermesh-secrets\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:04.251373 systemd[1]: Removed slice kubepods-besteffort-poddae35ac2_b690_4fc6_a65e_fba2be1899e6.slice. Mar 17 18:23:04.257006 systemd[1]: Removed slice kubepods-burstable-pod6e6a83be_a0d3_4ef1_b7a9_4a08c0f0bb70.slice. 
Mar 17 18:23:04.257241 systemd[1]: kubepods-burstable-pod6e6a83be_a0d3_4ef1_b7a9_4a08c0f0bb70.slice: Consumed 14.196s CPU time. Mar 17 18:23:04.474179 systemd[1]: var-lib-kubelet-pods-dae35ac2\x2db690\x2d4fc6\x2da65e\x2dfba2be1899e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwg7h4.mount: Deactivated successfully. Mar 17 18:23:04.474637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42-rootfs.mount: Deactivated successfully. Mar 17 18:23:04.474909 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42-shm.mount: Deactivated successfully. Mar 17 18:23:04.475180 systemd[1]: var-lib-kubelet-pods-6e6a83be\x2da0d3\x2d4ef1\x2db7a9\x2d4a08c0f0bb70-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnkttz.mount: Deactivated successfully. Mar 17 18:23:04.475564 systemd[1]: var-lib-kubelet-pods-6e6a83be\x2da0d3\x2d4ef1\x2db7a9\x2d4a08c0f0bb70-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:23:04.475820 systemd[1]: var-lib-kubelet-pods-6e6a83be\x2da0d3\x2d4ef1\x2db7a9\x2d4a08c0f0bb70-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:23:04.697621 kubelet[2945]: I0317 18:23:04.697584 2945 scope.go:117] "RemoveContainer" containerID="fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48" Mar 17 18:23:04.701768 env[1818]: time="2025-03-17T18:23:04.701412295Z" level=info msg="RemoveContainer for \"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\"" Mar 17 18:23:04.712786 env[1818]: time="2025-03-17T18:23:04.712724684Z" level=info msg="RemoveContainer for \"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\" returns successfully" Mar 17 18:23:04.714271 kubelet[2945]: I0317 18:23:04.714219 2945 scope.go:117] "RemoveContainer" containerID="ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f" Mar 17 18:23:04.718432 env[1818]: time="2025-03-17T18:23:04.718367936Z" level=info msg="RemoveContainer for \"ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f\"" Mar 17 18:23:04.726282 env[1818]: time="2025-03-17T18:23:04.726124915Z" level=info msg="RemoveContainer for \"ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f\" returns successfully" Mar 17 18:23:04.728614 kubelet[2945]: I0317 18:23:04.728563 2945 scope.go:117] "RemoveContainer" containerID="5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67" Mar 17 18:23:04.747566 env[1818]: time="2025-03-17T18:23:04.747488552Z" level=info msg="RemoveContainer for \"5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67\"" Mar 17 18:23:04.760047 env[1818]: time="2025-03-17T18:23:04.759989266Z" level=info msg="RemoveContainer for \"5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67\" returns successfully" Mar 17 18:23:04.760723 kubelet[2945]: I0317 18:23:04.760581 2945 scope.go:117] "RemoveContainer" containerID="5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a" Mar 17 18:23:04.763401 env[1818]: time="2025-03-17T18:23:04.763351330Z" level=info msg="RemoveContainer for 
\"5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a\"" Mar 17 18:23:04.776069 env[1818]: time="2025-03-17T18:23:04.776009102Z" level=info msg="RemoveContainer for \"5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a\" returns successfully" Mar 17 18:23:04.776721 kubelet[2945]: I0317 18:23:04.776537 2945 scope.go:117] "RemoveContainer" containerID="5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7" Mar 17 18:23:04.778884 env[1818]: time="2025-03-17T18:23:04.778811432Z" level=info msg="RemoveContainer for \"5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7\"" Mar 17 18:23:04.785113 env[1818]: time="2025-03-17T18:23:04.785041058Z" level=info msg="RemoveContainer for \"5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7\" returns successfully" Mar 17 18:23:04.785645 kubelet[2945]: I0317 18:23:04.785479 2945 scope.go:117] "RemoveContainer" containerID="fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48" Mar 17 18:23:04.786286 env[1818]: time="2025-03-17T18:23:04.786121622Z" level=error msg="ContainerStatus for \"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\": not found" Mar 17 18:23:04.786962 kubelet[2945]: E0317 18:23:04.786628 2945 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\": not found" containerID="fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48" Mar 17 18:23:04.786962 kubelet[2945]: I0317 18:23:04.786680 2945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48"} err="failed to get container status 
\"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\": rpc error: code = NotFound desc = an error occurred when try to find container \"fac12e9f62c873e25d758041fb489530fee1b678ab41fbd9e67ec27034d01b48\": not found" Mar 17 18:23:04.786962 kubelet[2945]: I0317 18:23:04.786813 2945 scope.go:117] "RemoveContainer" containerID="ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f" Mar 17 18:23:04.787441 env[1818]: time="2025-03-17T18:23:04.787246574Z" level=error msg="ContainerStatus for \"ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f\": not found" Mar 17 18:23:04.787885 kubelet[2945]: E0317 18:23:04.787676 2945 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f\": not found" containerID="ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f" Mar 17 18:23:04.787885 kubelet[2945]: I0317 18:23:04.787722 2945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f"} err="failed to get container status \"ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab53c3d5064d1cd40394824393671e91f89f12791d0649aa7f00101e20e4e29f\": not found" Mar 17 18:23:04.787885 kubelet[2945]: I0317 18:23:04.787755 2945 scope.go:117] "RemoveContainer" containerID="5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67" Mar 17 18:23:04.788570 env[1818]: time="2025-03-17T18:23:04.788487675Z" level=error msg="ContainerStatus for \"5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67\": not found" Mar 17 18:23:04.789114 kubelet[2945]: E0317 18:23:04.788878 2945 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67\": not found" containerID="5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67" Mar 17 18:23:04.789114 kubelet[2945]: I0317 18:23:04.788928 2945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67"} err="failed to get container status \"5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ec35ac7c91f45d263df22be5b3feac328b2e120a36c30029bff40acb0e09e67\": not found" Mar 17 18:23:04.789114 kubelet[2945]: I0317 18:23:04.788961 2945 scope.go:117] "RemoveContainer" containerID="5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a" Mar 17 18:23:04.789552 env[1818]: time="2025-03-17T18:23:04.789404329Z" level=error msg="ContainerStatus for \"5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a\": not found" Mar 17 18:23:04.789989 kubelet[2945]: E0317 18:23:04.789760 2945 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a\": not found" containerID="5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a" Mar 17 18:23:04.789989 kubelet[2945]: I0317 
18:23:04.789806 2945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a"} err="failed to get container status \"5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5246c2850a73272c448c8f829926d681c47286be92c3eaad815713f17c4faa8a\": not found" Mar 17 18:23:04.789989 kubelet[2945]: I0317 18:23:04.789847 2945 scope.go:117] "RemoveContainer" containerID="5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7" Mar 17 18:23:04.790719 env[1818]: time="2025-03-17T18:23:04.790636130Z" level=error msg="ContainerStatus for \"5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7\": not found" Mar 17 18:23:04.791314 kubelet[2945]: E0317 18:23:04.791028 2945 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7\": not found" containerID="5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7" Mar 17 18:23:04.791314 kubelet[2945]: I0317 18:23:04.791072 2945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7"} err="failed to get container status \"5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"5958020f0a0674ebea2ac6f4ae70f1e3befd9fe5dac0e7e5488c2b3364c260d7\": not found" Mar 17 18:23:04.791314 kubelet[2945]: I0317 18:23:04.791104 2945 scope.go:117] "RemoveContainer" 
containerID="68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746" Mar 17 18:23:04.793429 env[1818]: time="2025-03-17T18:23:04.793337155Z" level=info msg="RemoveContainer for \"68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746\"" Mar 17 18:23:04.799977 env[1818]: time="2025-03-17T18:23:04.799905018Z" level=info msg="RemoveContainer for \"68b548e595f38339a197ceb5c9ffb9331dddf22c52ec8684289d82dc0f0c5746\" returns successfully" Mar 17 18:23:05.402571 sshd[4518]: pam_unix(sshd:session): session closed for user core Mar 17 18:23:05.407145 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:23:05.407506 systemd[1]: session-25.scope: Consumed 1.902s CPU time. Mar 17 18:23:05.408300 systemd[1]: sshd@24-172.31.18.98:22-139.178.89.65:59186.service: Deactivated successfully. Mar 17 18:23:05.410019 systemd-logind[1804]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:23:05.411824 systemd-logind[1804]: Removed session 25. Mar 17 18:23:05.428862 systemd[1]: Started sshd@25-172.31.18.98:22-139.178.89.65:34968.service. Mar 17 18:23:05.600836 sshd[4686]: Accepted publickey for core from 139.178.89.65 port 34968 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:23:05.603984 sshd[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:23:05.612494 systemd-logind[1804]: New session 26 of user core. Mar 17 18:23:05.612886 systemd[1]: Started session-26.scope. 
Mar 17 18:23:06.251881 kubelet[2945]: I0317 18:23:06.251817 2945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" path="/var/lib/kubelet/pods/6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70/volumes" Mar 17 18:23:06.254633 kubelet[2945]: I0317 18:23:06.254581 2945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae35ac2-b690-4fc6-a65e-fba2be1899e6" path="/var/lib/kubelet/pods/dae35ac2-b690-4fc6-a65e-fba2be1899e6/volumes" Mar 17 18:23:07.460121 kubelet[2945]: E0317 18:23:07.460075 2945 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:23:08.620210 sshd[4686]: pam_unix(sshd:session): session closed for user core Mar 17 18:23:08.625471 systemd-logind[1804]: Session 26 logged out. Waiting for processes to exit. Mar 17 18:23:08.625882 systemd[1]: sshd@25-172.31.18.98:22-139.178.89.65:34968.service: Deactivated successfully. Mar 17 18:23:08.627273 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 18:23:08.627593 systemd[1]: session-26.scope: Consumed 2.780s CPU time. Mar 17 18:23:08.629646 systemd-logind[1804]: Removed session 26. 
Mar 17 18:23:08.637932 kubelet[2945]: I0317 18:23:08.637883 2945 topology_manager.go:215] "Topology Admit Handler" podUID="bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" podNamespace="kube-system" podName="cilium-nplg2" Mar 17 18:23:08.638684 kubelet[2945]: E0317 18:23:08.638649 2945 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" containerName="mount-cgroup" Mar 17 18:23:08.638822 kubelet[2945]: E0317 18:23:08.638798 2945 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" containerName="clean-cilium-state" Mar 17 18:23:08.638953 kubelet[2945]: E0317 18:23:08.638929 2945 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" containerName="cilium-agent" Mar 17 18:23:08.639077 kubelet[2945]: E0317 18:23:08.639055 2945 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" containerName="apply-sysctl-overwrites" Mar 17 18:23:08.639219 kubelet[2945]: E0317 18:23:08.639196 2945 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" containerName="mount-bpf-fs" Mar 17 18:23:08.639332 kubelet[2945]: E0317 18:23:08.639310 2945 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dae35ac2-b690-4fc6-a65e-fba2be1899e6" containerName="cilium-operator" Mar 17 18:23:08.639514 kubelet[2945]: I0317 18:23:08.639490 2945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e6a83be-a0d3-4ef1-b7a9-4a08c0f0bb70" containerName="cilium-agent" Mar 17 18:23:08.639638 kubelet[2945]: I0317 18:23:08.639616 2945 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae35ac2-b690-4fc6-a65e-fba2be1899e6" containerName="cilium-operator" Mar 17 18:23:08.651599 systemd[1]: Started sshd@26-172.31.18.98:22-139.178.89.65:34976.service. 
Mar 17 18:23:08.664566 kubelet[2945]: W0317 18:23:08.664526 2945 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-18-98" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-98' and this object Mar 17 18:23:08.665056 kubelet[2945]: E0317 18:23:08.665008 2945 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-18-98" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-98' and this object Mar 17 18:23:08.668625 systemd[1]: Created slice kubepods-burstable-podbc3d6913_802b_4ad5_b0a4_e9edecf7a8bf.slice. Mar 17 18:23:08.669218 kubelet[2945]: W0317 18:23:08.665191 2945 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-18-98" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-98' and this object Mar 17 18:23:08.669453 kubelet[2945]: E0317 18:23:08.669414 2945 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-18-98" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-98' and this object Mar 17 18:23:08.669581 kubelet[2945]: W0317 18:23:08.665251 2945 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-18-98" cannot list resource "secrets" in API group "" in the namespace "kube-system": no 
relationship found between node 'ip-172-31-18-98' and this object Mar 17 18:23:08.669729 kubelet[2945]: E0317 18:23:08.669707 2945 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-18-98" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-98' and this object Mar 17 18:23:08.669862 kubelet[2945]: W0317 18:23:08.665296 2945 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-18-98" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-98' and this object Mar 17 18:23:08.669997 kubelet[2945]: E0317 18:23:08.669977 2945 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-18-98" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-98' and this object Mar 17 18:23:08.796459 kubelet[2945]: I0317 18:23:08.796378 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-bpf-maps\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.796630 kubelet[2945]: I0317 18:23:08.796463 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-hostproc\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 
18:23:08.796630 kubelet[2945]: I0317 18:23:08.796531 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cni-path\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.796630 kubelet[2945]: I0317 18:23:08.796593 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-etc-cni-netd\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.796850 kubelet[2945]: I0317 18:23:08.796634 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-lib-modules\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.796850 kubelet[2945]: I0317 18:23:08.796693 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-xtables-lock\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.796850 kubelet[2945]: I0317 18:23:08.796734 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-config-path\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.796850 kubelet[2945]: I0317 18:23:08.796800 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-cgroup\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.797082 kubelet[2945]: I0317 18:23:08.796861 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-hubble-tls\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.797082 kubelet[2945]: I0317 18:23:08.796900 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-run\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.797082 kubelet[2945]: I0317 18:23:08.796963 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-host-proc-sys-net\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.797082 kubelet[2945]: I0317 18:23:08.797023 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-host-proc-sys-kernel\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.797082 kubelet[2945]: I0317 18:23:08.797066 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7c7n\" (UniqueName: 
\"kubernetes.io/projected/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-kube-api-access-b7c7n\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.797406 kubelet[2945]: I0317 18:23:08.797128 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-clustermesh-secrets\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.797406 kubelet[2945]: I0317 18:23:08.797217 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-ipsec-secrets\") pod \"cilium-nplg2\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " pod="kube-system/cilium-nplg2" Mar 17 18:23:08.839447 sshd[4696]: Accepted publickey for core from 139.178.89.65 port 34976 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:23:08.841966 sshd[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:23:08.853719 systemd[1]: Started session-27.scope. Mar 17 18:23:08.854822 systemd-logind[1804]: New session 27 of user core. Mar 17 18:23:09.121101 kubelet[2945]: E0317 18:23:09.121021 2945 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-nplg2" podUID="bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" Mar 17 18:23:09.121898 sshd[4696]: pam_unix(sshd:session): session closed for user core Mar 17 18:23:09.126563 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 18:23:09.128897 systemd[1]: sshd@26-172.31.18.98:22-139.178.89.65:34976.service: Deactivated successfully. 
Mar 17 18:23:09.130360 systemd-logind[1804]: Session 27 logged out. Waiting for processes to exit. Mar 17 18:23:09.132701 systemd-logind[1804]: Removed session 27. Mar 17 18:23:09.150192 systemd[1]: Started sshd@27-172.31.18.98:22-139.178.89.65:34986.service. Mar 17 18:23:09.328755 sshd[4709]: Accepted publickey for core from 139.178.89.65 port 34986 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:23:09.331357 sshd[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:23:09.340974 systemd[1]: Started session-28.scope. Mar 17 18:23:09.341759 systemd-logind[1804]: New session 28 of user core. Mar 17 18:23:09.911610 kubelet[2945]: I0317 18:23:09.910717 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-xtables-lock\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.911610 kubelet[2945]: I0317 18:23:09.910789 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7c7n\" (UniqueName: \"kubernetes.io/projected/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-kube-api-access-b7c7n\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.911610 kubelet[2945]: I0317 18:23:09.910828 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-etc-cni-netd\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.911610 kubelet[2945]: I0317 18:23:09.910867 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-run\") pod 
\"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.911610 kubelet[2945]: I0317 18:23:09.910906 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-hubble-tls\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.911610 kubelet[2945]: I0317 18:23:09.910938 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-lib-modules\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.912474 kubelet[2945]: I0317 18:23:09.910971 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cni-path\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.912474 kubelet[2945]: I0317 18:23:09.911003 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-cgroup\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.912474 kubelet[2945]: I0317 18:23:09.911037 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-host-proc-sys-net\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.912474 kubelet[2945]: I0317 18:23:09.911071 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-bpf-maps\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.912474 kubelet[2945]: I0317 18:23:09.911106 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-config-path\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.912474 kubelet[2945]: I0317 18:23:09.911138 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-host-proc-sys-kernel\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.912814 kubelet[2945]: I0317 18:23:09.911245 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-hostproc\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.912814 kubelet[2945]: I0317 18:23:09.911331 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-ipsec-secrets\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:09.912814 kubelet[2945]: I0317 18:23:09.911969 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cni-path" (OuterVolumeSpecName: "cni-path") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:09.912814 kubelet[2945]: I0317 18:23:09.912028 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:09.924563 kubelet[2945]: I0317 18:23:09.921341 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:09.924563 kubelet[2945]: I0317 18:23:09.921423 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:09.924563 kubelet[2945]: I0317 18:23:09.921909 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:09.924563 kubelet[2945]: I0317 18:23:09.921960 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:09.924563 kubelet[2945]: I0317 18:23:09.922000 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:09.923440 systemd[1]: var-lib-kubelet-pods-bc3d6913\x2d802b\x2d4ad5\x2db0a4\x2de9edecf7a8bf-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 18:23:09.926865 kubelet[2945]: I0317 18:23:09.926786 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:09.926975 kubelet[2945]: I0317 18:23:09.926884 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-hostproc" (OuterVolumeSpecName: "hostproc") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:09.927287 kubelet[2945]: I0317 18:23:09.927252 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:23:09.927450 kubelet[2945]: I0317 18:23:09.927408 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:23:09.928771 kubelet[2945]: I0317 18:23:09.928646 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:23:09.937773 kubelet[2945]: I0317 18:23:09.932674 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-kube-api-access-b7c7n" (OuterVolumeSpecName: "kube-api-access-b7c7n") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "kube-api-access-b7c7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:23:09.934251 systemd[1]: var-lib-kubelet-pods-bc3d6913\x2d802b\x2d4ad5\x2db0a4\x2de9edecf7a8bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db7c7n.mount: Deactivated successfully. Mar 17 18:23:09.941229 kubelet[2945]: I0317 18:23:09.939863 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:23:09.940320 systemd[1]: var-lib-kubelet-pods-bc3d6913\x2d802b\x2d4ad5\x2db0a4\x2de9edecf7a8bf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:23:10.012007 kubelet[2945]: I0317 18:23:10.011966 2945 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-ipsec-secrets\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.012249 kubelet[2945]: I0317 18:23:10.012226 2945 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-xtables-lock\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.012425 kubelet[2945]: I0317 18:23:10.012403 2945 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-b7c7n\" (UniqueName: \"kubernetes.io/projected/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-kube-api-access-b7c7n\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.012593 kubelet[2945]: I0317 18:23:10.012570 2945 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-etc-cni-netd\") on node \"ip-172-31-18-98\" DevicePath \"\"" 
Mar 17 18:23:10.012736 kubelet[2945]: I0317 18:23:10.012715 2945 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-run\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.012877 kubelet[2945]: I0317 18:23:10.012856 2945 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-hubble-tls\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.013008 kubelet[2945]: I0317 18:23:10.012988 2945 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-lib-modules\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.013149 kubelet[2945]: I0317 18:23:10.013129 2945 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-cgroup\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.013305 kubelet[2945]: I0317 18:23:10.013280 2945 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-host-proc-sys-net\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.013444 kubelet[2945]: I0317 18:23:10.013424 2945 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cni-path\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.013576 kubelet[2945]: I0317 18:23:10.013555 2945 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-bpf-maps\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.013708 kubelet[2945]: I0317 18:23:10.013687 2945 reconciler_common.go:289] "Volume 
detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-cilium-config-path\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.013852 kubelet[2945]: I0317 18:23:10.013831 2945 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-hostproc\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.013986 kubelet[2945]: I0317 18:23:10.013965 2945 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-host-proc-sys-kernel\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.114985 kubelet[2945]: I0317 18:23:10.114942 2945 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-clustermesh-secrets\") pod \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\" (UID: \"bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf\") " Mar 17 18:23:10.124267 systemd[1]: var-lib-kubelet-pods-bc3d6913\x2d802b\x2d4ad5\x2db0a4\x2de9edecf7a8bf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:23:10.126616 kubelet[2945]: I0317 18:23:10.126565 2945 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" (UID: "bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:23:10.215692 kubelet[2945]: I0317 18:23:10.215654 2945 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf-clustermesh-secrets\") on node \"ip-172-31-18-98\" DevicePath \"\"" Mar 17 18:23:10.250662 systemd[1]: Removed slice kubepods-burstable-podbc3d6913_802b_4ad5_b0a4_e9edecf7a8bf.slice. Mar 17 18:23:10.805564 kubelet[2945]: I0317 18:23:10.805508 2945 topology_manager.go:215] "Topology Admit Handler" podUID="dc53073d-679f-4758-bdc1-1d8105124a7a" podNamespace="kube-system" podName="cilium-q92w2" Mar 17 18:23:10.818731 systemd[1]: Created slice kubepods-burstable-poddc53073d_679f_4758_bdc1_1d8105124a7a.slice. Mar 17 18:23:10.819545 kubelet[2945]: I0317 18:23:10.819386 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc53073d-679f-4758-bdc1-1d8105124a7a-hostproc\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.819545 kubelet[2945]: I0317 18:23:10.819446 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc53073d-679f-4758-bdc1-1d8105124a7a-bpf-maps\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.819545 kubelet[2945]: I0317 18:23:10.819486 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc53073d-679f-4758-bdc1-1d8105124a7a-xtables-lock\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.819545 kubelet[2945]: I0317 18:23:10.819522 2945 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc53073d-679f-4758-bdc1-1d8105124a7a-clustermesh-secrets\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.819781 kubelet[2945]: I0317 18:23:10.819559 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc53073d-679f-4758-bdc1-1d8105124a7a-cilium-cgroup\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.819781 kubelet[2945]: I0317 18:23:10.819593 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc53073d-679f-4758-bdc1-1d8105124a7a-etc-cni-netd\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.819781 kubelet[2945]: I0317 18:23:10.819633 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc53073d-679f-4758-bdc1-1d8105124a7a-cilium-run\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.819781 kubelet[2945]: I0317 18:23:10.819669 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc53073d-679f-4758-bdc1-1d8105124a7a-cni-path\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.819781 kubelet[2945]: I0317 18:23:10.819708 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/dc53073d-679f-4758-bdc1-1d8105124a7a-hubble-tls\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.819781 kubelet[2945]: I0317 18:23:10.819744 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bb7z\" (UniqueName: \"kubernetes.io/projected/dc53073d-679f-4758-bdc1-1d8105124a7a-kube-api-access-7bb7z\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.820119 kubelet[2945]: I0317 18:23:10.819780 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc53073d-679f-4758-bdc1-1d8105124a7a-lib-modules\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.820119 kubelet[2945]: I0317 18:23:10.819814 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc53073d-679f-4758-bdc1-1d8105124a7a-host-proc-sys-kernel\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.820119 kubelet[2945]: I0317 18:23:10.819848 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc53073d-679f-4758-bdc1-1d8105124a7a-cilium-config-path\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.820119 kubelet[2945]: I0317 18:23:10.819885 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dc53073d-679f-4758-bdc1-1d8105124a7a-cilium-ipsec-secrets\") pod 
\"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:10.820119 kubelet[2945]: I0317 18:23:10.819919 2945 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc53073d-679f-4758-bdc1-1d8105124a7a-host-proc-sys-net\") pod \"cilium-q92w2\" (UID: \"dc53073d-679f-4758-bdc1-1d8105124a7a\") " pod="kube-system/cilium-q92w2" Mar 17 18:23:11.126787 env[1818]: time="2025-03-17T18:23:11.125975177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q92w2,Uid:dc53073d-679f-4758-bdc1-1d8105124a7a,Namespace:kube-system,Attempt:0,}" Mar 17 18:23:11.159699 env[1818]: time="2025-03-17T18:23:11.159517835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:23:11.159941 env[1818]: time="2025-03-17T18:23:11.159645588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:23:11.159941 env[1818]: time="2025-03-17T18:23:11.159673200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:23:11.160145 env[1818]: time="2025-03-17T18:23:11.159986728Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0831d7c205d4cc42d72afebb8cc7c9ec68c4968e1b93cedbaa76e90da61f6d7c pid=4736 runtime=io.containerd.runc.v2 Mar 17 18:23:11.181301 systemd[1]: Started cri-containerd-0831d7c205d4cc42d72afebb8cc7c9ec68c4968e1b93cedbaa76e90da61f6d7c.scope. 
Mar 17 18:23:11.238147 env[1818]: time="2025-03-17T18:23:11.238043118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q92w2,Uid:dc53073d-679f-4758-bdc1-1d8105124a7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0831d7c205d4cc42d72afebb8cc7c9ec68c4968e1b93cedbaa76e90da61f6d7c\"" Mar 17 18:23:11.239924 kubelet[2945]: E0317 18:23:11.238989 2945 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-cfq8l" podUID="be26682c-505b-4f49-aaa4-9ff781b122ea" Mar 17 18:23:11.244352 env[1818]: time="2025-03-17T18:23:11.244286782Z" level=info msg="CreateContainer within sandbox \"0831d7c205d4cc42d72afebb8cc7c9ec68c4968e1b93cedbaa76e90da61f6d7c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:23:11.272872 env[1818]: time="2025-03-17T18:23:11.272634466Z" level=info msg="CreateContainer within sandbox \"0831d7c205d4cc42d72afebb8cc7c9ec68c4968e1b93cedbaa76e90da61f6d7c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2ca5b0e7b1fc5f8d387f2a6b55a339712673c5e348834fef8212d22c4b6b490d\"" Mar 17 18:23:11.279108 env[1818]: time="2025-03-17T18:23:11.279043731Z" level=info msg="StartContainer for \"2ca5b0e7b1fc5f8d387f2a6b55a339712673c5e348834fef8212d22c4b6b490d\"" Mar 17 18:23:11.308730 systemd[1]: Started cri-containerd-2ca5b0e7b1fc5f8d387f2a6b55a339712673c5e348834fef8212d22c4b6b490d.scope. Mar 17 18:23:11.370949 env[1818]: time="2025-03-17T18:23:11.370884950Z" level=info msg="StartContainer for \"2ca5b0e7b1fc5f8d387f2a6b55a339712673c5e348834fef8212d22c4b6b490d\" returns successfully" Mar 17 18:23:11.387889 systemd[1]: cri-containerd-2ca5b0e7b1fc5f8d387f2a6b55a339712673c5e348834fef8212d22c4b6b490d.scope: Deactivated successfully. 
Mar 17 18:23:11.441097 env[1818]: time="2025-03-17T18:23:11.441021116Z" level=info msg="shim disconnected" id=2ca5b0e7b1fc5f8d387f2a6b55a339712673c5e348834fef8212d22c4b6b490d Mar 17 18:23:11.441530 env[1818]: time="2025-03-17T18:23:11.441099369Z" level=warning msg="cleaning up after shim disconnected" id=2ca5b0e7b1fc5f8d387f2a6b55a339712673c5e348834fef8212d22c4b6b490d namespace=k8s.io Mar 17 18:23:11.441530 env[1818]: time="2025-03-17T18:23:11.441122637Z" level=info msg="cleaning up dead shim" Mar 17 18:23:11.455089 env[1818]: time="2025-03-17T18:23:11.454983618Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:23:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4820 runtime=io.containerd.runc.v2\n" Mar 17 18:23:11.736356 env[1818]: time="2025-03-17T18:23:11.736294645Z" level=info msg="CreateContainer within sandbox \"0831d7c205d4cc42d72afebb8cc7c9ec68c4968e1b93cedbaa76e90da61f6d7c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:23:11.761304 env[1818]: time="2025-03-17T18:23:11.761222823Z" level=info msg="CreateContainer within sandbox \"0831d7c205d4cc42d72afebb8cc7c9ec68c4968e1b93cedbaa76e90da61f6d7c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9a1444d73bd77f76705f17e633972cb9361d63e9b1a451e6384cbfc24993ba6f\"" Mar 17 18:23:11.763800 env[1818]: time="2025-03-17T18:23:11.762377787Z" level=info msg="StartContainer for \"9a1444d73bd77f76705f17e633972cb9361d63e9b1a451e6384cbfc24993ba6f\"" Mar 17 18:23:11.822100 systemd[1]: Started cri-containerd-9a1444d73bd77f76705f17e633972cb9361d63e9b1a451e6384cbfc24993ba6f.scope. Mar 17 18:23:11.877860 env[1818]: time="2025-03-17T18:23:11.877796641Z" level=info msg="StartContainer for \"9a1444d73bd77f76705f17e633972cb9361d63e9b1a451e6384cbfc24993ba6f\" returns successfully" Mar 17 18:23:11.891682 systemd[1]: cri-containerd-9a1444d73bd77f76705f17e633972cb9361d63e9b1a451e6384cbfc24993ba6f.scope: Deactivated successfully. 
Mar 17 18:23:11.939023 env[1818]: time="2025-03-17T18:23:11.938961552Z" level=info msg="shim disconnected" id=9a1444d73bd77f76705f17e633972cb9361d63e9b1a451e6384cbfc24993ba6f Mar 17 18:23:11.939601 env[1818]: time="2025-03-17T18:23:11.939554178Z" level=warning msg="cleaning up after shim disconnected" id=9a1444d73bd77f76705f17e633972cb9361d63e9b1a451e6384cbfc24993ba6f namespace=k8s.io Mar 17 18:23:11.939763 env[1818]: time="2025-03-17T18:23:11.939733280Z" level=info msg="cleaning up dead shim" Mar 17 18:23:11.954432 env[1818]: time="2025-03-17T18:23:11.954375985Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:23:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4879 runtime=io.containerd.runc.v2\n" Mar 17 18:23:12.241547 kubelet[2945]: E0317 18:23:12.239650 2945 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-2tj84" podUID="a79872fb-9fc5-4b0e-9fff-f3302d65cd4f" Mar 17 18:23:12.244283 kubelet[2945]: I0317 18:23:12.244238 2945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf" path="/var/lib/kubelet/pods/bc3d6913-802b-4ad5-b0a4-e9edecf7a8bf/volumes" Mar 17 18:23:12.461722 kubelet[2945]: E0317 18:23:12.461661 2945 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:23:12.742015 env[1818]: time="2025-03-17T18:23:12.741945427Z" level=info msg="CreateContainer within sandbox \"0831d7c205d4cc42d72afebb8cc7c9ec68c4968e1b93cedbaa76e90da61f6d7c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:23:12.787407 env[1818]: time="2025-03-17T18:23:12.787317430Z" level=info msg="CreateContainer within sandbox 
\"0831d7c205d4cc42d72afebb8cc7c9ec68c4968e1b93cedbaa76e90da61f6d7c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"00ada468c914874b77f5b9928cf60d4fc4d0c88a62b9b4bdd87774ab706f41b9\"" Mar 17 18:23:12.788142 env[1818]: time="2025-03-17T18:23:12.788092277Z" level=info msg="StartContainer for \"00ada468c914874b77f5b9928cf60d4fc4d0c88a62b9b4bdd87774ab706f41b9\"" Mar 17 18:23:12.832316 systemd[1]: Started cri-containerd-00ada468c914874b77f5b9928cf60d4fc4d0c88a62b9b4bdd87774ab706f41b9.scope. Mar 17 18:23:12.909117 env[1818]: time="2025-03-17T18:23:12.909045076Z" level=info msg="StartContainer for \"00ada468c914874b77f5b9928cf60d4fc4d0c88a62b9b4bdd87774ab706f41b9\" returns successfully" Mar 17 18:23:12.909806 systemd[1]: cri-containerd-00ada468c914874b77f5b9928cf60d4fc4d0c88a62b9b4bdd87774ab706f41b9.scope: Deactivated successfully. Mar 17 18:23:12.955009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00ada468c914874b77f5b9928cf60d4fc4d0c88a62b9b4bdd87774ab706f41b9-rootfs.mount: Deactivated successfully. 
Mar 17 18:23:12.972015 env[1818]: time="2025-03-17T18:23:12.971950156Z" level=info msg="shim disconnected" id=00ada468c914874b77f5b9928cf60d4fc4d0c88a62b9b4bdd87774ab706f41b9 Mar 17 18:23:12.972544 env[1818]: time="2025-03-17T18:23:12.972509962Z" level=warning msg="cleaning up after shim disconnected" id=00ada468c914874b77f5b9928cf60d4fc4d0c88a62b9b4bdd87774ab706f41b9 namespace=k8s.io Mar 17 18:23:12.972738 env[1818]: time="2025-03-17T18:23:12.972697920Z" level=info msg="cleaning up dead shim" Mar 17 18:23:12.987706 env[1818]: time="2025-03-17T18:23:12.987646363Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:23:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4937 runtime=io.containerd.runc.v2\n" Mar 17 18:23:13.238977 kubelet[2945]: E0317 18:23:13.238911 2945 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-cfq8l" podUID="be26682c-505b-4f49-aaa4-9ff781b122ea" Mar 17 18:23:13.755044 env[1818]: time="2025-03-17T18:23:13.754981233Z" level=info msg="CreateContainer within sandbox \"0831d7c205d4cc42d72afebb8cc7c9ec68c4968e1b93cedbaa76e90da61f6d7c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:23:13.792761 env[1818]: time="2025-03-17T18:23:13.792695668Z" level=info msg="CreateContainer within sandbox \"0831d7c205d4cc42d72afebb8cc7c9ec68c4968e1b93cedbaa76e90da61f6d7c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52660c6bfb45d724d4fd5df48a7b50a9c3ac6f89289c64ffbdba07f09494f099\"" Mar 17 18:23:13.794256 env[1818]: time="2025-03-17T18:23:13.794201575Z" level=info msg="StartContainer for \"52660c6bfb45d724d4fd5df48a7b50a9c3ac6f89289c64ffbdba07f09494f099\"" Mar 17 18:23:13.851273 systemd[1]: Started 
cri-containerd-52660c6bfb45d724d4fd5df48a7b50a9c3ac6f89289c64ffbdba07f09494f099.scope. Mar 17 18:23:13.922224 systemd[1]: cri-containerd-52660c6bfb45d724d4fd5df48a7b50a9c3ac6f89289c64ffbdba07f09494f099.scope: Deactivated successfully. Mar 17 18:23:13.924601 env[1818]: time="2025-03-17T18:23:13.924522056Z" level=info msg="StartContainer for \"52660c6bfb45d724d4fd5df48a7b50a9c3ac6f89289c64ffbdba07f09494f099\" returns successfully" Mar 17 18:23:13.952678 systemd[1]: run-containerd-runc-k8s.io-52660c6bfb45d724d4fd5df48a7b50a9c3ac6f89289c64ffbdba07f09494f099-runc.DbF95B.mount: Deactivated successfully. Mar 17 18:23:13.968482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52660c6bfb45d724d4fd5df48a7b50a9c3ac6f89289c64ffbdba07f09494f099-rootfs.mount: Deactivated successfully. Mar 17 18:23:13.982461 env[1818]: time="2025-03-17T18:23:13.982396850Z" level=info msg="shim disconnected" id=52660c6bfb45d724d4fd5df48a7b50a9c3ac6f89289c64ffbdba07f09494f099 Mar 17 18:23:13.982894 env[1818]: time="2025-03-17T18:23:13.982859778Z" level=warning msg="cleaning up after shim disconnected" id=52660c6bfb45d724d4fd5df48a7b50a9c3ac6f89289c64ffbdba07f09494f099 namespace=k8s.io Mar 17 18:23:13.983028 env[1818]: time="2025-03-17T18:23:13.982999028Z" level=info msg="cleaning up dead shim" Mar 17 18:23:13.997331 env[1818]: time="2025-03-17T18:23:13.997275559Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:23:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4993 runtime=io.containerd.runc.v2\n" Mar 17 18:23:14.240656 kubelet[2945]: E0317 18:23:14.239462 2945 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-2tj84" podUID="a79872fb-9fc5-4b0e-9fff-f3302d65cd4f" Mar 17 18:23:14.765192 env[1818]: time="2025-03-17T18:23:14.761430505Z" level=info 
msg="CreateContainer within sandbox \"0831d7c205d4cc42d72afebb8cc7c9ec68c4968e1b93cedbaa76e90da61f6d7c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:23:14.798903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871176786.mount: Deactivated successfully. Mar 17 18:23:14.814668 env[1818]: time="2025-03-17T18:23:14.814605796Z" level=info msg="CreateContainer within sandbox \"0831d7c205d4cc42d72afebb8cc7c9ec68c4968e1b93cedbaa76e90da61f6d7c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"78d4511a75546a8232fabbdf8936e9d52ea424dce815519c5e8dfe1f69a55295\"" Mar 17 18:23:14.815817 env[1818]: time="2025-03-17T18:23:14.815766387Z" level=info msg="StartContainer for \"78d4511a75546a8232fabbdf8936e9d52ea424dce815519c5e8dfe1f69a55295\"" Mar 17 18:23:14.846526 systemd[1]: Started cri-containerd-78d4511a75546a8232fabbdf8936e9d52ea424dce815519c5e8dfe1f69a55295.scope. Mar 17 18:23:14.921503 env[1818]: time="2025-03-17T18:23:14.921413150Z" level=info msg="StartContainer for \"78d4511a75546a8232fabbdf8936e9d52ea424dce815519c5e8dfe1f69a55295\" returns successfully" Mar 17 18:23:14.972003 systemd[1]: run-containerd-runc-k8s.io-78d4511a75546a8232fabbdf8936e9d52ea424dce815519c5e8dfe1f69a55295-runc.HjsfHA.mount: Deactivated successfully. 
Mar 17 18:23:15.070169 kubelet[2945]: I0317 18:23:15.069993 2945 setters.go:580] "Node became not ready" node="ip-172-31-18-98" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:23:15Z","lastTransitionTime":"2025-03-17T18:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 18:23:15.238976 kubelet[2945]: E0317 18:23:15.238887 2945 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-cfq8l" podUID="be26682c-505b-4f49-aaa4-9ff781b122ea" Mar 17 18:23:15.725144 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Mar 17 18:23:16.239710 kubelet[2945]: E0317 18:23:16.239641 2945 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-2tj84" podUID="a79872fb-9fc5-4b0e-9fff-f3302d65cd4f" Mar 17 18:23:17.239390 kubelet[2945]: E0317 18:23:17.239313 2945 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-cfq8l" podUID="be26682c-505b-4f49-aaa4-9ff781b122ea" Mar 17 18:23:17.909981 systemd[1]: run-containerd-runc-k8s.io-78d4511a75546a8232fabbdf8936e9d52ea424dce815519c5e8dfe1f69a55295-runc.9Ubwc4.mount: Deactivated successfully. Mar 17 18:23:19.800828 (udev-worker)[5542]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 18:23:19.805699 systemd-networkd[1534]: lxc_health: Link UP Mar 17 18:23:19.811583 (udev-worker)[5543]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:23:19.836900 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:23:19.835957 systemd-networkd[1534]: lxc_health: Gained carrier Mar 17 18:23:20.211283 systemd[1]: run-containerd-runc-k8s.io-78d4511a75546a8232fabbdf8936e9d52ea424dce815519c5e8dfe1f69a55295-runc.keZaO5.mount: Deactivated successfully. Mar 17 18:23:21.185564 kubelet[2945]: I0317 18:23:21.185473 2945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q92w2" podStartSLOduration=11.185450088 podStartE2EDuration="11.185450088s" podCreationTimestamp="2025-03-17 18:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:23:15.79197581 +0000 UTC m=+133.828414482" watchObservedRunningTime="2025-03-17 18:23:21.185450088 +0000 UTC m=+139.221888772" Mar 17 18:23:21.205932 systemd-networkd[1534]: lxc_health: Gained IPv6LL Mar 17 18:23:22.558473 systemd[1]: run-containerd-runc-k8s.io-78d4511a75546a8232fabbdf8936e9d52ea424dce815519c5e8dfe1f69a55295-runc.TYp43T.mount: Deactivated successfully. Mar 17 18:23:27.341645 sshd[4709]: pam_unix(sshd:session): session closed for user core Mar 17 18:23:27.349137 systemd-logind[1804]: Session 28 logged out. Waiting for processes to exit. Mar 17 18:23:27.349574 systemd[1]: sshd@27-172.31.18.98:22-139.178.89.65:34986.service: Deactivated successfully. Mar 17 18:23:27.350850 systemd[1]: session-28.scope: Deactivated successfully. Mar 17 18:23:27.353986 systemd-logind[1804]: Removed session 28. Mar 17 18:23:40.956607 systemd[1]: cri-containerd-9fba03af6d19e44a6806cbf30539dae0b6acfb92b0aa7ab4fbfd58a16d7e9189.scope: Deactivated successfully. 
Mar 17 18:23:40.957189 systemd[1]: cri-containerd-9fba03af6d19e44a6806cbf30539dae0b6acfb92b0aa7ab4fbfd58a16d7e9189.scope: Consumed 4.993s CPU time. Mar 17 18:23:40.994575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fba03af6d19e44a6806cbf30539dae0b6acfb92b0aa7ab4fbfd58a16d7e9189-rootfs.mount: Deactivated successfully. Mar 17 18:23:41.014514 env[1818]: time="2025-03-17T18:23:41.014430742Z" level=info msg="shim disconnected" id=9fba03af6d19e44a6806cbf30539dae0b6acfb92b0aa7ab4fbfd58a16d7e9189 Mar 17 18:23:41.014514 env[1818]: time="2025-03-17T18:23:41.014505455Z" level=warning msg="cleaning up after shim disconnected" id=9fba03af6d19e44a6806cbf30539dae0b6acfb92b0aa7ab4fbfd58a16d7e9189 namespace=k8s.io Mar 17 18:23:41.015303 env[1818]: time="2025-03-17T18:23:41.014533595Z" level=info msg="cleaning up dead shim" Mar 17 18:23:41.028917 env[1818]: time="2025-03-17T18:23:41.028843266Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:23:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5681 runtime=io.containerd.runc.v2\n" Mar 17 18:23:41.824389 kubelet[2945]: I0317 18:23:41.824344 2945 scope.go:117] "RemoveContainer" containerID="9fba03af6d19e44a6806cbf30539dae0b6acfb92b0aa7ab4fbfd58a16d7e9189" Mar 17 18:23:41.828994 env[1818]: time="2025-03-17T18:23:41.828914817Z" level=info msg="CreateContainer within sandbox \"c8da8ee8b03eb9e6472266d9163643a9b6d339bf0d9b28e195ac9db38b814d54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 17 18:23:41.854698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92926050.mount: Deactivated successfully. 
Mar 17 18:23:41.871006 env[1818]: time="2025-03-17T18:23:41.870937438Z" level=info msg="CreateContainer within sandbox \"c8da8ee8b03eb9e6472266d9163643a9b6d339bf0d9b28e195ac9db38b814d54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"fe9417df6224e5ac20eb18ded11af596dc9463fee55878fbe005f9ebf64aba55\"" Mar 17 18:23:41.871932 env[1818]: time="2025-03-17T18:23:41.871887018Z" level=info msg="StartContainer for \"fe9417df6224e5ac20eb18ded11af596dc9463fee55878fbe005f9ebf64aba55\"" Mar 17 18:23:41.907791 systemd[1]: Started cri-containerd-fe9417df6224e5ac20eb18ded11af596dc9463fee55878fbe005f9ebf64aba55.scope. Mar 17 18:23:41.993861 env[1818]: time="2025-03-17T18:23:41.993795595Z" level=info msg="StartContainer for \"fe9417df6224e5ac20eb18ded11af596dc9463fee55878fbe005f9ebf64aba55\" returns successfully" Mar 17 18:23:44.536946 kubelet[2945]: E0317 18:23:44.536851 2945 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-98?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 17 18:23:45.731613 systemd[1]: cri-containerd-03e532b76a143d5d35f01234dc268dcc63602df5dec83b61fa34e80796c6f2a6.scope: Deactivated successfully. Mar 17 18:23:45.732137 systemd[1]: cri-containerd-03e532b76a143d5d35f01234dc268dcc63602df5dec83b61fa34e80796c6f2a6.scope: Consumed 4.276s CPU time. Mar 17 18:23:45.771751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03e532b76a143d5d35f01234dc268dcc63602df5dec83b61fa34e80796c6f2a6-rootfs.mount: Deactivated successfully. 
Mar 17 18:23:45.785904 env[1818]: time="2025-03-17T18:23:45.785841297Z" level=info msg="shim disconnected" id=03e532b76a143d5d35f01234dc268dcc63602df5dec83b61fa34e80796c6f2a6 Mar 17 18:23:45.786865 env[1818]: time="2025-03-17T18:23:45.786816197Z" level=warning msg="cleaning up after shim disconnected" id=03e532b76a143d5d35f01234dc268dcc63602df5dec83b61fa34e80796c6f2a6 namespace=k8s.io Mar 17 18:23:45.787015 env[1818]: time="2025-03-17T18:23:45.786987331Z" level=info msg="cleaning up dead shim" Mar 17 18:23:45.801561 env[1818]: time="2025-03-17T18:23:45.801494078Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:23:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5740 runtime=io.containerd.runc.v2\n" Mar 17 18:23:45.839778 kubelet[2945]: I0317 18:23:45.839729 2945 scope.go:117] "RemoveContainer" containerID="03e532b76a143d5d35f01234dc268dcc63602df5dec83b61fa34e80796c6f2a6" Mar 17 18:23:45.844036 env[1818]: time="2025-03-17T18:23:45.843968234Z" level=info msg="CreateContainer within sandbox \"3a5bd873d521580063b739efb689fa8ce43e3f324a068c13ca53cb7d7b062ae0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 17 18:23:45.870067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3224836648.mount: Deactivated successfully. Mar 17 18:23:45.883044 env[1818]: time="2025-03-17T18:23:45.882983493Z" level=info msg="CreateContainer within sandbox \"3a5bd873d521580063b739efb689fa8ce43e3f324a068c13ca53cb7d7b062ae0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"69d23e7e420bf17e7a48c344cb89d0ab18e58772ec3568ba887b2890f7fbbd34\"" Mar 17 18:23:45.884062 env[1818]: time="2025-03-17T18:23:45.884020746Z" level=info msg="StartContainer for \"69d23e7e420bf17e7a48c344cb89d0ab18e58772ec3568ba887b2890f7fbbd34\"" Mar 17 18:23:45.920433 systemd[1]: Started cri-containerd-69d23e7e420bf17e7a48c344cb89d0ab18e58772ec3568ba887b2890f7fbbd34.scope. 
Mar 17 18:23:46.000414 env[1818]: time="2025-03-17T18:23:46.000229105Z" level=info msg="StartContainer for \"69d23e7e420bf17e7a48c344cb89d0ab18e58772ec3568ba887b2890f7fbbd34\" returns successfully" Mar 17 18:23:54.538144 kubelet[2945]: E0317 18:23:54.538054 2945 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-98?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 17 18:24:02.157086 env[1818]: time="2025-03-17T18:24:02.157019923Z" level=info msg="StopPodSandbox for \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\"" Mar 17 18:24:02.157687 env[1818]: time="2025-03-17T18:24:02.157190409Z" level=info msg="TearDown network for sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" successfully" Mar 17 18:24:02.157687 env[1818]: time="2025-03-17T18:24:02.157251321Z" level=info msg="StopPodSandbox for \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" returns successfully" Mar 17 18:24:02.160450 env[1818]: time="2025-03-17T18:24:02.158602425Z" level=info msg="RemovePodSandbox for \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\"" Mar 17 18:24:02.160450 env[1818]: time="2025-03-17T18:24:02.158683126Z" level=info msg="Forcibly stopping sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\"" Mar 17 18:24:02.160450 env[1818]: time="2025-03-17T18:24:02.158866715Z" level=info msg="TearDown network for sandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" successfully" Mar 17 18:24:02.168005 env[1818]: time="2025-03-17T18:24:02.167940556Z" level=info msg="RemovePodSandbox \"0053c3b40234fc2e89b957af72f803cd2f2a39acc49ff10b46bbcef8d7a9bd42\" returns successfully" Mar 17 18:24:02.168981 env[1818]: time="2025-03-17T18:24:02.168674986Z" level=info msg="StopPodSandbox for 
\"d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53\"" Mar 17 18:24:02.168981 env[1818]: time="2025-03-17T18:24:02.168805679Z" level=info msg="TearDown network for sandbox \"d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53\" successfully" Mar 17 18:24:02.168981 env[1818]: time="2025-03-17T18:24:02.168860808Z" level=info msg="StopPodSandbox for \"d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53\" returns successfully" Mar 17 18:24:02.171179 env[1818]: time="2025-03-17T18:24:02.169623858Z" level=info msg="RemovePodSandbox for \"d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53\"" Mar 17 18:24:02.171179 env[1818]: time="2025-03-17T18:24:02.169670503Z" level=info msg="Forcibly stopping sandbox \"d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53\"" Mar 17 18:24:02.171179 env[1818]: time="2025-03-17T18:24:02.169790600Z" level=info msg="TearDown network for sandbox \"d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53\" successfully" Mar 17 18:24:02.176070 env[1818]: time="2025-03-17T18:24:02.176015532Z" level=info msg="RemovePodSandbox \"d01505322a6b8b1eab4cc564253eab6096cca57b35bfb0849899fa2b07aada53\" returns successfully"