Feb 9 19:16:41.985026 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 9 19:16:41.985065 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 19:16:41.985088 kernel: efi: EFI v2.70 by EDK II
Feb 9 19:16:41.985103 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 9 19:16:41.985117 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:16:41.985130 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 9 19:16:41.985146 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 9 19:16:41.985161 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 9 19:16:41.985174 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 9 19:16:41.985188 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 9 19:16:41.985206 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 9 19:16:41.985220 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 9 19:16:41.985233 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 9 19:16:41.985247 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 9 19:16:41.985264 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 9 19:16:41.985283 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 9 19:16:41.985297 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 9 19:16:41.985312 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 9 19:16:41.985326 kernel: printk: bootconsole [uart0] enabled
Feb 9 19:16:41.985340 kernel: NUMA: Failed to initialise from firmware
Feb 9 19:16:41.985355 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 19:16:41.985370 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 9 19:16:41.985384 kernel: Zone ranges:
Feb 9 19:16:41.985398 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 9 19:16:41.985412 kernel: DMA32 empty
Feb 9 19:16:41.985427 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 9 19:16:41.985445 kernel: Movable zone start for each node
Feb 9 19:16:41.985460 kernel: Early memory node ranges
Feb 9 19:16:41.985474 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Feb 9 19:16:41.985488 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 9 19:16:41.985503 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 9 19:16:41.985517 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 9 19:16:41.985531 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 9 19:16:41.985545 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 9 19:16:41.985560 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 19:16:41.985574 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 9 19:16:41.985588 kernel: psci: probing for conduit method from ACPI.
Feb 9 19:16:41.985602 kernel: psci: PSCIv1.0 detected in firmware.
Feb 9 19:16:41.985620 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 19:16:41.985635 kernel: psci: Trusted OS migration not required
Feb 9 19:16:41.985655 kernel: psci: SMC Calling Convention v1.1
Feb 9 19:16:41.985671 kernel: ACPI: SRAT not present
Feb 9 19:16:41.985687 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 19:16:41.985706 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 19:16:41.985722 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 19:16:41.985737 kernel: Detected PIPT I-cache on CPU0
Feb 9 19:16:41.985752 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 19:16:41.985767 kernel: CPU features: detected: Spectre-v2
Feb 9 19:16:41.985809 kernel: CPU features: detected: Spectre-v3a
Feb 9 19:16:41.985829 kernel: CPU features: detected: Spectre-BHB
Feb 9 19:16:41.985845 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 19:16:41.985861 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 19:16:41.985877 kernel: CPU features: detected: ARM erratum 1742098
Feb 9 19:16:41.985892 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 9 19:16:41.985912 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 9 19:16:41.985928 kernel: Policy zone: Normal
Feb 9 19:16:41.985946 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 19:16:41.985962 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:16:41.985977 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:16:41.985993 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:16:41.986008 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:16:41.986023 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 9 19:16:41.986040 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 9 19:16:41.986055 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:16:41.986075 kernel: trace event string verifier disabled
Feb 9 19:16:41.986090 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 19:16:41.986106 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:16:41.986123 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:16:41.986138 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 19:16:41.986154 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:16:41.986169 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:16:41.986184 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:16:41.986200 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 19:16:41.986215 kernel: GICv3: 96 SPIs implemented
Feb 9 19:16:41.986229 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 19:16:41.986244 kernel: GICv3: Distributor has no Range Selector support
Feb 9 19:16:41.986264 kernel: Root IRQ handler: gic_handle_irq
Feb 9 19:16:41.986279 kernel: GICv3: 16 PPIs implemented
Feb 9 19:16:41.986294 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 9 19:16:41.986309 kernel: ACPI: SRAT not present
Feb 9 19:16:41.986323 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 9 19:16:41.986339 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 19:16:41.986354 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 19:16:41.986370 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 9 19:16:41.986384 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 9 19:16:41.986400 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 9 19:16:41.986414 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 9 19:16:41.986434 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 9 19:16:41.986450 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 9 19:16:41.986465 kernel: Console: colour dummy device 80x25
Feb 9 19:16:41.986481 kernel: printk: console [tty1] enabled
Feb 9 19:16:41.986496 kernel: ACPI: Core revision 20210730
Feb 9 19:16:41.986512 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 9 19:16:41.986528 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:16:41.986544 kernel: LSM: Security Framework initializing
Feb 9 19:16:41.986559 kernel: SELinux: Initializing.
Feb 9 19:16:41.986575 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:16:41.986595 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:16:41.986611 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:16:41.986626 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 9 19:16:41.986642 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 9 19:16:41.986658 kernel: Remapping and enabling EFI services.
Feb 9 19:16:41.986673 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:16:41.986689 kernel: Detected PIPT I-cache on CPU1
Feb 9 19:16:41.986705 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 9 19:16:41.986720 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 9 19:16:41.986740 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 9 19:16:41.986755 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:16:41.986771 kernel: SMP: Total of 2 processors activated.
Feb 9 19:16:41.986804 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 19:16:41.986823 kernel: CPU features: detected: 32-bit EL1 Support
Feb 9 19:16:41.986839 kernel: CPU features: detected: CRC32 instructions
Feb 9 19:16:41.986855 kernel: CPU: All CPU(s) started at EL1
Feb 9 19:16:41.986871 kernel: alternatives: patching kernel code
Feb 9 19:16:41.986886 kernel: devtmpfs: initialized
Feb 9 19:16:41.986906 kernel: KASLR disabled due to lack of seed
Feb 9 19:16:41.986922 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:16:41.986938 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:16:41.986965 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:16:41.986985 kernel: SMBIOS 3.0.0 present.
Feb 9 19:16:41.987002 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 9 19:16:41.987018 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:16:41.987034 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 19:16:41.987050 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 19:16:41.987066 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 19:16:41.987082 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:16:41.987099 kernel: audit: type=2000 audit(0.250:1): state=initialized audit_enabled=0 res=1
Feb 9 19:16:41.987119 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:16:41.987136 kernel: cpuidle: using governor menu
Feb 9 19:16:41.987152 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 19:16:41.987168 kernel: ASID allocator initialised with 32768 entries
Feb 9 19:16:41.987184 kernel: ACPI: bus type PCI registered
Feb 9 19:16:41.987205 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:16:41.987221 kernel: Serial: AMBA PL011 UART driver
Feb 9 19:16:41.987237 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:16:41.987253 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 19:16:41.987269 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:16:41.987285 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 19:16:41.987301 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:16:41.987317 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 19:16:41.987333 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:16:41.987354 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:16:41.987370 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:16:41.987386 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:16:41.987402 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:16:41.987418 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:16:41.987434 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:16:41.987450 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:16:41.987466 kernel: ACPI: Interpreter enabled
Feb 9 19:16:41.987482 kernel: ACPI: Using GIC for interrupt routing
Feb 9 19:16:41.987502 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 19:16:41.987519 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 9 19:16:41.987827 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:16:41.988038 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 19:16:41.988260 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 19:16:41.988461 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 9 19:16:41.988658 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 9 19:16:41.988687 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 9 19:16:41.988705 kernel: acpiphp: Slot [1] registered
Feb 9 19:16:41.988721 kernel: acpiphp: Slot [2] registered
Feb 9 19:16:41.988737 kernel: acpiphp: Slot [3] registered
Feb 9 19:16:41.988753 kernel: acpiphp: Slot [4] registered
Feb 9 19:16:41.988769 kernel: acpiphp: Slot [5] registered
Feb 9 19:16:41.988807 kernel: acpiphp: Slot [6] registered
Feb 9 19:16:41.988826 kernel: acpiphp: Slot [7] registered
Feb 9 19:16:41.988843 kernel: acpiphp: Slot [8] registered
Feb 9 19:16:41.988864 kernel: acpiphp: Slot [9] registered
Feb 9 19:16:41.988880 kernel: acpiphp: Slot [10] registered
Feb 9 19:16:41.988897 kernel: acpiphp: Slot [11] registered
Feb 9 19:16:41.988913 kernel: acpiphp: Slot [12] registered
Feb 9 19:16:41.988929 kernel: acpiphp: Slot [13] registered
Feb 9 19:16:41.988945 kernel: acpiphp: Slot [14] registered
Feb 9 19:16:41.988961 kernel: acpiphp: Slot [15] registered
Feb 9 19:16:41.988977 kernel: acpiphp: Slot [16] registered
Feb 9 19:16:41.988993 kernel: acpiphp: Slot [17] registered
Feb 9 19:16:41.989009 kernel: acpiphp: Slot [18] registered
Feb 9 19:16:41.989030 kernel: acpiphp: Slot [19] registered
Feb 9 19:16:41.989046 kernel: acpiphp: Slot [20] registered
Feb 9 19:16:41.989061 kernel: acpiphp: Slot [21] registered
Feb 9 19:16:41.989077 kernel: acpiphp: Slot [22] registered
Feb 9 19:16:41.989093 kernel: acpiphp: Slot [23] registered
Feb 9 19:16:41.989109 kernel: acpiphp: Slot [24] registered
Feb 9 19:16:41.989125 kernel: acpiphp: Slot [25] registered
Feb 9 19:16:41.989141 kernel: acpiphp: Slot [26] registered
Feb 9 19:16:41.989157 kernel: acpiphp: Slot [27] registered
Feb 9 19:16:41.989177 kernel: acpiphp: Slot [28] registered
Feb 9 19:16:41.989194 kernel: acpiphp: Slot [29] registered
Feb 9 19:16:41.989210 kernel: acpiphp: Slot [30] registered
Feb 9 19:16:41.989226 kernel: acpiphp: Slot [31] registered
Feb 9 19:16:41.989242 kernel: PCI host bridge to bus 0000:00
Feb 9 19:16:41.989444 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 9 19:16:41.989625 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 19:16:41.991855 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 9 19:16:41.992068 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 9 19:16:41.992317 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 9 19:16:41.992547 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 9 19:16:41.992752 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 9 19:16:41.993026 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 9 19:16:41.993230 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 9 19:16:41.993438 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 19:16:41.993652 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 9 19:16:41.993876 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 9 19:16:41.994084 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 9 19:16:41.994282 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 9 19:16:41.994482 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 19:16:41.994680 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 9 19:16:41.994908 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 9 19:16:41.995111 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 9 19:16:41.995309 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 9 19:16:41.995508 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 9 19:16:41.995694 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 9 19:16:42.006056 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 19:16:42.006265 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 9 19:16:42.006299 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 19:16:42.006317 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 19:16:42.006334 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 19:16:42.006350 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 19:16:42.006367 kernel: iommu: Default domain type: Translated
Feb 9 19:16:42.006383 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 19:16:42.006399 kernel: vgaarb: loaded
Feb 9 19:16:42.006415 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:16:42.006432 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:16:42.006452 kernel: PTP clock support registered
Feb 9 19:16:42.006469 kernel: Registered efivars operations
Feb 9 19:16:42.006485 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 19:16:42.006502 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:16:42.006518 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:16:42.006534 kernel: pnp: PnP ACPI init
Feb 9 19:16:42.006755 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 9 19:16:42.007832 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 19:16:42.007867 kernel: NET: Registered PF_INET protocol family
Feb 9 19:16:42.007894 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:16:42.007911 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 19:16:42.007928 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:16:42.007945 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:16:42.007962 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 19:16:42.007978 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 19:16:42.007995 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:16:42.008011 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:16:42.008028 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:16:42.008048 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:16:42.008065 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 9 19:16:42.008081 kernel: kvm [1]: HYP mode not available
Feb 9 19:16:42.008097 kernel: Initialise system trusted keyrings
Feb 9 19:16:42.008114 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 19:16:42.008131 kernel: Key type asymmetric registered
Feb 9 19:16:42.008164 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:16:42.008185 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:16:42.008202 kernel: io scheduler mq-deadline registered
Feb 9 19:16:42.008224 kernel: io scheduler kyber registered
Feb 9 19:16:42.008241 kernel: io scheduler bfq registered
Feb 9 19:16:42.008478 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 9 19:16:42.008504 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 19:16:42.008521 kernel: ACPI: button: Power Button [PWRB]
Feb 9 19:16:42.008538 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:16:42.008555 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 9 19:16:42.008762 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 9 19:16:42.008808 kernel: printk: console [ttyS0] disabled
Feb 9 19:16:42.008828 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 9 19:16:42.014091 kernel: printk: console [ttyS0] enabled
Feb 9 19:16:42.014109 kernel: printk: bootconsole [uart0] disabled
Feb 9 19:16:42.014126 kernel: thunder_xcv, ver 1.0
Feb 9 19:16:42.014142 kernel: thunder_bgx, ver 1.0
Feb 9 19:16:42.014159 kernel: nicpf, ver 1.0
Feb 9 19:16:42.014175 kernel: nicvf, ver 1.0
Feb 9 19:16:42.014449 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 19:16:42.014695 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T19:16:41 UTC (1707506201)
Feb 9 19:16:42.014721 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 19:16:42.014737 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:16:42.014754 kernel: Segment Routing with IPv6
Feb 9 19:16:42.014770 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:16:42.014804 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:16:42.014823 kernel: Key type dns_resolver registered
Feb 9 19:16:42.014840 kernel: registered taskstats version 1
Feb 9 19:16:42.014862 kernel: Loading compiled-in X.509 certificates
Feb 9 19:16:42.014880 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 19:16:42.014896 kernel: Key type .fscrypt registered
Feb 9 19:16:42.014912 kernel: Key type fscrypt-provisioning registered
Feb 9 19:16:42.014928 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:16:42.014945 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:16:42.014961 kernel: ima: No architecture policies found
Feb 9 19:16:42.014977 kernel: Freeing unused kernel memory: 34688K
Feb 9 19:16:42.014993 kernel: Run /init as init process
Feb 9 19:16:42.015013 kernel: with arguments:
Feb 9 19:16:42.015030 kernel: /init
Feb 9 19:16:42.015046 kernel: with environment:
Feb 9 19:16:42.015061 kernel: HOME=/
Feb 9 19:16:42.015078 kernel: TERM=linux
Feb 9 19:16:42.015093 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:16:42.015116 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:16:42.015137 systemd[1]: Detected virtualization amazon.
Feb 9 19:16:42.015160 systemd[1]: Detected architecture arm64.
Feb 9 19:16:42.015177 systemd[1]: Running in initrd.
Feb 9 19:16:42.015195 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:16:42.015212 systemd[1]: Hostname set to .
Feb 9 19:16:42.015230 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:16:42.015248 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:16:42.015265 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:16:42.015283 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:16:42.015304 systemd[1]: Reached target paths.target.
Feb 9 19:16:42.015321 systemd[1]: Reached target slices.target.
Feb 9 19:16:42.015339 systemd[1]: Reached target swap.target.
Feb 9 19:16:42.015356 systemd[1]: Reached target timers.target.
Feb 9 19:16:42.015374 systemd[1]: Listening on iscsid.socket.
Feb 9 19:16:42.015392 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:16:42.015409 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:16:42.015427 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:16:42.015449 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:16:42.015467 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:16:42.015485 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:16:42.015502 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:16:42.015519 systemd[1]: Reached target sockets.target.
Feb 9 19:16:42.015537 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:16:42.015555 systemd[1]: Finished network-cleanup.service.
Feb 9 19:16:42.015572 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:16:42.015590 systemd[1]: Starting systemd-journald.service...
Feb 9 19:16:42.015612 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:16:42.015630 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:16:42.015647 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:16:42.015664 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:16:42.015683 kernel: audit: type=1130 audit(1707506201.971:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:42.015701 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:16:42.015718 kernel: audit: type=1130 audit(1707506201.994:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:42.015736 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:16:42.015757 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:16:42.015792 systemd-journald[308]: Journal started
Feb 9 19:16:42.015885 systemd-journald[308]: Runtime Journal (/run/log/journal/ec2ce6a200ad5fe370057b02c73e0ebb) is 8.0M, max 75.4M, 67.4M free.
Feb 9 19:16:41.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:41.954132 systemd-modules-load[309]: Inserted module 'overlay'
Feb 9 19:16:42.032106 kernel: audit: type=1130 audit(1707506202.017:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:42.032143 systemd[1]: Started systemd-journald.service.
Feb 9 19:16:42.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:42.034992 systemd-modules-load[309]: Inserted module 'br_netfilter'
Feb 9 19:16:42.050981 kernel: Bridge firewalling registered
Feb 9 19:16:42.051017 kernel: audit: type=1130 audit(1707506202.037:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:42.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:42.053856 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:16:42.061292 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:16:42.077836 kernel: SCSI subsystem initialized
Feb 9 19:16:42.092936 systemd-resolved[310]: Positive Trust Anchors:
Feb 9 19:16:42.092963 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:16:42.093019 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:16:42.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:42.095502 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:16:42.110133 kernel: audit: type=1130 audit(1707506202.091:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:42.110178 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:16:42.112626 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:16:42.116490 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:16:42.156625 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:16:42.176535 kernel: audit: type=1130 audit(1707506202.159:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:42.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:42.171358 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:16:42.184649 systemd-modules-load[309]: Inserted module 'dm_multipath'
Feb 9 19:16:42.191143 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:16:42.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:42.207837 kernel: audit: type=1130 audit(1707506202.194:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:42.207425 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:16:42.231864 dracut-cmdline[328]: dracut-dracut-053
Feb 9 19:16:42.236383 dracut-cmdline[328]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 19:16:42.253857 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:16:42.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:42.266516 kernel: audit: type=1130 audit(1707506202.254:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:42.359819 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:16:42.373817 kernel: iscsi: registered transport (tcp) Feb 9 19:16:42.397677 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:16:42.397758 kernel: QLogic iSCSI HBA Driver Feb 9 19:16:42.587329 systemd-resolved[310]: Defaulting to hostname 'linux'. Feb 9 19:16:42.589951 kernel: random: crng init done Feb 9 19:16:42.591101 systemd[1]: Started systemd-resolved.service. Feb 9 19:16:42.615674 kernel: audit: type=1130 audit(1707506202.589:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:42.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:42.591650 systemd[1]: Reached target nss-lookup.target. Feb 9 19:16:42.620503 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:16:42.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:42.624256 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 19:16:42.690826 kernel: raid6: neonx8 gen() 6413 MB/s Feb 9 19:16:42.708814 kernel: raid6: neonx8 xor() 4738 MB/s Feb 9 19:16:42.726823 kernel: raid6: neonx4 gen() 6588 MB/s Feb 9 19:16:42.744815 kernel: raid6: neonx4 xor() 4949 MB/s Feb 9 19:16:42.762812 kernel: raid6: neonx2 gen() 5809 MB/s Feb 9 19:16:42.780814 kernel: raid6: neonx2 xor() 4546 MB/s Feb 9 19:16:42.798817 kernel: raid6: neonx1 gen() 4507 MB/s Feb 9 19:16:42.816815 kernel: raid6: neonx1 xor() 3693 MB/s Feb 9 19:16:42.834814 kernel: raid6: int64x8 gen() 3435 MB/s Feb 9 19:16:42.852815 kernel: raid6: int64x8 xor() 2090 MB/s Feb 9 19:16:42.870816 kernel: raid6: int64x4 gen() 3856 MB/s Feb 9 19:16:42.888814 kernel: raid6: int64x4 xor() 2201 MB/s Feb 9 19:16:42.906815 kernel: raid6: int64x2 gen() 3619 MB/s Feb 9 19:16:42.924813 kernel: raid6: int64x2 xor() 1954 MB/s Feb 9 19:16:42.942814 kernel: raid6: int64x1 gen() 2775 MB/s Feb 9 19:16:42.962285 kernel: raid6: int64x1 xor() 1454 MB/s Feb 9 19:16:42.962317 kernel: raid6: using algorithm neonx4 gen() 6588 MB/s Feb 9 19:16:42.962341 kernel: raid6: .... xor() 4949 MB/s, rmw enabled Feb 9 19:16:42.964072 kernel: raid6: using neon recovery algorithm Feb 9 19:16:42.982820 kernel: xor: measuring software checksum speed Feb 9 19:16:42.987307 kernel: 8regs : 9354 MB/sec Feb 9 19:16:42.987339 kernel: 32regs : 11110 MB/sec Feb 9 19:16:42.991787 kernel: arm64_neon : 9590 MB/sec Feb 9 19:16:42.991828 kernel: xor: using function: 32regs (11110 MB/sec) Feb 9 19:16:43.082826 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Feb 9 19:16:43.099833 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:16:43.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:16:43.102000 audit: BPF prog-id=7 op=LOAD Feb 9 19:16:43.102000 audit: BPF prog-id=8 op=LOAD Feb 9 19:16:43.104738 systemd[1]: Starting systemd-udevd.service... Feb 9 19:16:43.134258 systemd-udevd[508]: Using default interface naming scheme 'v252'. Feb 9 19:16:43.145283 systemd[1]: Started systemd-udevd.service. Feb 9 19:16:43.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:43.152412 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:16:43.182041 dracut-pre-trigger[517]: rd.md=0: removing MD RAID activation Feb 9 19:16:43.243322 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:16:43.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:43.247029 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:16:43.352533 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:16:43.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:43.469151 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 9 19:16:43.469218 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 9 19:16:43.479242 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 9 19:16:43.479553 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 9 19:16:43.486833 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:68:24:fc:2d:13 Feb 9 19:16:43.489287 (udev-worker)[563]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 19:16:43.511820 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 9 19:16:43.514814 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 9 19:16:43.522884 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 9 19:16:43.528528 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:16:43.528581 kernel: GPT:9289727 != 16777215 Feb 9 19:16:43.528605 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:16:43.530706 kernel: GPT:9289727 != 16777215 Feb 9 19:16:43.532001 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:16:43.535370 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:16:43.606819 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (568) Feb 9 19:16:43.629372 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:16:43.702683 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:16:43.715197 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:16:43.720194 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:16:43.726512 systemd[1]: Starting disk-uuid.service... Feb 9 19:16:43.743115 disk-uuid[665]: Primary Header is updated. Feb 9 19:16:43.743115 disk-uuid[665]: Secondary Entries is updated. Feb 9 19:16:43.743115 disk-uuid[665]: Secondary Header is updated. Feb 9 19:16:43.764295 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:16:43.776825 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:16:44.780817 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:16:44.781572 disk-uuid[666]: The operation has completed successfully. Feb 9 19:16:44.946062 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:16:44.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:16:44.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:44.946277 systemd[1]: Finished disk-uuid.service. Feb 9 19:16:44.962151 systemd[1]: Starting verity-setup.service... Feb 9 19:16:44.996823 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 19:16:45.085250 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:16:45.090493 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:16:45.094330 systemd[1]: Finished verity-setup.service. Feb 9 19:16:45.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:45.177822 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:16:45.178285 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:16:45.181209 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:16:45.185209 systemd[1]: Starting ignition-setup.service... Feb 9 19:16:45.191264 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:16:45.219010 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 9 19:16:45.219078 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 19:16:45.221212 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 19:16:45.231806 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 19:16:45.250683 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:16:45.288208 systemd[1]: Finished ignition-setup.service. Feb 9 19:16:45.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:16:45.290621 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:16:45.350135 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:16:45.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:45.352000 audit: BPF prog-id=9 op=LOAD Feb 9 19:16:45.355296 systemd[1]: Starting systemd-networkd.service... Feb 9 19:16:45.402039 systemd-networkd[1106]: lo: Link UP Feb 9 19:16:45.402061 systemd-networkd[1106]: lo: Gained carrier Feb 9 19:16:45.405533 systemd-networkd[1106]: Enumeration completed Feb 9 19:16:45.406023 systemd-networkd[1106]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:16:45.406998 systemd[1]: Started systemd-networkd.service. Feb 9 19:16:45.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:45.414518 systemd[1]: Reached target network.target. Feb 9 19:16:45.419322 systemd[1]: Starting iscsiuio.service... Feb 9 19:16:45.424476 systemd-networkd[1106]: eth0: Link UP Feb 9 19:16:45.424666 systemd-networkd[1106]: eth0: Gained carrier Feb 9 19:16:45.431772 systemd[1]: Started iscsiuio.service. Feb 9 19:16:45.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:45.435962 systemd[1]: Starting iscsid.service... Feb 9 19:16:45.444775 iscsid[1111]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:16:45.444775 iscsid[1111]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 19:16:45.444775 iscsid[1111]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:16:45.444775 iscsid[1111]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:16:45.444775 iscsid[1111]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:16:45.464953 iscsid[1111]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:16:45.470134 systemd[1]: Started iscsid.service. Feb 9 19:16:45.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:45.485319 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:16:45.490018 systemd-networkd[1106]: eth0: DHCPv4 address 172.31.23.38/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 19:16:45.515030 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:16:45.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:45.518449 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:16:45.520399 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:16:45.522239 systemd[1]: Reached target remote-fs.target. Feb 9 19:16:45.536438 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:16:45.547653 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:16:45.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:16:45.822105 ignition[1059]: Ignition 2.14.0 Feb 9 19:16:45.822654 ignition[1059]: Stage: fetch-offline Feb 9 19:16:45.823026 ignition[1059]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:16:45.823087 ignition[1059]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:16:45.842870 ignition[1059]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:16:45.845549 ignition[1059]: Ignition finished successfully Feb 9 19:16:45.848163 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:16:45.860959 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 9 19:16:45.861509 kernel: audit: type=1130 audit(1707506205.850:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:45.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:45.853053 systemd[1]: Starting ignition-fetch.service... 
Feb 9 19:16:45.871363 ignition[1130]: Ignition 2.14.0 Feb 9 19:16:45.871392 ignition[1130]: Stage: fetch Feb 9 19:16:45.871701 ignition[1130]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:16:45.871758 ignition[1130]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:16:45.887594 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:16:45.889985 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:16:45.910579 ignition[1130]: INFO : PUT result: OK Feb 9 19:16:45.915821 ignition[1130]: DEBUG : parsed url from cmdline: "" Feb 9 19:16:45.915821 ignition[1130]: INFO : no config URL provided Feb 9 19:16:45.915821 ignition[1130]: INFO : reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:16:45.915821 ignition[1130]: INFO : no config at "/usr/lib/ignition/user.ign" Feb 9 19:16:45.915821 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:16:45.926166 ignition[1130]: INFO : PUT result: OK Feb 9 19:16:45.926166 ignition[1130]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 9 19:16:45.930816 ignition[1130]: INFO : GET result: OK Feb 9 19:16:45.930816 ignition[1130]: DEBUG : parsing config with SHA512: a325edd22d51724f1eceffe81f1785f854a1fa5feb63c5bdc90d5ad7f72d8bcff1bc19b89677a1c51cd526611da7f96f98cf3e173a6cea42a4379469dd8be959 Feb 9 19:16:45.959767 unknown[1130]: fetched base config from "system" Feb 9 19:16:45.959822 unknown[1130]: fetched base config from "system" Feb 9 19:16:45.959838 unknown[1130]: fetched user config from "aws" Feb 9 19:16:45.965479 ignition[1130]: fetch: fetch complete Feb 9 19:16:45.965507 ignition[1130]: fetch: fetch passed Feb 9 19:16:45.965601 ignition[1130]: Ignition finished successfully Feb 9 19:16:45.972110 systemd[1]: Finished ignition-fetch.service. 
Feb 9 19:16:45.982314 kernel: audit: type=1130 audit(1707506205.972:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:45.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:45.983680 systemd[1]: Starting ignition-kargs.service... Feb 9 19:16:46.000587 ignition[1136]: Ignition 2.14.0 Feb 9 19:16:46.000621 ignition[1136]: Stage: kargs Feb 9 19:16:46.000974 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:16:46.001035 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:16:46.014903 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:16:46.017187 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:16:46.020317 ignition[1136]: INFO : PUT result: OK Feb 9 19:16:46.025602 ignition[1136]: kargs: kargs passed Feb 9 19:16:46.025926 ignition[1136]: Ignition finished successfully Feb 9 19:16:46.030498 systemd[1]: Finished ignition-kargs.service. Feb 9 19:16:46.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.035076 systemd[1]: Starting ignition-disks.service... Feb 9 19:16:46.044052 kernel: audit: type=1130 audit(1707506206.031:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:16:46.051515 ignition[1142]: Ignition 2.14.0 Feb 9 19:16:46.053266 ignition[1142]: Stage: disks Feb 9 19:16:46.054925 ignition[1142]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:16:46.057167 ignition[1142]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:16:46.072880 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:16:46.075456 ignition[1142]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:16:46.078898 ignition[1142]: INFO : PUT result: OK Feb 9 19:16:46.083642 ignition[1142]: disks: disks passed Feb 9 19:16:46.083747 ignition[1142]: Ignition finished successfully Feb 9 19:16:46.087833 systemd[1]: Finished ignition-disks.service. Feb 9 19:16:46.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.088476 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:16:46.108324 kernel: audit: type=1130 audit(1707506206.086:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.088683 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:16:46.089323 systemd[1]: Reached target local-fs.target. Feb 9 19:16:46.089642 systemd[1]: Reached target sysinit.target. Feb 9 19:16:46.090284 systemd[1]: Reached target basic.target. Feb 9 19:16:46.099073 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:16:46.148114 systemd-fsck[1150]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 9 19:16:46.154991 systemd[1]: Finished systemd-fsck-root.service. 
Feb 9 19:16:46.170726 kernel: audit: type=1130 audit(1707506206.155:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.158319 systemd[1]: Mounting sysroot.mount... Feb 9 19:16:46.178821 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:16:46.180648 systemd[1]: Mounted sysroot.mount. Feb 9 19:16:46.195108 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:16:46.212181 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:16:46.218574 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:16:46.218660 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:16:46.218721 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:16:46.222362 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:16:46.241456 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:16:46.246314 systemd[1]: Starting initrd-setup-root.service... 
Feb 9 19:16:46.261822 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1167) Feb 9 19:16:46.265267 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:16:46.273154 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 9 19:16:46.273222 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 19:16:46.273255 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 19:16:46.281830 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 19:16:46.286294 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:16:46.292369 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:16:46.302261 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:16:46.310987 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:16:46.520584 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:16:46.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.525182 systemd[1]: Starting ignition-mount.service... Feb 9 19:16:46.537999 kernel: audit: type=1130 audit(1707506206.521:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.537207 systemd[1]: Starting sysroot-boot.service... Feb 9 19:16:46.545980 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:16:46.546208 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Feb 9 19:16:46.579502 ignition[1233]: INFO : Ignition 2.14.0 Feb 9 19:16:46.581452 ignition[1233]: INFO : Stage: mount Feb 9 19:16:46.581452 ignition[1233]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:16:46.585337 ignition[1233]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:16:46.591814 systemd[1]: Finished sysroot-boot.service. Feb 9 19:16:46.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.602815 kernel: audit: type=1130 audit(1707506206.593:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.611705 ignition[1233]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:16:46.614186 ignition[1233]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:16:46.617219 ignition[1233]: INFO : PUT result: OK Feb 9 19:16:46.622337 ignition[1233]: INFO : mount: mount passed Feb 9 19:16:46.624046 ignition[1233]: INFO : Ignition finished successfully Feb 9 19:16:46.625483 systemd[1]: Finished ignition-mount.service. Feb 9 19:16:46.638298 kernel: audit: type=1130 audit(1707506206.629:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.631959 systemd[1]: Starting ignition-files.service... 
Feb 9 19:16:46.648505 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:16:46.665818 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1242) Feb 9 19:16:46.671988 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 9 19:16:46.672034 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 19:16:46.672059 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 19:16:46.680807 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 19:16:46.685749 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:16:46.713118 ignition[1261]: INFO : Ignition 2.14.0 Feb 9 19:16:46.713118 ignition[1261]: INFO : Stage: files Feb 9 19:16:46.716518 ignition[1261]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:16:46.716518 ignition[1261]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:16:46.733310 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:16:46.736266 ignition[1261]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:16:46.738852 ignition[1261]: INFO : PUT result: OK Feb 9 19:16:46.744083 ignition[1261]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:16:46.748017 ignition[1261]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:16:46.748017 ignition[1261]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:16:46.777333 ignition[1261]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:16:46.782666 ignition[1261]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:16:46.786429 unknown[1261]: wrote ssh authorized keys file for user: core Feb 9 19:16:46.788889 ignition[1261]: 
INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:16:46.792467 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 19:16:46.796310 ignition[1261]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Feb 9 19:16:47.235275 ignition[1261]: INFO : GET result: OK Feb 9 19:16:47.270950 systemd-networkd[1106]: eth0: Gained IPv6LL Feb 9 19:16:47.802851 ignition[1261]: DEBUG : file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Feb 9 19:16:47.807941 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 19:16:47.811714 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 19:16:47.811714 ignition[1261]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Feb 9 19:16:48.184394 ignition[1261]: INFO : GET result: OK Feb 9 19:16:48.460420 ignition[1261]: DEBUG : file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Feb 9 19:16:48.464983 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 19:16:48.464983 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 19:16:48.472099 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:16:48.481000 ignition[1261]: INFO : op(1): 
[started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem565521012" Feb 9 19:16:48.487684 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1261) Feb 9 19:16:48.487730 ignition[1261]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem565521012": device or resource busy Feb 9 19:16:48.487730 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem565521012", trying btrfs: device or resource busy Feb 9 19:16:48.487730 ignition[1261]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem565521012" Feb 9 19:16:48.503962 ignition[1261]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem565521012" Feb 9 19:16:48.516810 ignition[1261]: INFO : op(3): [started] unmounting "/mnt/oem565521012" Feb 9 19:16:48.520262 ignition[1261]: INFO : op(3): [finished] unmounting "/mnt/oem565521012" Feb 9 19:16:48.519370 systemd[1]: mnt-oem565521012.mount: Deactivated successfully. 
Feb 9 19:16:48.524638 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 19:16:48.524638 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:16:48.524638 ignition[1261]: INFO : GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1
Feb 9 19:16:48.639719 ignition[1261]: INFO : GET result: OK
Feb 9 19:16:49.219386 ignition[1261]: DEBUG : file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970
Feb 9 19:16:49.224633 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:16:49.224633 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:16:49.224633 ignition[1261]: INFO : GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1
Feb 9 19:16:49.288869 ignition[1261]: INFO : GET result: OK
Feb 9 19:16:50.532539 ignition[1261]: DEBUG : file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348
Feb 9 19:16:50.537504 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:16:50.537504 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:16:50.537504 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:16:50.537504 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:16:50.550805 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:16:50.550805 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:16:50.550805 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:16:50.550805 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 9 19:16:50.564686 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:16:50.583386 ignition[1261]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem266798155"
Feb 9 19:16:50.583386 ignition[1261]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem266798155": device or resource busy
Feb 9 19:16:50.583386 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem266798155", trying btrfs: device or resource busy
Feb 9 19:16:50.583386 ignition[1261]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem266798155"
Feb 9 19:16:50.613375 ignition[1261]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem266798155"
Feb 9 19:16:50.613375 ignition[1261]: INFO : op(6): [started] unmounting "/mnt/oem266798155"
Feb 9 19:16:50.613375 ignition[1261]: INFO : op(6): [finished] unmounting "/mnt/oem266798155"
Feb 9 19:16:50.604323 systemd[1]: mnt-oem266798155.mount: Deactivated successfully.
Feb 9 19:16:50.622777 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 9 19:16:50.626595 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 9 19:16:50.630252 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:16:50.641599 ignition[1261]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3868416450"
Feb 9 19:16:50.644470 ignition[1261]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3868416450": device or resource busy
Feb 9 19:16:50.644470 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3868416450", trying btrfs: device or resource busy
Feb 9 19:16:50.644470 ignition[1261]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3868416450"
Feb 9 19:16:50.666379 ignition[1261]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3868416450"
Feb 9 19:16:50.666379 ignition[1261]: INFO : op(9): [started] unmounting "/mnt/oem3868416450"
Feb 9 19:16:50.659589 systemd[1]: mnt-oem3868416450.mount: Deactivated successfully.
Feb 9 19:16:50.674911 ignition[1261]: INFO : op(9): [finished] unmounting "/mnt/oem3868416450"
Feb 9 19:16:50.674911 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 9 19:16:50.674911 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:16:50.674911 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:16:50.691173 ignition[1261]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2518802210"
Feb 9 19:16:50.694448 ignition[1261]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2518802210": device or resource busy
Feb 9 19:16:50.697750 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2518802210", trying btrfs: device or resource busy
Feb 9 19:16:50.697750 ignition[1261]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2518802210"
Feb 9 19:16:50.709946 ignition[1261]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2518802210"
Feb 9 19:16:50.712740 ignition[1261]: INFO : op(c): [started] unmounting "/mnt/oem2518802210"
Feb 9 19:16:50.726138 ignition[1261]: INFO : op(c): [finished] unmounting "/mnt/oem2518802210"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(e): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(e): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(f): [started] processing unit "amazon-ssm-agent.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(f): op(10): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(f): op(10): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(f): [finished] processing unit "amazon-ssm-agent.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(11): [started] processing unit "nvidia.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(11): [finished] processing unit "nvidia.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(12): [started] processing unit "prepare-critools.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(12): [finished] processing unit "prepare-critools.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(16): [started] setting preset to enabled for "amazon-ssm-agent.service"
Feb 9 19:16:50.730814 ignition[1261]: INFO : files: op(16): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Feb 9 19:16:50.791174 ignition[1261]: INFO : files: op(17): [started] setting preset to enabled for "nvidia.service"
Feb 9 19:16:50.791174 ignition[1261]: INFO : files: op(17): [finished] setting preset to enabled for "nvidia.service"
Feb 9 19:16:50.791174 ignition[1261]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:16:50.791174 ignition[1261]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:16:50.791174 ignition[1261]: INFO : files: op(19): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:16:50.791174 ignition[1261]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:16:50.791174 ignition[1261]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 19:16:50.791174 ignition[1261]: INFO : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 19:16:50.813953 ignition[1261]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:16:50.817478 ignition[1261]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:16:50.820895 ignition[1261]: INFO : files: files passed
Feb 9 19:16:50.822628 ignition[1261]: INFO : Ignition finished successfully
Feb 9 19:16:50.829584 systemd[1]: Finished ignition-files.service.
Feb 9 19:16:50.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:50.840641 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:16:50.846423 kernel: audit: type=1130 audit(1707506210.831:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:50.843101 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:16:50.845395 systemd[1]: Starting ignition-quench.service...
Feb 9 19:16:50.857153 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:16:50.859144 systemd[1]: Finished ignition-quench.service.
Feb 9 19:16:50.877238 kernel: audit: type=1130 audit(1707506210.859:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:50.877276 kernel: audit: type=1131 audit(1707506210.859:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:50.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:50.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:50.882881 initrd-setup-root-after-ignition[1286]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:16:50.887344 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:16:50.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:50.891322 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:16:50.908242 kernel: audit: type=1130 audit(1707506210.889:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:50.901995 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:16:50.930497 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:16:50.932771 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:16:50.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:50.936341 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:16:50.951730 kernel: audit: type=1130 audit(1707506210.934:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:50.951766 kernel: audit: type=1131 audit(1707506210.934:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:50.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:50.953469 systemd[1]: Reached target initrd.target.
Feb 9 19:16:50.955046 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:16:50.956604 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:16:50.980631 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:16:50.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:50.991777 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:16:50.995845 kernel: audit: type=1130 audit(1707506210.983:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.011056 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:16:51.014634 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:16:51.018336 systemd[1]: Stopped target timers.target.
Feb 9 19:16:51.021412 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:16:51.023565 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:16:51.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.031030 systemd[1]: Stopped target initrd.target.
Feb 9 19:16:51.035712 systemd[1]: Stopped target basic.target.
Feb 9 19:16:51.037323 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:16:51.039230 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:16:51.063057 kernel: audit: type=1131 audit(1707506211.025:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.041110 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:16:51.042993 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:16:51.047118 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:16:51.055214 systemd[1]: Stopped target sysinit.target.
Feb 9 19:16:51.057737 systemd[1]: Stopped target local-fs.target.
Feb 9 19:16:51.061709 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:16:51.097950 kernel: audit: type=1131 audit(1707506211.074:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.097996 kernel: audit: type=1131 audit(1707506211.078:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.071285 systemd[1]: Stopped target swap.target.
Feb 9 19:16:51.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.108686 kernel: audit: type=1131 audit(1707506211.096:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.073453 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:16:51.073741 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:16:51.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.076301 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:16:51.078559 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:16:51.078846 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:16:51.130096 ignition[1300]: INFO : Ignition 2.14.0
Feb 9 19:16:51.130096 ignition[1300]: INFO : Stage: umount
Feb 9 19:16:51.130096 ignition[1300]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:51.130096 ignition[1300]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:16:51.080892 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:16:51.081174 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:16:51.098202 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:16:51.099028 systemd[1]: Stopped ignition-files.service.
Feb 9 19:16:51.110886 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:16:51.150867 systemd[1]: Stopping iscsiuio.service...
Feb 9 19:16:51.153204 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:16:51.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.153457 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:16:51.177565 ignition[1300]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:16:51.177565 ignition[1300]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:16:51.161092 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:16:51.175904 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:16:51.182194 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:16:51.184340 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:16:51.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.192854 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:16:51.199920 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:16:51.201716 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:16:51.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.207856 ignition[1300]: INFO : PUT result: OK
Feb 9 19:16:51.210931 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:16:51.212962 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:16:51.216536 ignition[1300]: INFO : umount: umount passed
Feb 9 19:16:51.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.219941 ignition[1300]: INFO : Ignition finished successfully
Feb 9 19:16:51.222321 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:16:51.227070 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:16:51.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.230520 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:16:51.232595 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:16:51.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.235889 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:16:51.235991 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:16:51.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.240739 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:16:51.240841 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:16:51.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.245510 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 19:16:51.245593 systemd[1]: Stopped ignition-fetch.service.
Feb 9 19:16:51.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.250023 systemd[1]: Stopped target network.target.
Feb 9 19:16:51.250128 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:16:51.254884 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:16:51.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.258296 systemd[1]: Stopped target paths.target.
Feb 9 19:16:51.258409 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:16:51.266849 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:16:51.270093 systemd[1]: Stopped target slices.target.
Feb 9 19:16:51.271607 systemd[1]: Stopped target sockets.target.
Feb 9 19:16:51.274463 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:16:51.277590 systemd[1]: Closed iscsid.socket.
Feb 9 19:16:51.280185 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:16:51.280270 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:16:51.283242 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:16:51.286424 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:16:51.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.289460 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:16:51.289554 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:16:51.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.294973 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:16:51.298171 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:16:51.302867 systemd-networkd[1106]: eth0: DHCPv6 lease lost
Feb 9 19:16:51.307586 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:16:51.309997 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:16:51.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.313690 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:16:51.314861 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:16:51.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.322000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:16:51.324659 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:16:51.324744 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:16:51.328191 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:16:51.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.334000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:16:51.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.333595 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:16:51.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.333715 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:16:51.335763 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:16:51.336154 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:16:51.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.338940 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:16:51.339028 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:16:51.342756 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:16:51.352687 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:16:51.352951 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:16:51.357675 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:16:51.358009 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:16:51.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.368711 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:16:51.368831 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:16:51.373920 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:16:51.373997 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:16:51.377399 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:16:51.380854 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:16:51.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.383913 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:16:51.384000 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:16:51.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.388981 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:16:51.389060 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:16:51.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.395400 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:16:51.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.397504 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:16:51.397613 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:16:51.418175 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 19:16:51.418547 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 19:16:51.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:51.424318 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 19:16:51.429081 systemd[1]: Starting initrd-switch-root.service...
Feb 9 19:16:51.444324 systemd[1]: Switching root.
Feb 9 19:16:51.476879 iscsid[1111]: iscsid shutting down.
Feb 9 19:16:51.478421 systemd-journald[308]: Received SIGTERM from PID 1 (n/a).
Feb 9 19:16:51.478497 systemd-journald[308]: Journal stopped
Feb 9 19:16:56.933885 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:16:56.934005 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:16:56.934044 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:16:56.934075 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:16:56.934107 kernel: SELinux: policy capability open_perms=1 Feb 9 19:16:56.934138 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:16:56.934169 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:16:56.934199 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:16:56.934231 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:16:56.934262 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:16:56.934301 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:16:56.934332 systemd[1]: Successfully loaded SELinux policy in 119.158ms. Feb 9 19:16:56.934394 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.758ms. Feb 9 19:16:56.934429 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:16:56.934461 systemd[1]: Detected virtualization amazon. Feb 9 19:16:56.934491 systemd[1]: Detected architecture arm64. Feb 9 19:16:56.934522 systemd[1]: Detected first boot. Feb 9 19:16:56.934556 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:16:56.934598 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:16:56.934633 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:16:56.934672 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 19:16:56.934709 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:16:56.934749 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:16:56.934863 kernel: kauditd_printk_skb: 46 callbacks suppressed Feb 9 19:16:56.934894 kernel: audit: type=1334 audit(1707506216.490:87): prog-id=12 op=LOAD Feb 9 19:16:56.934926 kernel: audit: type=1334 audit(1707506216.492:88): prog-id=3 op=UNLOAD Feb 9 19:16:56.934961 kernel: audit: type=1334 audit(1707506216.492:89): prog-id=13 op=LOAD Feb 9 19:16:56.934993 kernel: audit: type=1334 audit(1707506216.495:90): prog-id=14 op=LOAD Feb 9 19:16:56.935024 kernel: audit: type=1334 audit(1707506216.495:91): prog-id=4 op=UNLOAD Feb 9 19:16:56.935057 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:16:56.935089 kernel: audit: type=1334 audit(1707506216.495:92): prog-id=5 op=UNLOAD Feb 9 19:16:56.935119 systemd[1]: Stopped iscsid.service. Feb 9 19:16:56.935150 kernel: audit: type=1131 audit(1707506216.498:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:56.935183 kernel: audit: type=1334 audit(1707506216.510:94): prog-id=12 op=UNLOAD Feb 9 19:16:56.935217 kernel: audit: type=1131 audit(1707506216.526:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:56.935251 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:16:56.935282 systemd[1]: Stopped initrd-switch-root.service. 
Feb 9 19:16:56.935314 kernel: audit: type=1130 audit(1707506216.541:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.935348 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 19:16:56.935388 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 19:16:56.935430 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 19:16:56.935466 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 9 19:16:56.935496 systemd[1]: Created slice system-getty.slice.
Feb 9 19:16:56.935526 systemd[1]: Created slice system-modprobe.slice.
Feb 9 19:16:56.935558 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 19:16:56.935591 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 19:16:56.935621 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 19:16:56.935651 systemd[1]: Created slice user.slice.
Feb 9 19:16:56.935682 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:16:56.935715 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 19:16:56.935750 systemd[1]: Set up automount boot.automount.
Feb 9 19:16:56.935816 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 19:16:56.935858 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 19:16:56.935892 systemd[1]: Stopped target initrd-fs.target.
Feb 9 19:16:56.935924 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 19:16:56.935960 systemd[1]: Reached target integritysetup.target.
Feb 9 19:16:56.940758 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:16:56.957158 systemd[1]: Reached target remote-fs.target.
Feb 9 19:16:56.957204 systemd[1]: Reached target slices.target.
Feb 9 19:16:56.957244 systemd[1]: Reached target swap.target.
Feb 9 19:16:56.957276 systemd[1]: Reached target torcx.target.
Feb 9 19:16:56.957307 systemd[1]: Reached target veritysetup.target.
Feb 9 19:16:56.957340 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 19:16:56.957374 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 19:16:56.957403 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:16:56.957435 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:16:56.957464 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:16:56.957495 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 19:16:56.957526 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 19:16:56.957560 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 19:16:56.957592 systemd[1]: Mounting media.mount...
Feb 9 19:16:56.957621 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 19:16:56.957652 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 19:16:56.957684 systemd[1]: Mounting tmp.mount...
Feb 9 19:16:56.957713 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 19:16:56.957742 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 19:16:56.957772 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:16:56.957827 systemd[1]: Starting modprobe@configfs.service...
Feb 9 19:16:56.957865 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 19:16:56.957898 systemd[1]: Starting modprobe@drm.service...
Feb 9 19:16:56.957928 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 19:16:56.957957 systemd[1]: Starting modprobe@fuse.service...
Feb 9 19:16:56.957986 systemd[1]: Starting modprobe@loop.service...
Feb 9 19:16:56.958017 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 19:16:56.958048 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 19:16:56.958077 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 19:16:56.958108 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 19:16:56.958143 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 19:16:56.958174 systemd[1]: Stopped systemd-journald.service.
Feb 9 19:16:56.958206 systemd[1]: Starting systemd-journald.service...
Feb 9 19:16:56.958237 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:16:56.958266 kernel: loop: module loaded
Feb 9 19:16:56.958296 systemd[1]: Starting systemd-network-generator.service...
Feb 9 19:16:56.958324 kernel: fuse: init (API version 7.34)
Feb 9 19:16:56.958353 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 19:16:56.958382 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:16:56.958416 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 19:16:56.958446 systemd[1]: Stopped verity-setup.service.
Feb 9 19:16:56.958475 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 19:16:56.958507 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 19:16:56.958537 systemd[1]: Mounted media.mount.
Feb 9 19:16:56.958566 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 19:16:56.958597 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 19:16:56.958628 systemd[1]: Mounted tmp.mount.
Feb 9 19:16:56.958657 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:16:56.958691 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:16:56.958721 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:16:56.958752 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:16:56.958797 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:16:56.958837 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:16:56.958871 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:16:56.958900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:16:56.958929 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:16:56.958961 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 19:16:56.958993 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:16:56.959022 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:16:56.959051 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:16:56.959080 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:16:56.959113 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:16:56.959149 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:16:56.959180 systemd[1]: Reached target network-pre.target.
Feb 9 19:16:56.959212 systemd-journald[1412]: Journal started
Feb 9 19:16:56.959308 systemd-journald[1412]: Runtime Journal (/run/log/journal/ec2ce6a200ad5fe370057b02c73e0ebb) is 8.0M, max 75.4M, 67.4M free.
Feb 9 19:16:52.197000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 19:16:52.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:16:52.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:16:52.394000 audit: BPF prog-id=10 op=LOAD
Feb 9 19:16:52.394000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 19:16:52.394000 audit: BPF prog-id=11 op=LOAD
Feb 9 19:16:52.394000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 19:16:52.652000 audit[1333]: AVC avc: denied { associate } for pid=1333 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:16:52.652000 audit[1333]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458d4 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1316 pid=1333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:16:52.652000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:16:52.656000 audit[1333]: AVC avc: denied { associate } for pid=1333 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 19:16:52.656000 audit[1333]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001459b9 a2=1ed a3=0 items=2 ppid=1316 pid=1333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:16:52.656000 audit: CWD cwd="/"
Feb 9 19:16:52.656000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:16:52.656000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:16:52.656000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:16:56.490000 audit: BPF prog-id=12 op=LOAD
Feb 9 19:16:56.492000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 19:16:56.492000 audit: BPF prog-id=13 op=LOAD
Feb 9 19:16:56.495000 audit: BPF prog-id=14 op=LOAD
Feb 9 19:16:56.495000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 19:16:56.495000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 19:16:56.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.510000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 19:16:56.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.801000 audit: BPF prog-id=15 op=LOAD
Feb 9 19:16:56.801000 audit: BPF prog-id=16 op=LOAD
Feb 9 19:16:56.801000 audit: BPF prog-id=17 op=LOAD
Feb 9 19:16:56.801000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 19:16:56.801000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 19:16:56.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.970389 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:16:56.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.976206 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:16:56.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.929000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:16:56.929000 audit[1412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffdd9ad1b0 a2=4000 a3=1 items=0 ppid=1 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:16:56.929000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:16:56.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:56.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:52.621736 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:16:56.488497 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 19:16:52.632076 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:16:56.499681 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 19:16:52.632149 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:16:52.632221 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 19:16:52.632248 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 19:16:52.632317 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 19:16:52.632349 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 19:16:52.632766 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 19:16:56.984029 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:16:52.632877 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:16:52.632915 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:16:56.990988 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 19:16:52.643287 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 19:16:52.643377 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 19:16:52.643426 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 19:16:52.643468 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 19:16:52.643516 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 19:16:52.643555 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 19:16:55.655905 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:55Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:16:55.656462 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:55Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:16:55.656716 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:55Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:16:57.000848 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 19:16:57.003515 systemd[1]: Starting systemd-random-seed.service...
Feb 9 19:16:55.657189 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:55Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:16:55.657293 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:55Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 19:16:55.657431 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-02-09T19:16:55Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 19:16:57.012832 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 19:16:57.020354 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:16:57.029508 systemd[1]: Started systemd-journald.service.
Feb 9 19:16:57.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:57.031265 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 19:16:57.034272 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 19:16:57.038960 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 19:16:57.060557 systemd-journald[1412]: Time spent on flushing to /var/log/journal/ec2ce6a200ad5fe370057b02c73e0ebb is 75.682ms for 1141 entries.
Feb 9 19:16:57.060557 systemd-journald[1412]: System Journal (/var/log/journal/ec2ce6a200ad5fe370057b02c73e0ebb) is 8.0M, max 195.6M, 187.6M free.
Feb 9 19:16:57.171908 systemd-journald[1412]: Received client request to flush runtime journal.
Feb 9 19:16:57.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:57.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:57.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:57.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:57.067969 systemd[1]: Finished systemd-random-seed.service.
Feb 9 19:16:57.070098 systemd[1]: Reached target first-boot-complete.target.
Feb 9 19:16:57.086989 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:16:57.118338 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 19:16:57.122445 systemd[1]: Starting systemd-sysusers.service...
Feb 9 19:16:57.165645 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:16:57.169973 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 19:16:57.174519 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 19:16:57.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:57.192108 udevadm[1453]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 19:16:57.243172 systemd[1]: Finished systemd-sysusers.service.
Feb 9 19:16:57.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:57.929502 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:16:57.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:57.930000 audit: BPF prog-id=18 op=LOAD
Feb 9 19:16:57.930000 audit: BPF prog-id=19 op=LOAD
Feb 9 19:16:57.930000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:16:57.930000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:16:57.933737 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:16:57.972299 systemd-udevd[1454]: Using default interface naming scheme 'v252'.
Feb 9 19:16:58.042389 systemd[1]: Started systemd-udevd.service.
Feb 9 19:16:58.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:58.046000 audit: BPF prog-id=20 op=LOAD
Feb 9 19:16:58.049617 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:16:58.058000 audit: BPF prog-id=21 op=LOAD
Feb 9 19:16:58.058000 audit: BPF prog-id=22 op=LOAD
Feb 9 19:16:58.059000 audit: BPF prog-id=23 op=LOAD
Feb 9 19:16:58.062085 systemd[1]: Starting systemd-userdbd.service...
Feb 9 19:16:58.140795 systemd[1]: Started systemd-userdbd.service.
Feb 9 19:16:58.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:58.159529 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 19:16:58.196138 (udev-worker)[1468]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:16:58.286709 systemd-networkd[1460]: lo: Link UP
Feb 9 19:16:58.286733 systemd-networkd[1460]: lo: Gained carrier
Feb 9 19:16:58.287696 systemd-networkd[1460]: Enumeration completed
Feb 9 19:16:58.287900 systemd[1]: Started systemd-networkd.service.
Feb 9 19:16:58.287924 systemd-networkd[1460]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:16:58.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:58.294241 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 19:16:58.302835 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:16:58.302941 systemd-networkd[1460]: eth0: Link UP
Feb 9 19:16:58.303227 systemd-networkd[1460]: eth0: Gained carrier
Feb 9 19:16:58.311049 systemd-networkd[1460]: eth0: DHCPv4 address 172.31.23.38/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 9 19:16:58.335852 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1466)
Feb 9 19:16:58.510236 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:16:58.522548 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 19:16:58.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:58.526920 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 19:16:58.573169 lvm[1573]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:16:58.611384 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 19:16:58.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:58.613479 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:16:58.617355 systemd[1]: Starting lvm2-activation.service...
Feb 9 19:16:58.625253 lvm[1574]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:16:58.660491 systemd[1]: Finished lvm2-activation.service.
Feb 9 19:16:58.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:58.662481 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:16:58.664267 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 19:16:58.664326 systemd[1]: Reached target local-fs.target.
Feb 9 19:16:58.665975 systemd[1]: Reached target machines.target.
Feb 9 19:16:58.670154 systemd[1]: Starting ldconfig.service...
Feb 9 19:16:58.685086 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 19:16:58.685202 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:16:58.688302 systemd[1]: Starting systemd-boot-update.service...
Feb 9 19:16:58.694241 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 19:16:58.702055 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 19:16:58.705481 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:16:58.705737 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:16:58.710121 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:16:58.714881 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1576 (bootctl) Feb 9 19:16:58.717734 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:16:58.752104 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:16:58.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:58.768737 systemd-tmpfiles[1579]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:16:58.784879 systemd-tmpfiles[1579]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:16:58.805971 systemd-tmpfiles[1579]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:16:58.853498 systemd-fsck[1584]: fsck.fat 4.2 (2021-01-31) Feb 9 19:16:58.853498 systemd-fsck[1584]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 9 19:16:58.860299 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:16:58.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:58.865202 systemd[1]: Mounting boot.mount... Feb 9 19:16:58.887146 systemd[1]: Mounted boot.mount. Feb 9 19:16:58.919941 systemd[1]: Finished systemd-boot-update.service. 
Feb 9 19:16:58.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:59.127252 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:16:59.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:59.131952 systemd[1]: Starting audit-rules.service... Feb 9 19:16:59.136172 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:16:59.143481 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:16:59.145000 audit: BPF prog-id=24 op=LOAD Feb 9 19:16:59.153000 audit: BPF prog-id=25 op=LOAD Feb 9 19:16:59.150592 systemd[1]: Starting systemd-resolved.service... Feb 9 19:16:59.157390 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:16:59.163080 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:16:59.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:59.166139 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:16:59.168434 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:16:59.183000 audit[1604]: SYSTEM_BOOT pid=1604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Feb 9 19:16:59.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:59.190063 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:16:59.354897 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:16:59.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:59.356973 systemd[1]: Reached target time-set.target. Feb 9 19:16:59.400653 systemd-resolved[1602]: Positive Trust Anchors: Feb 9 19:16:59.400686 systemd-resolved[1602]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:16:59.400738 systemd-resolved[1602]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:16:59.507018 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:16:59.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:16:59.549000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:16:59.549000 audit[1619]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffee74db50 a2=420 a3=0 items=0 ppid=1598 pid=1619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:59.549000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:16:59.551749 augenrules[1619]: No rules Feb 9 19:16:59.553878 systemd[1]: Finished audit-rules.service. Feb 9 19:16:59.555098 systemd-timesyncd[1603]: Contacted time server 96.231.25.205:123 (0.flatcar.pool.ntp.org). Feb 9 19:16:59.556009 systemd-timesyncd[1603]: Initial clock synchronization to Fri 2024-02-09 19:16:59.949535 UTC. Feb 9 19:16:59.601733 systemd-resolved[1602]: Defaulting to hostname 'linux'. Feb 9 19:16:59.607049 systemd[1]: Started systemd-resolved.service. Feb 9 19:16:59.610681 systemd[1]: Reached target network.target. Feb 9 19:16:59.612354 systemd[1]: Reached target nss-lookup.target. Feb 9 19:16:59.658739 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:16:59.659874 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:16:59.850398 ldconfig[1575]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:16:59.856296 systemd[1]: Finished ldconfig.service. Feb 9 19:16:59.860417 systemd[1]: Starting systemd-update-done.service... Feb 9 19:16:59.873731 systemd[1]: Finished systemd-update-done.service. Feb 9 19:16:59.875739 systemd[1]: Reached target sysinit.target. Feb 9 19:16:59.877563 systemd[1]: Started motdgen.path. Feb 9 19:16:59.879059 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Feb 9 19:16:59.881456 systemd[1]: Started logrotate.timer. Feb 9 19:16:59.883045 systemd[1]: Started mdadm.timer. Feb 9 19:16:59.884399 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:16:59.886098 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:16:59.886151 systemd[1]: Reached target paths.target. Feb 9 19:16:59.887700 systemd[1]: Reached target timers.target. Feb 9 19:16:59.889763 systemd[1]: Listening on dbus.socket. Feb 9 19:16:59.893092 systemd[1]: Starting docker.socket... Feb 9 19:16:59.899710 systemd[1]: Listening on sshd.socket. Feb 9 19:16:59.901534 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:16:59.902444 systemd[1]: Listening on docker.socket. Feb 9 19:16:59.904214 systemd[1]: Reached target sockets.target. Feb 9 19:16:59.905853 systemd[1]: Reached target basic.target. Feb 9 19:16:59.907437 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:16:59.907488 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:16:59.909512 systemd[1]: Starting containerd.service... Feb 9 19:16:59.914860 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:16:59.921018 systemd[1]: Starting dbus.service... Feb 9 19:16:59.927119 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:16:59.934125 systemd[1]: Starting extend-filesystems.service... Feb 9 19:16:59.936202 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:16:59.938768 systemd[1]: Starting motdgen.service... Feb 9 19:16:59.942633 systemd[1]: Starting prepare-cni-plugins.service... 
Feb 9 19:16:59.947154 systemd[1]: Starting prepare-critools.service... Feb 9 19:16:59.951368 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:16:59.958078 systemd[1]: Starting sshd-keygen.service... Feb 9 19:16:59.960621 jq[1630]: false Feb 9 19:16:59.964895 systemd[1]: Starting systemd-logind.service... Feb 9 19:16:59.967018 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:16:59.967196 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:16:59.968320 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:16:59.969884 systemd[1]: Starting update-engine.service... Feb 9 19:16:59.975075 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:16:59.993350 jq[1640]: true Feb 9 19:17:00.004085 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:17:00.004515 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:17:00.023547 tar[1642]: ./ Feb 9 19:17:00.023547 tar[1642]: ./loopback Feb 9 19:17:00.021444 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:17:00.040870 tar[1643]: crictl Feb 9 19:17:00.021805 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:17:00.050189 jq[1648]: true Feb 9 19:17:00.069809 dbus-daemon[1629]: [system] SELinux support is enabled Feb 9 19:17:00.071253 systemd-networkd[1460]: eth0: Gained IPv6LL Feb 9 19:17:00.075630 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:17:00.078066 systemd[1]: Started dbus.service. Feb 9 19:17:00.083470 systemd[1]: Reached target network-online.target. Feb 9 19:17:00.087775 systemd[1]: Started amazon-ssm-agent.service. 
Feb 9 19:17:00.092493 systemd[1]: Started nvidia.service. Feb 9 19:17:00.095129 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:17:00.095193 systemd[1]: Reached target system-config.target. Feb 9 19:17:00.097267 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:17:00.097314 systemd[1]: Reached target user-config.target. Feb 9 19:17:00.149848 dbus-daemon[1629]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1460 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 19:17:00.182327 dbus-daemon[1629]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:17:00.214937 systemd[1]: Starting systemd-hostnamed.service... Feb 9 19:17:00.275595 extend-filesystems[1631]: Found nvme0n1 Feb 9 19:17:00.275595 extend-filesystems[1631]: Found nvme0n1p1 Feb 9 19:17:00.275595 extend-filesystems[1631]: Found nvme0n1p2 Feb 9 19:17:00.275595 extend-filesystems[1631]: Found nvme0n1p3 Feb 9 19:17:00.275595 extend-filesystems[1631]: Found usr Feb 9 19:17:00.275595 extend-filesystems[1631]: Found nvme0n1p4 Feb 9 19:17:00.275595 extend-filesystems[1631]: Found nvme0n1p6 Feb 9 19:17:00.275595 extend-filesystems[1631]: Found nvme0n1p7 Feb 9 19:17:00.275595 extend-filesystems[1631]: Found nvme0n1p9 Feb 9 19:17:00.275595 extend-filesystems[1631]: Checking size of /dev/nvme0n1p9 Feb 9 19:17:00.319283 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:17:00.319677 systemd[1]: Finished motdgen.service. Feb 9 19:17:00.365320 bash[1689]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:17:00.366731 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Feb 9 19:17:00.395806 update_engine[1639]: I0209 19:17:00.395166 1639 main.cc:92] Flatcar Update Engine starting Feb 9 19:17:00.400263 systemd[1]: Started update-engine.service. Feb 9 19:17:00.405179 systemd[1]: Started locksmithd.service. Feb 9 19:17:00.412187 extend-filesystems[1631]: Resized partition /dev/nvme0n1p9 Feb 9 19:17:00.416433 update_engine[1639]: I0209 19:17:00.416371 1639 update_check_scheduler.cc:74] Next update check in 2m38s Feb 9 19:17:00.451255 extend-filesystems[1695]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:17:00.482861 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 19:17:00.498354 amazon-ssm-agent[1666]: 2024/02/09 19:17:00 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 19:17:00.505259 amazon-ssm-agent[1666]: Initializing new seelog logger Feb 9 19:17:00.505472 amazon-ssm-agent[1666]: New Seelog Logger Creation Complete Feb 9 19:17:00.505592 amazon-ssm-agent[1666]: 2024/02/09 19:17:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:17:00.505592 amazon-ssm-agent[1666]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:17:00.506019 amazon-ssm-agent[1666]: 2024/02/09 19:17:00 processing appconfig overrides Feb 9 19:17:00.538824 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 19:17:00.568954 extend-filesystems[1695]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 19:17:00.568954 extend-filesystems[1695]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 19:17:00.568954 extend-filesystems[1695]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 19:17:00.577591 extend-filesystems[1631]: Resized filesystem in /dev/nvme0n1p9 Feb 9 19:17:00.588365 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:17:00.588764 systemd[1]: Finished extend-filesystems.service. 
Feb 9 19:17:00.600244 env[1647]: time="2024-02-09T19:17:00.597441830Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:17:00.600801 tar[1642]: ./bandwidth Feb 9 19:17:00.639449 systemd-logind[1638]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 19:17:00.643957 systemd-logind[1638]: New seat seat0. Feb 9 19:17:00.656380 systemd[1]: Started systemd-logind.service. Feb 9 19:17:00.742564 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 19:17:00.798745 env[1647]: time="2024-02-09T19:17:00.798678395Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:17:00.802224 env[1647]: time="2024-02-09T19:17:00.802165293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:17:00.805128 env[1647]: time="2024-02-09T19:17:00.805057481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:17:00.809763 env[1647]: time="2024-02-09T19:17:00.809630329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:17:00.810476 env[1647]: time="2024-02-09T19:17:00.810426790Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:17:00.820048 env[1647]: time="2024-02-09T19:17:00.819987716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:17:00.820279 env[1647]: time="2024-02-09T19:17:00.820233506Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:17:00.820411 env[1647]: time="2024-02-09T19:17:00.820379340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:17:00.820756 env[1647]: time="2024-02-09T19:17:00.820721093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:17:00.825870 env[1647]: time="2024-02-09T19:17:00.825774712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:17:00.835105 env[1647]: time="2024-02-09T19:17:00.835042686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:17:00.835310 env[1647]: time="2024-02-09T19:17:00.835272453Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:17:00.838127 env[1647]: time="2024-02-09T19:17:00.838068300Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:17:00.839171 env[1647]: time="2024-02-09T19:17:00.839125214Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:17:00.839475 dbus-daemon[1629]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 19:17:00.839751 systemd[1]: Started systemd-hostnamed.service. 
Feb 9 19:17:00.841100 dbus-daemon[1629]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1677 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 19:17:00.847102 systemd[1]: Starting polkit.service... Feb 9 19:17:00.855746 env[1647]: time="2024-02-09T19:17:00.855617888Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:17:00.855918 env[1647]: time="2024-02-09T19:17:00.855749828Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:17:00.855918 env[1647]: time="2024-02-09T19:17:00.855807521Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:17:00.855918 env[1647]: time="2024-02-09T19:17:00.855880608Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:17:00.856109 env[1647]: time="2024-02-09T19:17:00.855922959Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:17:00.856109 env[1647]: time="2024-02-09T19:17:00.855959364Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:17:00.856109 env[1647]: time="2024-02-09T19:17:00.855993174Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.856566709Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.856653514Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.856690095Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.856731274Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.856763371Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.857077310Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.857263391Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.857694291Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.857747954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.857785303Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.857962743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.858001944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.858060419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 9 19:17:00.858862 env[1647]: time="2024-02-09T19:17:00.858102706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:17:00.859675 env[1647]: time="2024-02-09T19:17:00.858160236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:17:00.859675 env[1647]: time="2024-02-09T19:17:00.858194815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:17:00.859675 env[1647]: time="2024-02-09T19:17:00.858254322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:17:00.859675 env[1647]: time="2024-02-09T19:17:00.858289745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:17:00.859675 env[1647]: time="2024-02-09T19:17:00.858354568Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:17:00.859675 env[1647]: time="2024-02-09T19:17:00.858836146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:17:00.859675 env[1647]: time="2024-02-09T19:17:00.858915544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:17:00.859675 env[1647]: time="2024-02-09T19:17:00.858952113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:17:00.859675 env[1647]: time="2024-02-09T19:17:00.859008735Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:17:00.859675 env[1647]: time="2024-02-09T19:17:00.859047017Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:17:00.859675 env[1647]: time="2024-02-09T19:17:00.859107621Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:17:00.859675 env[1647]: time="2024-02-09T19:17:00.859147414Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:17:00.859675 env[1647]: time="2024-02-09T19:17:00.859246325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:17:00.860375 env[1647]: time="2024-02-09T19:17:00.859688953Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:17:00.860375 env[1647]: time="2024-02-09T19:17:00.859832482Z" level=info msg="Connect containerd service" Feb 9 19:17:00.860375 env[1647]: time="2024-02-09T19:17:00.859903969Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:17:00.861584 env[1647]: time="2024-02-09T19:17:00.861322161Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:17:00.862436 env[1647]: time="2024-02-09T19:17:00.861805338Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:17:00.862436 env[1647]: time="2024-02-09T19:17:00.861965923Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 19:17:00.875927 env[1647]: time="2024-02-09T19:17:00.873409324Z" level=info msg="Start subscribing containerd event" Feb 9 19:17:00.875927 env[1647]: time="2024-02-09T19:17:00.873548267Z" level=info msg="Start recovering state" Feb 9 19:17:00.875927 env[1647]: time="2024-02-09T19:17:00.873677737Z" level=info msg="Start event monitor" Feb 9 19:17:00.875927 env[1647]: time="2024-02-09T19:17:00.873718803Z" level=info msg="Start snapshots syncer" Feb 9 19:17:00.875927 env[1647]: time="2024-02-09T19:17:00.873744551Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:17:00.875927 env[1647]: time="2024-02-09T19:17:00.873778071Z" level=info msg="Start streaming server" Feb 9 19:17:00.874335 systemd[1]: Started containerd.service. Feb 9 19:17:00.877343 env[1647]: time="2024-02-09T19:17:00.876920131Z" level=info msg="containerd successfully booted in 0.343105s" Feb 9 19:17:00.900902 polkitd[1730]: Started polkitd version 121 Feb 9 19:17:00.911934 tar[1642]: ./ptp Feb 9 19:17:00.930641 polkitd[1730]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 19:17:00.942588 polkitd[1730]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 19:17:00.957896 polkitd[1730]: Finished loading, compiling and executing 2 rules Feb 9 19:17:00.959402 dbus-daemon[1629]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 19:17:00.959693 systemd[1]: Started polkit.service. Feb 9 19:17:00.963292 polkitd[1730]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 19:17:00.995397 systemd-hostnamed[1677]: Hostname set to (transient) Feb 9 19:17:00.995563 systemd-resolved[1602]: System hostname changed to 'ip-172-31-23-38'. 
Feb 9 19:17:01.127667 tar[1642]: ./vlan Feb 9 19:17:01.167115 coreos-metadata[1628]: Feb 09 19:17:01.165 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 19:17:01.173181 coreos-metadata[1628]: Feb 09 19:17:01.172 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 19:17:01.174268 coreos-metadata[1628]: Feb 09 19:17:01.174 INFO Fetch successful Feb 9 19:17:01.174582 coreos-metadata[1628]: Feb 09 19:17:01.174 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:17:01.175786 coreos-metadata[1628]: Feb 09 19:17:01.175 INFO Fetch successful Feb 9 19:17:01.178762 unknown[1628]: wrote ssh authorized keys file for user: core Feb 9 19:17:01.210262 update-ssh-keys[1774]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:17:01.211418 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 19:17:01.284404 tar[1642]: ./host-device Feb 9 19:17:01.400517 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO Create new startup processor Feb 9 19:17:01.403739 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 19:17:01.403739 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO Initializing bookkeeping folders Feb 9 19:17:01.403959 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO removing the completed state files Feb 9 19:17:01.404074 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO Initializing bookkeeping folders for long running plugins Feb 9 19:17:01.404166 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 19:17:01.404166 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO Initializing healthcheck folders for long running plugins Feb 9 19:17:01.404166 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO Initializing locations for inventory plugin Feb 9 19:17:01.404343 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO Initializing 
default location for custom inventory Feb 9 19:17:01.404343 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO Initializing default location for file inventory Feb 9 19:17:01.404343 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO Initializing default location for role inventory Feb 9 19:17:01.404343 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO Init the cloudwatchlogs publisher Feb 9 19:17:01.404565 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [instanceID=i-0bbf4d7380d04fa6a] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 19:17:01.404565 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [instanceID=i-0bbf4d7380d04fa6a] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 19:17:01.404565 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [instanceID=i-0bbf4d7380d04fa6a] Successfully loaded platform independent plugin aws:configurePackage Feb 9 19:17:01.404565 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [instanceID=i-0bbf4d7380d04fa6a] Successfully loaded platform independent plugin aws:downloadContent Feb 9 19:17:01.404565 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [instanceID=i-0bbf4d7380d04fa6a] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 19:17:01.404813 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [instanceID=i-0bbf4d7380d04fa6a] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 19:17:01.404813 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [instanceID=i-0bbf4d7380d04fa6a] Successfully loaded platform independent plugin aws:configureDocker Feb 9 19:17:01.404813 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [instanceID=i-0bbf4d7380d04fa6a] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 19:17:01.404813 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [instanceID=i-0bbf4d7380d04fa6a] Successfully loaded platform independent plugin aws:runDocument Feb 9 19:17:01.404813 amazon-ssm-agent[1666]: 2024-02-09 
19:17:01 INFO [instanceID=i-0bbf4d7380d04fa6a] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 19:17:01.404813 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 19:17:01.404813 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO OS: linux, Arch: arm64 Feb 9 19:17:01.423912 amazon-ssm-agent[1666]: datastore file /var/lib/amazon/ssm/i-0bbf4d7380d04fa6a/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 19:17:01.434865 tar[1642]: ./tuning Feb 9 19:17:01.521897 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessagingDeliveryService] Starting document processing engine... Feb 9 19:17:01.616499 tar[1642]: ./vrf Feb 9 19:17:01.633927 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 19:17:01.728286 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 19:17:01.729930 tar[1642]: ./sbr Feb 9 19:17:01.818555 tar[1642]: ./tap Feb 9 19:17:01.822892 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessageGatewayService] Starting session document processing engine... Feb 9 19:17:01.917738 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 9 19:17:01.929957 tar[1642]: ./dhcp Feb 9 19:17:02.012621 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 9 19:17:02.107802 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0bbf4d7380d04fa6a, requestId: 55ee1fdd-d1e0-48e5-bffe-3502fff539ff Feb 9 19:17:02.203215 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [OfflineService] Starting document processing engine... Feb 9 19:17:02.210511 tar[1642]: ./static Feb 9 19:17:02.235605 systemd[1]: Finished prepare-critools.service. 
Feb 9 19:17:02.263325 tar[1642]: ./firewall Feb 9 19:17:02.298763 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [OfflineService] [EngineProcessor] Starting Feb 9 19:17:02.321546 tar[1642]: ./macvlan Feb 9 19:17:02.375003 tar[1642]: ./dummy Feb 9 19:17:02.394623 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [OfflineService] [EngineProcessor] Initial processing Feb 9 19:17:02.427923 tar[1642]: ./bridge Feb 9 19:17:02.485408 tar[1642]: ./ipvlan Feb 9 19:17:02.490587 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 9 19:17:02.537917 tar[1642]: ./portmap Feb 9 19:17:02.586749 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 9 19:17:02.589334 tar[1642]: ./host-local Feb 9 19:17:02.655924 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:17:02.683127 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessageGatewayService] listening reply. 
Feb 9 19:17:02.735655 locksmithd[1694]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:17:02.779713 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessagingDeliveryService] Starting message polling Feb 9 19:17:02.876464 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 19:17:02.973379 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [instanceID=i-0bbf4d7380d04fa6a] Starting association polling Feb 9 19:17:03.070202 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 19:17:03.167324 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 9 19:17:03.265098 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 9 19:17:03.365700 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 9 19:17:03.463278 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 9 19:17:03.561144 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 9 19:17:03.660430 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [OfflineService] Starting message polling Feb 9 19:17:03.758622 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [OfflineService] Starting send replies to MDS Feb 9 19:17:03.857106 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [HealthCheck] HealthCheck reporting agent health. 
Feb 9 19:17:03.955929 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [StartupProcessor] Executing startup processor tasks Feb 9 19:17:04.055144 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 9 19:17:04.154126 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 9 19:17:04.253455 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 9 19:17:04.352929 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0bbf4d7380d04fa6a?role=subscribe&stream=input Feb 9 19:17:04.452583 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0bbf4d7380d04fa6a?role=subscribe&stream=input Feb 9 19:17:04.552436 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessageGatewayService] Starting receiving message from control channel Feb 9 19:17:04.652403 amazon-ssm-agent[1666]: 2024-02-09 19:17:01 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 9 19:17:04.752624 amazon-ssm-agent[1666]: 2024-02-09 19:17:02 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Feb 9 19:17:06.991004 sshd_keygen[1656]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:17:07.027421 systemd[1]: Finished sshd-keygen.service. Feb 9 19:17:07.032680 systemd[1]: Starting issuegen.service... Feb 9 19:17:07.043406 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:17:07.043774 systemd[1]: Finished issuegen.service. Feb 9 19:17:07.048501 systemd[1]: Starting systemd-user-sessions.service... 
Feb 9 19:17:07.063021 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:17:07.068094 systemd[1]: Started getty@tty1.service. Feb 9 19:17:07.072671 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:17:07.075143 systemd[1]: Reached target getty.target. Feb 9 19:17:07.077006 systemd[1]: Reached target multi-user.target. Feb 9 19:17:07.081736 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:17:07.096515 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:17:07.096899 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:17:07.099156 systemd[1]: Startup finished in 1.136s (kernel) + 10.466s (initrd) + 15.039s (userspace) = 26.643s. Feb 9 19:17:09.488471 systemd[1]: Created slice system-sshd.slice. Feb 9 19:17:09.491008 systemd[1]: Started sshd@0-172.31.23.38:22-147.75.109.163:60214.service. Feb 9 19:17:09.679409 sshd[1842]: Accepted publickey for core from 147.75.109.163 port 60214 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:09.684103 sshd[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:09.700900 systemd[1]: Created slice user-500.slice. Feb 9 19:17:09.703308 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:17:09.710917 systemd-logind[1638]: New session 1 of user core. Feb 9 19:17:09.723031 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:17:09.727415 systemd[1]: Starting user@500.service... Feb 9 19:17:09.735416 (systemd)[1845]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:09.911164 systemd[1845]: Queued start job for default target default.target. Feb 9 19:17:09.913634 systemd[1845]: Reached target paths.target. Feb 9 19:17:09.913888 systemd[1845]: Reached target sockets.target. Feb 9 19:17:09.914069 systemd[1845]: Reached target timers.target. Feb 9 19:17:09.914219 systemd[1845]: Reached target basic.target. 
Feb 9 19:17:09.914444 systemd[1845]: Reached target default.target. Feb 9 19:17:09.914553 systemd[1]: Started user@500.service. Feb 9 19:17:09.915238 systemd[1845]: Startup finished in 167ms. Feb 9 19:17:09.917678 systemd[1]: Started session-1.scope. Feb 9 19:17:10.069498 systemd[1]: Started sshd@1-172.31.23.38:22-147.75.109.163:60230.service. Feb 9 19:17:10.261331 sshd[1854]: Accepted publickey for core from 147.75.109.163 port 60230 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:10.263698 sshd[1854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:10.272869 systemd[1]: Started session-2.scope. Feb 9 19:17:10.273610 systemd-logind[1638]: New session 2 of user core. Feb 9 19:17:10.408903 sshd[1854]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:10.414748 systemd[1]: sshd@1-172.31.23.38:22-147.75.109.163:60230.service: Deactivated successfully. Feb 9 19:17:10.416106 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:17:10.417321 systemd-logind[1638]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:17:10.419318 systemd-logind[1638]: Removed session 2. Feb 9 19:17:10.435872 systemd[1]: Started sshd@2-172.31.23.38:22-147.75.109.163:60246.service. Feb 9 19:17:10.609853 sshd[1860]: Accepted publickey for core from 147.75.109.163 port 60246 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:10.612907 sshd[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:10.621004 systemd-logind[1638]: New session 3 of user core. Feb 9 19:17:10.621368 systemd[1]: Started session-3.scope. Feb 9 19:17:10.744187 sshd[1860]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:10.748954 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:17:10.750178 systemd[1]: sshd@2-172.31.23.38:22-147.75.109.163:60246.service: Deactivated successfully. 
Feb 9 19:17:10.751731 systemd-logind[1638]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:17:10.754285 systemd-logind[1638]: Removed session 3. Feb 9 19:17:10.772764 systemd[1]: Started sshd@3-172.31.23.38:22-147.75.109.163:60260.service. Feb 9 19:17:10.955556 sshd[1866]: Accepted publickey for core from 147.75.109.163 port 60260 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:10.958524 sshd[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:10.967487 systemd[1]: Started session-4.scope. Feb 9 19:17:10.968454 systemd-logind[1638]: New session 4 of user core. Feb 9 19:17:11.101826 sshd[1866]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:11.107425 systemd-logind[1638]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:17:11.109975 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:17:11.111054 systemd[1]: sshd@3-172.31.23.38:22-147.75.109.163:60260.service: Deactivated successfully. Feb 9 19:17:11.112996 systemd-logind[1638]: Removed session 4. Feb 9 19:17:11.130158 systemd[1]: Started sshd@4-172.31.23.38:22-147.75.109.163:60276.service. Feb 9 19:17:11.308027 sshd[1872]: Accepted publickey for core from 147.75.109.163 port 60276 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:11.310381 sshd[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:11.318080 systemd-logind[1638]: New session 5 of user core. Feb 9 19:17:11.319029 systemd[1]: Started session-5.scope. Feb 9 19:17:11.436296 sudo[1875]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:17:11.437309 sudo[1875]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:17:12.075905 systemd[1]: Reloading. 
Feb 9 19:17:12.202411 /usr/lib/systemd/system-generators/torcx-generator[1904]: time="2024-02-09T19:17:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:17:12.205897 /usr/lib/systemd/system-generators/torcx-generator[1904]: time="2024-02-09T19:17:12Z" level=info msg="torcx already run" Feb 9 19:17:12.371067 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:17:12.371107 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:17:12.410050 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:17:12.606674 systemd[1]: Started kubelet.service. Feb 9 19:17:12.628971 systemd[1]: Starting coreos-metadata.service... Feb 9 19:17:12.733263 kubelet[1959]: E0209 19:17:12.733144 1959 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 19:17:12.737024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:17:12.737360 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 9 19:17:12.802439 coreos-metadata[1967]: Feb 09 19:17:12.802 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 19:17:12.803877 coreos-metadata[1967]: Feb 09 19:17:12.803 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Feb 9 19:17:12.804604 coreos-metadata[1967]: Feb 09 19:17:12.804 INFO Fetch successful Feb 9 19:17:12.804693 coreos-metadata[1967]: Feb 09 19:17:12.804 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Feb 9 19:17:12.805357 coreos-metadata[1967]: Feb 09 19:17:12.805 INFO Fetch successful Feb 9 19:17:12.805444 coreos-metadata[1967]: Feb 09 19:17:12.805 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Feb 9 19:17:12.806026 coreos-metadata[1967]: Feb 09 19:17:12.805 INFO Fetch successful Feb 9 19:17:12.806133 coreos-metadata[1967]: Feb 09 19:17:12.806 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Feb 9 19:17:12.806994 coreos-metadata[1967]: Feb 09 19:17:12.806 INFO Fetch successful Feb 9 19:17:12.807080 coreos-metadata[1967]: Feb 09 19:17:12.806 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Feb 9 19:17:12.807680 coreos-metadata[1967]: Feb 09 19:17:12.807 INFO Fetch successful Feb 9 19:17:12.807760 coreos-metadata[1967]: Feb 09 19:17:12.807 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Feb 9 19:17:12.808426 coreos-metadata[1967]: Feb 09 19:17:12.808 INFO Fetch successful Feb 9 19:17:12.808532 coreos-metadata[1967]: Feb 09 19:17:12.808 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Feb 9 19:17:12.809126 coreos-metadata[1967]: Feb 09 19:17:12.809 INFO Fetch successful Feb 9 19:17:12.809202 coreos-metadata[1967]: Feb 09 19:17:12.809 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Feb 9 19:17:12.809832 coreos-metadata[1967]: Feb 09 
19:17:12.809 INFO Fetch successful Feb 9 19:17:12.824251 systemd[1]: Finished coreos-metadata.service. Feb 9 19:17:13.272536 systemd[1]: Stopped kubelet.service. Feb 9 19:17:13.302125 systemd[1]: Reloading. Feb 9 19:17:13.454733 /usr/lib/systemd/system-generators/torcx-generator[2032]: time="2024-02-09T19:17:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:17:13.463247 /usr/lib/systemd/system-generators/torcx-generator[2032]: time="2024-02-09T19:17:13Z" level=info msg="torcx already run" Feb 9 19:17:13.596633 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:17:13.596889 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:17:13.635522 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:17:13.840400 systemd[1]: Started kubelet.service. Feb 9 19:17:13.923948 kubelet[2079]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:17:13.923948 kubelet[2079]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 9 19:17:13.923948 kubelet[2079]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:17:13.924647 kubelet[2079]: I0209 19:17:13.924295 2079 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:17:15.651257 kubelet[2079]: I0209 19:17:15.651207 2079 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 19:17:15.651257 kubelet[2079]: I0209 19:17:15.651259 2079 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:17:15.652033 kubelet[2079]: I0209 19:17:15.651597 2079 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 19:17:15.658272 kubelet[2079]: W0209 19:17:15.658214 2079 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 19:17:15.658581 kubelet[2079]: I0209 19:17:15.658217 2079 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:17:15.659441 kubelet[2079]: I0209 19:17:15.659393 2079 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:17:15.659965 kubelet[2079]: I0209 19:17:15.659923 2079 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:17:15.660086 kubelet[2079]: I0209 19:17:15.660059 2079 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:17:15.660236 kubelet[2079]: I0209 19:17:15.660106 2079 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:17:15.660236 kubelet[2079]: I0209 19:17:15.660132 2079 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 19:17:15.660368 kubelet[2079]: I0209 19:17:15.660303 2079 state_mem.go:36] "Initialized new in-memory state store" Feb 9 
19:17:15.664887 kubelet[2079]: I0209 19:17:15.664842 2079 kubelet.go:405] "Attempting to sync node with API server" Feb 9 19:17:15.664887 kubelet[2079]: I0209 19:17:15.664888 2079 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:17:15.665125 kubelet[2079]: I0209 19:17:15.664935 2079 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:17:15.665125 kubelet[2079]: I0209 19:17:15.664966 2079 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:17:15.665841 kubelet[2079]: E0209 19:17:15.665783 2079 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:15.665960 kubelet[2079]: E0209 19:17:15.665925 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:15.667240 kubelet[2079]: I0209 19:17:15.667207 2079 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:17:15.668103 kubelet[2079]: W0209 19:17:15.668079 2079 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:17:15.669231 kubelet[2079]: I0209 19:17:15.669202 2079 server.go:1168] "Started kubelet" Feb 9 19:17:15.672241 kubelet[2079]: E0209 19:17:15.672190 2079 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:17:15.672241 kubelet[2079]: E0209 19:17:15.672246 2079 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:17:15.676272 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 19:17:15.676898 kubelet[2079]: I0209 19:17:15.676848 2079 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:17:15.686933 kubelet[2079]: I0209 19:17:15.686889 2079 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:17:15.688232 kubelet[2079]: I0209 19:17:15.688190 2079 server.go:461] "Adding debug handlers to kubelet server" Feb 9 19:17:15.690255 kubelet[2079]: I0209 19:17:15.690216 2079 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 19:17:15.694114 kubelet[2079]: E0209 19:17:15.693228 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfe4d3404b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 669168203, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 669168203, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:17:15.694114 kubelet[2079]: W0209 19:17:15.693665 2079 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.23.38" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:17:15.694114 kubelet[2079]: E0209 19:17:15.693701 2079 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.23.38" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:17:15.697027 kubelet[2079]: W0209 19:17:15.693777 2079 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:17:15.697027 kubelet[2079]: E0209 19:17:15.693830 2079 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:17:15.697027 kubelet[2079]: I0209 19:17:15.696273 2079 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 19:17:15.697027 kubelet[2079]: I0209 19:17:15.696416 2079 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 19:17:15.701974 kubelet[2079]: E0209 19:17:15.701826 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfe501e5e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 672225255, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 672225255, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:17:15.702208 kubelet[2079]: W0209 19:17:15.702052 2079 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:17:15.702208 kubelet[2079]: E0209 19:17:15.702087 2079 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:17:15.702208 kubelet[2079]: E0209 19:17:15.702165 2079 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.23.38\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 9 19:17:15.760868 kubelet[2079]: I0209 19:17:15.760828 2079 cpu_manager.go:214] "Starting CPU manager" 
policy="none" Feb 9 19:17:15.761115 kubelet[2079]: E0209 19:17:15.760978 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfea2fe2df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.38 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759125215, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759125215, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:17:15.761330 kubelet[2079]: I0209 19:17:15.761077 2079 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:17:15.761525 kubelet[2079]: I0209 19:17:15.761502 2079 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:17:15.763270 kubelet[2079]: E0209 19:17:15.763133 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfea300d88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.38 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759136136, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759136136, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:17:15.767059 kubelet[2079]: E0209 19:17:15.765944 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfea302728", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.38 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759142696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759142696, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:17:15.767885 kubelet[2079]: I0209 19:17:15.767849 2079 policy_none.go:49] "None policy: Start" Feb 9 19:17:15.769237 kubelet[2079]: I0209 19:17:15.769143 2079 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:17:15.769237 kubelet[2079]: I0209 19:17:15.769197 2079 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:17:15.779659 systemd[1]: Created slice kubepods.slice. Feb 9 19:17:15.789188 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 9 19:17:15.795970 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 19:17:15.800022 kubelet[2079]: I0209 19:17:15.799974 2079 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.38" Feb 9 19:17:15.803288 kubelet[2079]: E0209 19:17:15.803251 2079 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.38" Feb 9 19:17:15.805106 kubelet[2079]: E0209 19:17:15.803918 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfea2fe2df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.38 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759125215, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 799892386, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.38.17b247dfea2fe2df" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:17:15.805980 kubelet[2079]: I0209 19:17:15.805942 2079 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:17:15.806345 kubelet[2079]: I0209 19:17:15.806316 2079 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:17:15.808221 kubelet[2079]: E0209 19:17:15.807877 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfea300d88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.38 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759136136, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 799900903, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.38.17b247dfea300d88" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:17:15.810324 kubelet[2079]: E0209 19:17:15.809936 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfea302728", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.38 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759142696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 799908646, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.38.17b247dfea302728" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:17:15.811479 kubelet[2079]: E0209 19:17:15.811446 2079 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.23.38\" not found" Feb 9 19:17:15.813939 kubelet[2079]: E0209 19:17:15.813730 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfed4152ab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 810599595, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 810599595, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:17:15.869188 kubelet[2079]: I0209 19:17:15.869144 2079 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:17:15.871445 kubelet[2079]: I0209 19:17:15.871395 2079 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:17:15.871445 kubelet[2079]: I0209 19:17:15.871450 2079 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 19:17:15.871666 kubelet[2079]: I0209 19:17:15.871486 2079 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 19:17:15.871666 kubelet[2079]: E0209 19:17:15.871580 2079 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:17:15.878207 kubelet[2079]: W0209 19:17:15.878157 2079 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:17:15.878207 kubelet[2079]: E0209 19:17:15.878211 2079 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:17:15.908628 kubelet[2079]: E0209 19:17:15.908505 2079 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.23.38\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 9 19:17:16.005278 kubelet[2079]: I0209 19:17:16.005236 2079 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.38" Feb 9 19:17:16.013969 kubelet[2079]: E0209 19:17:16.013917 2079 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.38" Feb 9 19:17:16.014281 kubelet[2079]: E0209 19:17:16.014153 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", 
APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfea2fe2df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.38 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759125215, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 16, 5188998, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.38.17b247dfea2fe2df" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:17:16.018651 kubelet[2079]: E0209 19:17:16.018545 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfea300d88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.38 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759136136, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 16, 5196638, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.38.17b247dfea300d88" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:17:16.020770 kubelet[2079]: E0209 19:17:16.020667 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfea302728", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.38 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759142696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 16, 5201696, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.38.17b247dfea302728" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:17:16.318931 kubelet[2079]: E0209 19:17:16.318895 2079 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.23.38\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 9 19:17:16.415777 kubelet[2079]: I0209 19:17:16.415737 2079 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.38" Feb 9 19:17:16.418013 kubelet[2079]: E0209 19:17:16.417978 2079 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.38" Feb 9 19:17:16.418318 kubelet[2079]: E0209 19:17:16.417954 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfea2fe2df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.38 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759125215, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 16, 415648043, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.38.17b247dfea2fe2df" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:17:16.421333 kubelet[2079]: E0209 19:17:16.421213 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfea300d88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.38 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759136136, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 16, 415671520, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.38.17b247dfea300d88" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:17:16.423038 kubelet[2079]: E0209 19:17:16.422909 2079 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.38.17b247dfea302728", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.38", UID:"172.31.23.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.38 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.38"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 15, 759142696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 16, 415677567, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.38.17b247dfea302728" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:17:16.658429 kubelet[2079]: I0209 19:17:16.658298 2079 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 19:17:16.666591 kubelet[2079]: E0209 19:17:16.666542 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:17.053652 kubelet[2079]: E0209 19:17:17.053500 2079 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.23.38" not found Feb 9 19:17:17.125436 kubelet[2079]: E0209 19:17:17.125401 2079 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.23.38\" not found" node="172.31.23.38" Feb 9 19:17:17.219523 kubelet[2079]: I0209 19:17:17.219488 2079 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.38" Feb 9 19:17:17.227037 kubelet[2079]: I0209 19:17:17.227003 2079 kubelet_node_status.go:73] "Successfully registered node" node="172.31.23.38" Feb 9 19:17:17.250038 kubelet[2079]: I0209 19:17:17.249960 2079 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 19:17:17.250711 env[1647]: time="2024-02-09T19:17:17.250627869Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:17:17.252076 kubelet[2079]: I0209 19:17:17.252047 2079 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 19:17:17.381244 sudo[1875]: pam_unix(sudo:session): session closed for user root Feb 9 19:17:17.406153 sshd[1872]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:17.411360 systemd-logind[1638]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:17:17.411770 systemd[1]: sshd@4-172.31.23.38:22-147.75.109.163:60276.service: Deactivated successfully. 
Feb 9 19:17:17.413100 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:17:17.415274 systemd-logind[1638]: Removed session 5. Feb 9 19:17:17.667539 kubelet[2079]: E0209 19:17:17.667420 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:17.668395 kubelet[2079]: I0209 19:17:17.668368 2079 apiserver.go:52] "Watching apiserver" Feb 9 19:17:17.672434 kubelet[2079]: I0209 19:17:17.672395 2079 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:17:17.672734 kubelet[2079]: I0209 19:17:17.672709 2079 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:17:17.683978 systemd[1]: Created slice kubepods-besteffort-pod9c62a343_e282_4cc8_8298_3e29f4949395.slice. Feb 9 19:17:17.699532 kubelet[2079]: I0209 19:17:17.697918 2079 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 19:17:17.699239 systemd[1]: Created slice kubepods-burstable-podaf5f6ec6_8de2_4d22_b616_62f9b8fa3e05.slice. 
Feb 9 19:17:17.707255 kubelet[2079]: I0209 19:17:17.707212 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-host-proc-sys-net\") pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.707445 kubelet[2079]: I0209 19:17:17.707286 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-hubble-tls\") pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.707445 kubelet[2079]: I0209 19:17:17.707336 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cni-path\") pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.707445 kubelet[2079]: I0209 19:17:17.707378 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-etc-cni-netd\") pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.707445 kubelet[2079]: I0209 19:17:17.707423 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-xtables-lock\") pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.707687 kubelet[2079]: I0209 19:17:17.707465 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-host-proc-sys-kernel\") pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.707687 kubelet[2079]: I0209 19:17:17.707510 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4n2h\" (UniqueName: \"kubernetes.io/projected/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-kube-api-access-k4n2h\") pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.707687 kubelet[2079]: I0209 19:17:17.707555 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c62a343-e282-4cc8-8298-3e29f4949395-kube-proxy\") pod \"kube-proxy-b75pl\" (UID: \"9c62a343-e282-4cc8-8298-3e29f4949395\") " pod="kube-system/kube-proxy-b75pl" Feb 9 19:17:17.707687 kubelet[2079]: I0209 19:17:17.707597 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cilium-run\") pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.707687 kubelet[2079]: I0209 19:17:17.707637 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cilium-cgroup\") pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.707687 kubelet[2079]: I0209 19:17:17.707678 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-lib-modules\") 
pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.708867 kubelet[2079]: I0209 19:17:17.707746 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c62a343-e282-4cc8-8298-3e29f4949395-xtables-lock\") pod \"kube-proxy-b75pl\" (UID: \"9c62a343-e282-4cc8-8298-3e29f4949395\") " pod="kube-system/kube-proxy-b75pl" Feb 9 19:17:17.708867 kubelet[2079]: I0209 19:17:17.707827 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c62a343-e282-4cc8-8298-3e29f4949395-lib-modules\") pod \"kube-proxy-b75pl\" (UID: \"9c62a343-e282-4cc8-8298-3e29f4949395\") " pod="kube-system/kube-proxy-b75pl" Feb 9 19:17:17.708867 kubelet[2079]: I0209 19:17:17.707877 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-hostproc\") pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.708867 kubelet[2079]: I0209 19:17:17.707924 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cilium-config-path\") pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.708867 kubelet[2079]: I0209 19:17:17.707981 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4krw\" (UniqueName: \"kubernetes.io/projected/9c62a343-e282-4cc8-8298-3e29f4949395-kube-api-access-p4krw\") pod \"kube-proxy-b75pl\" (UID: \"9c62a343-e282-4cc8-8298-3e29f4949395\") " pod="kube-system/kube-proxy-b75pl" Feb 
9 19:17:17.708867 kubelet[2079]: I0209 19:17:17.708023 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-bpf-maps\") pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.709200 kubelet[2079]: I0209 19:17:17.708067 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-clustermesh-secrets\") pod \"cilium-6kxbg\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") " pod="kube-system/cilium-6kxbg" Feb 9 19:17:17.709200 kubelet[2079]: I0209 19:17:17.708085 2079 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:17:17.998248 env[1647]: time="2024-02-09T19:17:17.994844424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b75pl,Uid:9c62a343-e282-4cc8-8298-3e29f4949395,Namespace:kube-system,Attempt:0,}" Feb 9 19:17:18.011527 env[1647]: time="2024-02-09T19:17:18.011458583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6kxbg,Uid:af5f6ec6-8de2-4d22-b616-62f9b8fa3e05,Namespace:kube-system,Attempt:0,}" Feb 9 19:17:18.533810 env[1647]: time="2024-02-09T19:17:18.533719385Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:18.536077 env[1647]: time="2024-02-09T19:17:18.536029935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:18.539991 env[1647]: time="2024-02-09T19:17:18.539922506Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:18.542051 env[1647]: time="2024-02-09T19:17:18.542004549Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:18.546152 env[1647]: time="2024-02-09T19:17:18.546085873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:18.547581 env[1647]: time="2024-02-09T19:17:18.547542834Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:18.551866 env[1647]: time="2024-02-09T19:17:18.551781318Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:18.555780 env[1647]: time="2024-02-09T19:17:18.555704783Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:18.598047 env[1647]: time="2024-02-09T19:17:18.597920113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:17:18.598223 env[1647]: time="2024-02-09T19:17:18.598072138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:17:18.598223 env[1647]: time="2024-02-09T19:17:18.598161457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:17:18.598556 env[1647]: time="2024-02-09T19:17:18.598475788Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f pid=2141 runtime=io.containerd.runc.v2 Feb 9 19:17:18.602997 env[1647]: time="2024-02-09T19:17:18.602820431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:17:18.602997 env[1647]: time="2024-02-09T19:17:18.602907979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:17:18.603337 env[1647]: time="2024-02-09T19:17:18.602935341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:17:18.603802 env[1647]: time="2024-02-09T19:17:18.603697464Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/562186ff4cc3950de24aaf2023c4461091ff4db5c49a8c9b13114f8a007b2174 pid=2142 runtime=io.containerd.runc.v2 Feb 9 19:17:18.624897 systemd[1]: Started cri-containerd-8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f.scope. Feb 9 19:17:18.650496 systemd[1]: Started cri-containerd-562186ff4cc3950de24aaf2023c4461091ff4db5c49a8c9b13114f8a007b2174.scope. 
Feb 9 19:17:18.668702 kubelet[2079]: E0209 19:17:18.668610 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:18.718635 env[1647]: time="2024-02-09T19:17:18.718555026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6kxbg,Uid:af5f6ec6-8de2-4d22-b616-62f9b8fa3e05,Namespace:kube-system,Attempt:0,} returns sandbox id \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\"" Feb 9 19:17:18.722143 env[1647]: time="2024-02-09T19:17:18.722071441Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:17:18.727018 env[1647]: time="2024-02-09T19:17:18.726955028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b75pl,Uid:9c62a343-e282-4cc8-8298-3e29f4949395,Namespace:kube-system,Attempt:0,} returns sandbox id \"562186ff4cc3950de24aaf2023c4461091ff4db5c49a8c9b13114f8a007b2174\"" Feb 9 19:17:18.824691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2906620786.mount: Deactivated successfully. 
Feb 9 19:17:19.669575 kubelet[2079]: E0209 19:17:19.669455 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:20.670537 kubelet[2079]: E0209 19:17:20.670459 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:21.670769 kubelet[2079]: E0209 19:17:21.670666 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:22.671247 kubelet[2079]: E0209 19:17:22.671180 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:23.672243 kubelet[2079]: E0209 19:17:23.672154 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:24.672709 kubelet[2079]: E0209 19:17:24.672607 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:25.659796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3307243000.mount: Deactivated successfully. 
Feb 9 19:17:25.673350 kubelet[2079]: E0209 19:17:25.673289 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:26.674141 kubelet[2079]: E0209 19:17:26.674084 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:27.674960 kubelet[2079]: E0209 19:17:27.674883 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:28.675495 kubelet[2079]: E0209 19:17:28.675339 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:29.563741 env[1647]: time="2024-02-09T19:17:29.563654896Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:29.567757 env[1647]: time="2024-02-09T19:17:29.567695381Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:29.576507 env[1647]: time="2024-02-09T19:17:29.576423137Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 19:17:29.579766 env[1647]: time="2024-02-09T19:17:29.579482220Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 19:17:29.580136 env[1647]: time="2024-02-09T19:17:29.580090468Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:29.586033 env[1647]: time="2024-02-09T19:17:29.585964518Z" level=info msg="CreateContainer within sandbox \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:17:29.605326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630827491.mount: Deactivated successfully. Feb 9 19:17:29.614579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100062724.mount: Deactivated successfully. Feb 9 19:17:29.624982 env[1647]: time="2024-02-09T19:17:29.624916914Z" level=info msg="CreateContainer within sandbox \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e\"" Feb 9 19:17:29.626228 env[1647]: time="2024-02-09T19:17:29.626156560Z" level=info msg="StartContainer for \"339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e\"" Feb 9 19:17:29.662105 systemd[1]: Started cri-containerd-339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e.scope. Feb 9 19:17:29.675598 kubelet[2079]: E0209 19:17:29.675528 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:29.729084 env[1647]: time="2024-02-09T19:17:29.729002704Z" level=info msg="StartContainer for \"339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e\" returns successfully" Feb 9 19:17:29.744110 systemd[1]: cri-containerd-339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e.scope: Deactivated successfully. 
Feb 9 19:17:30.599907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e-rootfs.mount: Deactivated successfully. Feb 9 19:17:30.676876 kubelet[2079]: E0209 19:17:30.676814 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:31.021860 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 19:17:31.393532 env[1647]: time="2024-02-09T19:17:31.393464975Z" level=info msg="shim disconnected" id=339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e Feb 9 19:17:31.394264 env[1647]: time="2024-02-09T19:17:31.394220517Z" level=warning msg="cleaning up after shim disconnected" id=339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e namespace=k8s.io Feb 9 19:17:31.394388 env[1647]: time="2024-02-09T19:17:31.394360332Z" level=info msg="cleaning up dead shim" Feb 9 19:17:31.440456 env[1647]: time="2024-02-09T19:17:31.440392442Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2263 runtime=io.containerd.runc.v2\n" Feb 9 19:17:31.677957 kubelet[2079]: E0209 19:17:31.677357 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:31.927215 env[1647]: time="2024-02-09T19:17:31.927133086Z" level=info msg="CreateContainer within sandbox \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:17:31.955084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4111073187.mount: Deactivated successfully. Feb 9 19:17:31.967824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881681371.mount: Deactivated successfully. 
Feb 9 19:17:31.986677 env[1647]: time="2024-02-09T19:17:31.986591407Z" level=info msg="CreateContainer within sandbox \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0\"" Feb 9 19:17:31.987769 env[1647]: time="2024-02-09T19:17:31.987680244Z" level=info msg="StartContainer for \"301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0\"" Feb 9 19:17:32.040673 systemd[1]: Started cri-containerd-301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0.scope. Feb 9 19:17:32.112018 env[1647]: time="2024-02-09T19:17:32.111955978Z" level=info msg="StartContainer for \"301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0\" returns successfully" Feb 9 19:17:32.130000 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:17:32.131706 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:17:32.132015 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:17:32.140089 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:17:32.140894 systemd[1]: cri-containerd-301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0.scope: Deactivated successfully. Feb 9 19:17:32.158807 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:17:32.289445 env[1647]: time="2024-02-09T19:17:32.288588764Z" level=info msg="shim disconnected" id=301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0 Feb 9 19:17:32.289857 env[1647]: time="2024-02-09T19:17:32.289815067Z" level=warning msg="cleaning up after shim disconnected" id=301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0 namespace=k8s.io Feb 9 19:17:32.289995 env[1647]: time="2024-02-09T19:17:32.289966780Z" level=info msg="cleaning up dead shim" Feb 9 19:17:32.305115 env[1647]: time="2024-02-09T19:17:32.305055902Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2330 runtime=io.containerd.runc.v2\n" Feb 9 19:17:32.677917 kubelet[2079]: E0209 19:17:32.677873 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:32.833618 amazon-ssm-agent[1666]: 2024-02-09 19:17:32 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 19:17:32.921281 env[1647]: time="2024-02-09T19:17:32.921223744Z" level=info msg="CreateContainer within sandbox \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:17:32.948962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0-rootfs.mount: Deactivated successfully. Feb 9 19:17:32.949145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777982229.mount: Deactivated successfully. 
Feb 9 19:17:32.954610 env[1647]: time="2024-02-09T19:17:32.954548149Z" level=info msg="CreateContainer within sandbox \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866\"" Feb 9 19:17:32.956830 env[1647]: time="2024-02-09T19:17:32.956746307Z" level=info msg="StartContainer for \"5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866\"" Feb 9 19:17:33.011122 systemd[1]: Started cri-containerd-5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866.scope. Feb 9 19:17:33.061806 env[1647]: time="2024-02-09T19:17:33.061725541Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:33.067578 env[1647]: time="2024-02-09T19:17:33.067504284Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:33.072427 env[1647]: time="2024-02-09T19:17:33.072348365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:33.081829 env[1647]: time="2024-02-09T19:17:33.081733715Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:33.082653 env[1647]: time="2024-02-09T19:17:33.082604067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef\"" Feb 9 19:17:33.087229 env[1647]: 
time="2024-02-09T19:17:33.087153911Z" level=info msg="CreateContainer within sandbox \"562186ff4cc3950de24aaf2023c4461091ff4db5c49a8c9b13114f8a007b2174\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:17:33.111885 systemd[1]: cri-containerd-5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866.scope: Deactivated successfully. Feb 9 19:17:33.113692 env[1647]: time="2024-02-09T19:17:33.113611100Z" level=info msg="StartContainer for \"5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866\" returns successfully" Feb 9 19:17:33.127887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649117675.mount: Deactivated successfully. Feb 9 19:17:33.138129 env[1647]: time="2024-02-09T19:17:33.138048422Z" level=info msg="CreateContainer within sandbox \"562186ff4cc3950de24aaf2023c4461091ff4db5c49a8c9b13114f8a007b2174\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c01f351f9a6f30003caf27945b58c8476992b4c2de109d845887fc218c560657\"" Feb 9 19:17:33.139111 env[1647]: time="2024-02-09T19:17:33.139045163Z" level=info msg="StartContainer for \"c01f351f9a6f30003caf27945b58c8476992b4c2de109d845887fc218c560657\"" Feb 9 19:17:33.187317 systemd[1]: Started cri-containerd-c01f351f9a6f30003caf27945b58c8476992b4c2de109d845887fc218c560657.scope. 
Feb 9 19:17:33.282429 env[1647]: time="2024-02-09T19:17:33.282292185Z" level=info msg="StartContainer for \"c01f351f9a6f30003caf27945b58c8476992b4c2de109d845887fc218c560657\" returns successfully" Feb 9 19:17:33.380170 env[1647]: time="2024-02-09T19:17:33.380104175Z" level=info msg="shim disconnected" id=5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866 Feb 9 19:17:33.380655 env[1647]: time="2024-02-09T19:17:33.380607656Z" level=warning msg="cleaning up after shim disconnected" id=5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866 namespace=k8s.io Feb 9 19:17:33.380869 env[1647]: time="2024-02-09T19:17:33.380774073Z" level=info msg="cleaning up dead shim" Feb 9 19:17:33.403686 env[1647]: time="2024-02-09T19:17:33.403621954Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2442 runtime=io.containerd.runc.v2\n" Feb 9 19:17:33.678363 kubelet[2079]: E0209 19:17:33.678271 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:33.930156 env[1647]: time="2024-02-09T19:17:33.929318318Z" level=info msg="CreateContainer within sandbox \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:17:33.950653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866-rootfs.mount: Deactivated successfully. 
Feb 9 19:17:33.961530 env[1647]: time="2024-02-09T19:17:33.961340491Z" level=info msg="CreateContainer within sandbox \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff\"" Feb 9 19:17:33.962515 env[1647]: time="2024-02-09T19:17:33.962449102Z" level=info msg="StartContainer for \"bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff\"" Feb 9 19:17:33.963181 kubelet[2079]: I0209 19:17:33.963120 2079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-b75pl" podStartSLOduration=2.608576463 podCreationTimestamp="2024-02-09 19:17:17 +0000 UTC" firstStartedPulling="2024-02-09 19:17:18.729011601 +0000 UTC m=+4.881406439" lastFinishedPulling="2024-02-09 19:17:33.08348405 +0000 UTC m=+19.235878900" observedRunningTime="2024-02-09 19:17:33.93822052 +0000 UTC m=+20.090615394" watchObservedRunningTime="2024-02-09 19:17:33.963048924 +0000 UTC m=+20.115443798" Feb 9 19:17:33.997491 systemd[1]: run-containerd-runc-k8s.io-bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff-runc.lIDK7b.mount: Deactivated successfully. Feb 9 19:17:34.009059 systemd[1]: Started cri-containerd-bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff.scope. Feb 9 19:17:34.065036 systemd[1]: cri-containerd-bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff.scope: Deactivated successfully. 
Feb 9 19:17:34.069505 env[1647]: time="2024-02-09T19:17:34.069360985Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf5f6ec6_8de2_4d22_b616_62f9b8fa3e05.slice/cri-containerd-bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff.scope/memory.events\": no such file or directory" Feb 9 19:17:34.072746 env[1647]: time="2024-02-09T19:17:34.072689540Z" level=info msg="StartContainer for \"bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff\" returns successfully" Feb 9 19:17:34.103977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff-rootfs.mount: Deactivated successfully. Feb 9 19:17:34.120227 env[1647]: time="2024-02-09T19:17:34.120154489Z" level=info msg="shim disconnected" id=bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff Feb 9 19:17:34.120227 env[1647]: time="2024-02-09T19:17:34.120222881Z" level=warning msg="cleaning up after shim disconnected" id=bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff namespace=k8s.io Feb 9 19:17:34.120563 env[1647]: time="2024-02-09T19:17:34.120245687Z" level=info msg="cleaning up dead shim" Feb 9 19:17:34.134042 env[1647]: time="2024-02-09T19:17:34.133962874Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2602 runtime=io.containerd.runc.v2\n" Feb 9 19:17:34.678527 kubelet[2079]: E0209 19:17:34.678462 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:34.941040 env[1647]: time="2024-02-09T19:17:34.940550174Z" level=info msg="CreateContainer within sandbox \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:17:34.978672 
env[1647]: time="2024-02-09T19:17:34.978605282Z" level=info msg="CreateContainer within sandbox \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\"" Feb 9 19:17:34.980072 env[1647]: time="2024-02-09T19:17:34.980000813Z" level=info msg="StartContainer for \"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\"" Feb 9 19:17:35.016462 systemd[1]: Started cri-containerd-5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982.scope. Feb 9 19:17:35.087320 env[1647]: time="2024-02-09T19:17:35.087256437Z" level=info msg="StartContainer for \"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\" returns successfully" Feb 9 19:17:35.311906 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 19:17:35.340266 kubelet[2079]: I0209 19:17:35.339163 2079 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:17:35.665394 kubelet[2079]: E0209 19:17:35.665261 2079 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:35.679576 kubelet[2079]: E0209 19:17:35.679525 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:35.960115 kernel: Initializing XFRM netlink socket Feb 9 19:17:35.966880 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 19:17:35.970424 kubelet[2079]: I0209 19:17:35.970371 2079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6kxbg" podStartSLOduration=8.112948865 podCreationTimestamp="2024-02-09 19:17:17 +0000 UTC" firstStartedPulling="2024-02-09 19:17:18.72110293 +0000 UTC m=+4.873497768" lastFinishedPulling="2024-02-09 19:17:29.578434376 +0000 UTC m=+15.730829238" observedRunningTime="2024-02-09 19:17:35.967412904 +0000 UTC m=+22.119807766" watchObservedRunningTime="2024-02-09 19:17:35.970280335 +0000 UTC m=+22.122675197" Feb 9 19:17:36.680182 kubelet[2079]: E0209 19:17:36.680121 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:37.681106 kubelet[2079]: E0209 19:17:37.681065 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:37.753180 systemd-networkd[1460]: cilium_host: Link UP Feb 9 19:17:37.761590 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:17:37.761722 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:17:37.755950 systemd-networkd[1460]: cilium_net: Link UP Feb 9 19:17:37.758375 systemd-networkd[1460]: cilium_net: Gained carrier Feb 9 19:17:37.763295 (udev-worker)[2496]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:17:37.764545 systemd-networkd[1460]: cilium_host: Gained carrier Feb 9 19:17:37.765437 (udev-worker)[2738]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:17:37.921050 (udev-worker)[2495]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 19:17:37.936095 systemd-networkd[1460]: cilium_vxlan: Link UP Feb 9 19:17:37.936110 systemd-networkd[1460]: cilium_vxlan: Gained carrier Feb 9 19:17:38.134963 systemd-networkd[1460]: cilium_host: Gained IPv6LL Feb 9 19:17:38.398842 kernel: NET: Registered PF_ALG protocol family Feb 9 19:17:38.407011 systemd-networkd[1460]: cilium_net: Gained IPv6LL Feb 9 19:17:38.682222 kubelet[2079]: E0209 19:17:38.682020 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:39.558995 systemd-networkd[1460]: cilium_vxlan: Gained IPv6LL Feb 9 19:17:39.667840 systemd-networkd[1460]: lxc_health: Link UP Feb 9 19:17:39.680259 systemd-networkd[1460]: lxc_health: Gained carrier Feb 9 19:17:39.681421 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:17:39.682981 kubelet[2079]: E0209 19:17:39.682915 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:40.684076 kubelet[2079]: E0209 19:17:40.684000 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:41.287085 systemd-networkd[1460]: lxc_health: Gained IPv6LL Feb 9 19:17:41.374466 kubelet[2079]: I0209 19:17:41.374395 2079 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 19:17:41.651974 kubelet[2079]: I0209 19:17:41.651608 2079 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:17:41.662333 systemd[1]: Created slice kubepods-besteffort-pod55a7d9e0_0a5c_4d6a_a65c_c6e422026b73.slice. 
Feb 9 19:17:41.663432 kubelet[2079]: I0209 19:17:41.663385 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75qhk\" (UniqueName: \"kubernetes.io/projected/55a7d9e0-0a5c-4d6a-a65c-c6e422026b73-kube-api-access-75qhk\") pod \"nginx-deployment-845c78c8b9-nrpqt\" (UID: \"55a7d9e0-0a5c-4d6a-a65c-c6e422026b73\") " pod="default/nginx-deployment-845c78c8b9-nrpqt" Feb 9 19:17:41.684866 kubelet[2079]: E0209 19:17:41.684805 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:41.976490 env[1647]: time="2024-02-09T19:17:41.975115483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-nrpqt,Uid:55a7d9e0-0a5c-4d6a-a65c-c6e422026b73,Namespace:default,Attempt:0,}" Feb 9 19:17:42.058390 systemd-networkd[1460]: lxcb80728559b70: Link UP Feb 9 19:17:42.067920 kernel: eth0: renamed from tmp19b7c Feb 9 19:17:42.081283 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:17:42.081410 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb80728559b70: link becomes ready Feb 9 19:17:42.082681 systemd-networkd[1460]: lxcb80728559b70: Gained carrier Feb 9 19:17:42.685966 kubelet[2079]: E0209 19:17:42.685904 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:43.686591 kubelet[2079]: E0209 19:17:43.686542 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:44.039181 systemd-networkd[1460]: lxcb80728559b70: Gained IPv6LL Feb 9 19:17:44.688170 kubelet[2079]: E0209 19:17:44.688127 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:45.466424 update_engine[1639]: I0209 19:17:45.465868 1639 update_attempter.cc:509] Updating boot flags... 
Feb 9 19:17:45.704843 kubelet[2079]: E0209 19:17:45.691381 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:46.692449 kubelet[2079]: E0209 19:17:46.692404 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:47.692921 kubelet[2079]: E0209 19:17:47.692873 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:48.694611 kubelet[2079]: E0209 19:17:48.694559 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:49.076859 env[1647]: time="2024-02-09T19:17:49.076667833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:17:49.077484 env[1647]: time="2024-02-09T19:17:49.076775895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:17:49.077484 env[1647]: time="2024-02-09T19:17:49.076887906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:17:49.077663 env[1647]: time="2024-02-09T19:17:49.077612021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/19b7c8803d45ab118e0f146787611a2bb9c69d04df81d0138c6c6c9cd5b5d7c8 pid=3381 runtime=io.containerd.runc.v2 Feb 9 19:17:49.111528 systemd[1]: run-containerd-runc-k8s.io-19b7c8803d45ab118e0f146787611a2bb9c69d04df81d0138c6c6c9cd5b5d7c8-runc.THlALO.mount: Deactivated successfully. Feb 9 19:17:49.122289 systemd[1]: Started cri-containerd-19b7c8803d45ab118e0f146787611a2bb9c69d04df81d0138c6c6c9cd5b5d7c8.scope. 
Feb 9 19:17:49.185869 env[1647]: time="2024-02-09T19:17:49.185771276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-nrpqt,Uid:55a7d9e0-0a5c-4d6a-a65c-c6e422026b73,Namespace:default,Attempt:0,} returns sandbox id \"19b7c8803d45ab118e0f146787611a2bb9c69d04df81d0138c6c6c9cd5b5d7c8\"" Feb 9 19:17:49.188913 env[1647]: time="2024-02-09T19:17:49.188860297Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:17:49.695739 kubelet[2079]: E0209 19:17:49.695681 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:50.696397 kubelet[2079]: E0209 19:17:50.696322 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:51.697286 kubelet[2079]: E0209 19:17:51.697220 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:52.698138 kubelet[2079]: E0209 19:17:52.698059 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:53.122741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount23222819.mount: Deactivated successfully. 
Feb 9 19:17:53.698480 kubelet[2079]: E0209 19:17:53.698416 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:17:54.641365 env[1647]: time="2024-02-09T19:17:54.641285988Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:54.644456 env[1647]: time="2024-02-09T19:17:54.644407275Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:54.647668 env[1647]: time="2024-02-09T19:17:54.647619126Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:54.650635 env[1647]: time="2024-02-09T19:17:54.650582713Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:54.652453 env[1647]: time="2024-02-09T19:17:54.652231670Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 9 19:17:54.657105 env[1647]: time="2024-02-09T19:17:54.657039304Z" level=info msg="CreateContainer within sandbox \"19b7c8803d45ab118e0f146787611a2bb9c69d04df81d0138c6c6c9cd5b5d7c8\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 19:17:54.696138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2527460198.mount: Deactivated successfully. 
Feb 9 19:17:54.699497 kubelet[2079]: E0209 19:17:54.699299 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:17:54.706245 env[1647]: time="2024-02-09T19:17:54.706181855Z" level=info msg="CreateContainer within sandbox \"19b7c8803d45ab118e0f146787611a2bb9c69d04df81d0138c6c6c9cd5b5d7c8\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"20dfb4bddfbba5c98fcabddc1abad6b3d20501e71c936cc1d69aa8c08d131549\""
Feb 9 19:17:54.707659 env[1647]: time="2024-02-09T19:17:54.707591166Z" level=info msg="StartContainer for \"20dfb4bddfbba5c98fcabddc1abad6b3d20501e71c936cc1d69aa8c08d131549\""
Feb 9 19:17:54.742668 systemd[1]: Started cri-containerd-20dfb4bddfbba5c98fcabddc1abad6b3d20501e71c936cc1d69aa8c08d131549.scope.
Feb 9 19:17:54.815067 env[1647]: time="2024-02-09T19:17:54.814990001Z" level=info msg="StartContainer for \"20dfb4bddfbba5c98fcabddc1abad6b3d20501e71c936cc1d69aa8c08d131549\" returns successfully"
Feb 9 19:17:55.023704 kubelet[2079]: I0209 19:17:55.022556 2079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-nrpqt" podStartSLOduration=8.557775071 podCreationTimestamp="2024-02-09 19:17:41 +0000 UTC" firstStartedPulling="2024-02-09 19:17:49.188166684 +0000 UTC m=+35.340561522" lastFinishedPulling="2024-02-09 19:17:54.652872132 +0000 UTC m=+40.805266994" observedRunningTime="2024-02-09 19:17:55.022048572 +0000 UTC m=+41.174443446" watchObservedRunningTime="2024-02-09 19:17:55.022480543 +0000 UTC m=+41.174875405"
Feb 9 19:17:55.666041 kubelet[2079]: E0209 19:17:55.665981 2079 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:17:55.700224 kubelet[2079]: E0209 19:17:55.700182 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:17:56.701102 kubelet[2079]: E0209 19:17:56.701044 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:17:57.702196 kubelet[2079]: E0209 19:17:57.702154 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:17:58.703198 kubelet[2079]: E0209 19:17:58.703133 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:17:59.703613 kubelet[2079]: E0209 19:17:59.703574 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:00.705277 kubelet[2079]: E0209 19:18:00.705233 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:00.863293 kubelet[2079]: I0209 19:18:00.863216 2079 topology_manager.go:212] "Topology Admit Handler"
Feb 9 19:18:00.872882 systemd[1]: Created slice kubepods-besteffort-pod10c1c09e_6b18_42d4_b711_e63960e0901d.slice.
Feb 9 19:18:00.888348 kubelet[2079]: I0209 19:18:00.888310 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/10c1c09e-6b18-42d4-b711-e63960e0901d-data\") pod \"nfs-server-provisioner-0\" (UID: \"10c1c09e-6b18-42d4-b711-e63960e0901d\") " pod="default/nfs-server-provisioner-0"
Feb 9 19:18:00.888579 kubelet[2079]: I0209 19:18:00.888556 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkffh\" (UniqueName: \"kubernetes.io/projected/10c1c09e-6b18-42d4-b711-e63960e0901d-kube-api-access-qkffh\") pod \"nfs-server-provisioner-0\" (UID: \"10c1c09e-6b18-42d4-b711-e63960e0901d\") " pod="default/nfs-server-provisioner-0"
Feb 9 19:18:01.184693 env[1647]: time="2024-02-09T19:18:01.184034166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:10c1c09e-6b18-42d4-b711-e63960e0901d,Namespace:default,Attempt:0,}"
Feb 9 19:18:01.233970 systemd-networkd[1460]: lxc8ef53cc737c7: Link UP
Feb 9 19:18:01.239837 kernel: eth0: renamed from tmp18924
Feb 9 19:18:01.253365 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:18:01.253543 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8ef53cc737c7: link becomes ready
Feb 9 19:18:01.253736 (udev-worker)[3490]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:18:01.253845 systemd-networkd[1460]: lxc8ef53cc737c7: Gained carrier
Feb 9 19:18:01.682691 env[1647]: time="2024-02-09T19:18:01.682334309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:18:01.682691 env[1647]: time="2024-02-09T19:18:01.682414510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:18:01.682691 env[1647]: time="2024-02-09T19:18:01.682440831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:18:01.683047 env[1647]: time="2024-02-09T19:18:01.682844656Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/189247f8817159ec2ef784eed1623b4837a010f9e4b7941430590c8e6eb18716 pid=3506 runtime=io.containerd.runc.v2
Feb 9 19:18:01.706910 kubelet[2079]: E0209 19:18:01.706848 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:01.721672 systemd[1]: Started cri-containerd-189247f8817159ec2ef784eed1623b4837a010f9e4b7941430590c8e6eb18716.scope.
Feb 9 19:18:01.793984 env[1647]: time="2024-02-09T19:18:01.793877222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:10c1c09e-6b18-42d4-b711-e63960e0901d,Namespace:default,Attempt:0,} returns sandbox id \"189247f8817159ec2ef784eed1623b4837a010f9e4b7941430590c8e6eb18716\""
Feb 9 19:18:01.799530 env[1647]: time="2024-02-09T19:18:01.799277330Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 9 19:18:02.707925 kubelet[2079]: E0209 19:18:02.707848 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:03.047213 systemd-networkd[1460]: lxc8ef53cc737c7: Gained IPv6LL
Feb 9 19:18:03.708412 kubelet[2079]: E0209 19:18:03.708343 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:04.709430 kubelet[2079]: E0209 19:18:04.709353 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:05.197034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201647435.mount: Deactivated successfully.
Feb 9 19:18:05.710195 kubelet[2079]: E0209 19:18:05.710129 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:06.710543 kubelet[2079]: E0209 19:18:06.710481 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:07.711545 kubelet[2079]: E0209 19:18:07.711470 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:08.712543 kubelet[2079]: E0209 19:18:08.712475 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:08.713220 env[1647]: time="2024-02-09T19:18:08.712478987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:08.715825 env[1647]: time="2024-02-09T19:18:08.715736709Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:08.718881 env[1647]: time="2024-02-09T19:18:08.718832836Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:08.721962 env[1647]: time="2024-02-09T19:18:08.721915268Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:08.723512 env[1647]: time="2024-02-09T19:18:08.723467862Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 9 19:18:08.727867 env[1647]: time="2024-02-09T19:18:08.727768715Z" level=info msg="CreateContainer within sandbox \"189247f8817159ec2ef784eed1623b4837a010f9e4b7941430590c8e6eb18716\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 9 19:18:08.749144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2117543229.mount: Deactivated successfully.
Feb 9 19:18:08.761060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3190401594.mount: Deactivated successfully.
Feb 9 19:18:08.771716 env[1647]: time="2024-02-09T19:18:08.771600471Z" level=info msg="CreateContainer within sandbox \"189247f8817159ec2ef784eed1623b4837a010f9e4b7941430590c8e6eb18716\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"56576e9bdfdb08397d85930b470e6f45848ab823647ac92ffc998e6042233ffd\""
Feb 9 19:18:08.773098 env[1647]: time="2024-02-09T19:18:08.773033297Z" level=info msg="StartContainer for \"56576e9bdfdb08397d85930b470e6f45848ab823647ac92ffc998e6042233ffd\""
Feb 9 19:18:08.806635 systemd[1]: Started cri-containerd-56576e9bdfdb08397d85930b470e6f45848ab823647ac92ffc998e6042233ffd.scope.
Feb 9 19:18:08.876970 env[1647]: time="2024-02-09T19:18:08.876877542Z" level=info msg="StartContainer for \"56576e9bdfdb08397d85930b470e6f45848ab823647ac92ffc998e6042233ffd\" returns successfully"
Feb 9 19:18:09.056628 kubelet[2079]: I0209 19:18:09.056455 2079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.130268549 podCreationTimestamp="2024-02-09 19:18:00 +0000 UTC" firstStartedPulling="2024-02-09 19:18:01.79788674 +0000 UTC m=+47.950281590" lastFinishedPulling="2024-02-09 19:18:08.723987345 +0000 UTC m=+54.876382183" observedRunningTime="2024-02-09 19:18:09.054838459 +0000 UTC m=+55.207233345" watchObservedRunningTime="2024-02-09 19:18:09.056369142 +0000 UTC m=+55.208764004"
Feb 9 19:18:09.713650 kubelet[2079]: E0209 19:18:09.713579 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:10.714068 kubelet[2079]: E0209 19:18:10.714025 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:11.715695 kubelet[2079]: E0209 19:18:11.715653 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:12.717244 kubelet[2079]: E0209 19:18:12.717199 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:13.718655 kubelet[2079]: E0209 19:18:13.718571 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:14.719776 kubelet[2079]: E0209 19:18:14.719729 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:15.666088 kubelet[2079]: E0209 19:18:15.665991 2079 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:15.720802 kubelet[2079]: E0209 19:18:15.720730 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:16.721939 kubelet[2079]: E0209 19:18:16.721870 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:17.722572 kubelet[2079]: E0209 19:18:17.722509 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:18.502939 kubelet[2079]: I0209 19:18:18.502896 2079 topology_manager.go:212] "Topology Admit Handler"
Feb 9 19:18:18.512727 systemd[1]: Created slice kubepods-besteffort-podb1656d65_a4dc_4baa_af9a_bab14a4a47c6.slice.
Feb 9 19:18:18.694349 kubelet[2079]: I0209 19:18:18.694285 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqfht\" (UniqueName: \"kubernetes.io/projected/b1656d65-a4dc-4baa-af9a-bab14a4a47c6-kube-api-access-pqfht\") pod \"test-pod-1\" (UID: \"b1656d65-a4dc-4baa-af9a-bab14a4a47c6\") " pod="default/test-pod-1"
Feb 9 19:18:18.694553 kubelet[2079]: I0209 19:18:18.694473 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-447c3d49-1d83-4f55-8aef-2f49c4188276\" (UniqueName: \"kubernetes.io/nfs/b1656d65-a4dc-4baa-af9a-bab14a4a47c6-pvc-447c3d49-1d83-4f55-8aef-2f49c4188276\") pod \"test-pod-1\" (UID: \"b1656d65-a4dc-4baa-af9a-bab14a4a47c6\") " pod="default/test-pod-1"
Feb 9 19:18:18.723329 kubelet[2079]: E0209 19:18:18.723222 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:18.836816 kernel: FS-Cache: Loaded
Feb 9 19:18:18.881845 kernel: RPC: Registered named UNIX socket transport module.
Feb 9 19:18:18.882013 kernel: RPC: Registered udp transport module.
Feb 9 19:18:18.883922 kernel: RPC: Registered tcp transport module.
Feb 9 19:18:18.886328 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 9 19:18:18.941395 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 9 19:18:19.203575 kernel: NFS: Registering the id_resolver key type
Feb 9 19:18:19.203754 kernel: Key type id_resolver registered
Feb 9 19:18:19.205391 kernel: Key type id_legacy registered
Feb 9 19:18:19.247932 nfsidmap[3627]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 9 19:18:19.254046 nfsidmap[3628]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 9 19:18:19.419956 env[1647]: time="2024-02-09T19:18:19.419868626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b1656d65-a4dc-4baa-af9a-bab14a4a47c6,Namespace:default,Attempt:0,}"
Feb 9 19:18:19.470829 (udev-worker)[3612]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:18:19.472105 systemd-networkd[1460]: lxc270a79d01277: Link UP
Feb 9 19:18:19.482552 (udev-worker)[3626]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:18:19.484822 kernel: eth0: renamed from tmpaa181
Feb 9 19:18:19.494909 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:18:19.495057 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc270a79d01277: link becomes ready
Feb 9 19:18:19.495083 systemd-networkd[1460]: lxc270a79d01277: Gained carrier
Feb 9 19:18:19.723823 kubelet[2079]: E0209 19:18:19.723629 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:19.916730 env[1647]: time="2024-02-09T19:18:19.916373504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:18:19.916730 env[1647]: time="2024-02-09T19:18:19.916446654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:18:19.916730 env[1647]: time="2024-02-09T19:18:19.916473070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:18:19.917439 env[1647]: time="2024-02-09T19:18:19.917356085Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa181fcdbd9ba8e837e1719080bcb544672bec42832cd52335e4faaa5c6eb9b8 pid=3657 runtime=io.containerd.runc.v2
Feb 9 19:18:19.956908 systemd[1]: Started cri-containerd-aa181fcdbd9ba8e837e1719080bcb544672bec42832cd52335e4faaa5c6eb9b8.scope.
Feb 9 19:18:20.028198 env[1647]: time="2024-02-09T19:18:20.028116525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b1656d65-a4dc-4baa-af9a-bab14a4a47c6,Namespace:default,Attempt:0,} returns sandbox id \"aa181fcdbd9ba8e837e1719080bcb544672bec42832cd52335e4faaa5c6eb9b8\""
Feb 9 19:18:20.031493 env[1647]: time="2024-02-09T19:18:20.031429324Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 19:18:20.372336 env[1647]: time="2024-02-09T19:18:20.372276793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:20.375493 env[1647]: time="2024-02-09T19:18:20.375446990Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:20.378318 env[1647]: time="2024-02-09T19:18:20.378274086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:20.381100 env[1647]: time="2024-02-09T19:18:20.381046431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:18:20.382550 env[1647]: time="2024-02-09T19:18:20.382505939Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 9 19:18:20.395328 env[1647]: time="2024-02-09T19:18:20.395258457Z" level=info msg="CreateContainer within sandbox \"aa181fcdbd9ba8e837e1719080bcb544672bec42832cd52335e4faaa5c6eb9b8\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 9 19:18:20.421642 env[1647]: time="2024-02-09T19:18:20.421555752Z" level=info msg="CreateContainer within sandbox \"aa181fcdbd9ba8e837e1719080bcb544672bec42832cd52335e4faaa5c6eb9b8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"0b3c214292970cdc24405f8f57a5b50f693167fb6155490d6a5ce5a99921681a\""
Feb 9 19:18:20.422882 env[1647]: time="2024-02-09T19:18:20.422835020Z" level=info msg="StartContainer for \"0b3c214292970cdc24405f8f57a5b50f693167fb6155490d6a5ce5a99921681a\""
Feb 9 19:18:20.454826 systemd[1]: Started cri-containerd-0b3c214292970cdc24405f8f57a5b50f693167fb6155490d6a5ce5a99921681a.scope.
Feb 9 19:18:20.513108 env[1647]: time="2024-02-09T19:18:20.513034880Z" level=info msg="StartContainer for \"0b3c214292970cdc24405f8f57a5b50f693167fb6155490d6a5ce5a99921681a\" returns successfully"
Feb 9 19:18:20.724801 kubelet[2079]: E0209 19:18:20.724630 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:20.967137 systemd-networkd[1460]: lxc270a79d01277: Gained IPv6LL
Feb 9 19:18:21.088877 kubelet[2079]: I0209 19:18:21.088667 2079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.736163189 podCreationTimestamp="2024-02-09 19:18:01 +0000 UTC" firstStartedPulling="2024-02-09 19:18:20.030613891 +0000 UTC m=+66.183008741" lastFinishedPulling="2024-02-09 19:18:20.383062835 +0000 UTC m=+66.535457685" observedRunningTime="2024-02-09 19:18:21.087004626 +0000 UTC m=+67.239399500" watchObservedRunningTime="2024-02-09 19:18:21.088612133 +0000 UTC m=+67.241006995"
Feb 9 19:18:21.725049 kubelet[2079]: E0209 19:18:21.724981 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:22.725939 kubelet[2079]: E0209 19:18:22.725875 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:23.726299 kubelet[2079]: E0209 19:18:23.726251 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:24.727685 kubelet[2079]: E0209 19:18:24.727639 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:25.728480 kubelet[2079]: E0209 19:18:25.728414 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:26.728681 kubelet[2079]: E0209 19:18:26.728603 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:18:26.932839 systemd[1]: run-containerd-runc-k8s.io-5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982-runc.SDjQPf.mount: Deactivated successfully.
Feb 9 19:18:26.963740 env[1647]: time="2024-02-09T19:18:26.963633698Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:18:26.974864 env[1647]: time="2024-02-09T19:18:26.974758937Z" level=info msg="StopContainer for \"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\" with timeout 1 (s)"
Feb 9 19:18:26.975542 env[1647]: time="2024-02-09T19:18:26.975492620Z" level=info msg="Stop container \"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\" with signal terminated"
Feb 9 19:18:26.988243 systemd-networkd[1460]: lxc_health: Link DOWN
Feb 9 19:18:26.988256 systemd-networkd[1460]: lxc_health: Lost carrier
Feb 9 19:18:27.024837 systemd[1]: cri-containerd-5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982.scope: Deactivated successfully.
Feb 9 19:18:27.025397 systemd[1]: cri-containerd-5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982.scope: Consumed 14.354s CPU time.
Feb 9 19:18:27.060211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982-rootfs.mount: Deactivated successfully.
Feb 9 19:18:27.353648 env[1647]: time="2024-02-09T19:18:27.353570009Z" level=info msg="shim disconnected" id=5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982
Feb 9 19:18:27.354079 env[1647]: time="2024-02-09T19:18:27.353647802Z" level=warning msg="cleaning up after shim disconnected" id=5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982 namespace=k8s.io
Feb 9 19:18:27.354079 env[1647]: time="2024-02-09T19:18:27.353682762Z" level=info msg="cleaning up dead shim"
Feb 9 19:18:27.366939 env[1647]: time="2024-02-09T19:18:27.366866344Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3786 runtime=io.containerd.runc.v2\n"
Feb 9 19:18:27.370676 env[1647]: time="2024-02-09T19:18:27.370606220Z" level=info msg="StopContainer for \"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\" returns successfully"
Feb 9 19:18:27.371673 env[1647]: time="2024-02-09T19:18:27.371626806Z" level=info msg="StopPodSandbox for \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\""
Feb 9 19:18:27.372130 env[1647]: time="2024-02-09T19:18:27.372057728Z" level=info msg="Container to stop \"301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:18:27.372314 env[1647]: time="2024-02-09T19:18:27.372280150Z" level=info msg="Container to stop \"bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:18:27.372461 env[1647]: time="2024-02-09T19:18:27.372427527Z" level=info msg="Container to stop \"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:18:27.372607 env[1647]: time="2024-02-09T19:18:27.372573020Z" level=info msg="Container to stop \"339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:18:27.372751 env[1647]: time="2024-02-09T19:18:27.372718885Z" level=info msg="Container to stop \"5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:18:27.375701 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f-shm.mount: Deactivated successfully.
Feb 9 19:18:27.387150 systemd[1]: cri-containerd-8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f.scope: Deactivated successfully.
Feb 9 19:18:27.424473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f-rootfs.mount: Deactivated successfully.
Feb 9 19:18:27.435737 env[1647]: time="2024-02-09T19:18:27.435672247Z" level=info msg="shim disconnected" id=8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f
Feb 9 19:18:27.436161 env[1647]: time="2024-02-09T19:18:27.436124783Z" level=warning msg="cleaning up after shim disconnected" id=8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f namespace=k8s.io
Feb 9 19:18:27.436299 env[1647]: time="2024-02-09T19:18:27.436270408Z" level=info msg="cleaning up dead shim"
Feb 9 19:18:27.450854 env[1647]: time="2024-02-09T19:18:27.450762010Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3817 runtime=io.containerd.runc.v2\n"
Feb 9 19:18:27.451702 env[1647]: time="2024-02-09T19:18:27.451656162Z" level=info msg="TearDown network for sandbox \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\" successfully"
Feb 9 19:18:27.451938 env[1647]: time="2024-02-09T19:18:27.451902095Z" level=info msg="StopPodSandbox for \"8307764023dd30ab648c9c9893aad7cc81b4f1ead5988a418edc85c2c1c6f79f\" returns successfully"
Feb 9 19:18:27.550433 kubelet[2079]: I0209 19:18:27.550380 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-hostproc\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.550733 kubelet[2079]: I0209 19:18:27.550711 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-bpf-maps\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.550939 kubelet[2079]: I0209 19:18:27.550898 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:18:27.551082 kubelet[2079]: I0209 19:18:27.550852 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-hostproc" (OuterVolumeSpecName: "hostproc") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:18:27.551082 kubelet[2079]: I0209 19:18:27.551070 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cni-path" (OuterVolumeSpecName: "cni-path") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:18:27.551491 kubelet[2079]: I0209 19:18:27.551237 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cni-path\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.551491 kubelet[2079]: I0209 19:18:27.551326 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-etc-cni-netd\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.551491 kubelet[2079]: I0209 19:18:27.551377 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-host-proc-sys-kernel\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.551491 kubelet[2079]: I0209 19:18:27.551401 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:18:27.551491 kubelet[2079]: I0209 19:18:27.551444 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-lib-modules\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.551852 kubelet[2079]: I0209 19:18:27.551454 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:18:27.551852 kubelet[2079]: I0209 19:18:27.551585 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:18:27.552519 kubelet[2079]: I0209 19:18:27.552037 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-clustermesh-secrets\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.552519 kubelet[2079]: I0209 19:18:27.552111 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-host-proc-sys-net\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.552519 kubelet[2079]: I0209 19:18:27.552180 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cilium-cgroup\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.552519 kubelet[2079]: I0209 19:18:27.552225 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cilium-run\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.552519 kubelet[2079]: I0209 19:18:27.552300 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cilium-config-path\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.552519 kubelet[2079]: I0209 19:18:27.552375 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-hubble-tls\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.552974 kubelet[2079]: I0209 19:18:27.552441 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-xtables-lock\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.552974 kubelet[2079]: I0209 19:18:27.552491 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4n2h\" (UniqueName: \"kubernetes.io/projected/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-kube-api-access-k4n2h\") pod \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\" (UID: \"af5f6ec6-8de2-4d22-b616-62f9b8fa3e05\") "
Feb 9 19:18:27.553479 kubelet[2079]: I0209 19:18:27.553236 2079 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cni-path\") on node \"172.31.23.38\" DevicePath \"\""
Feb 9 19:18:27.553479 kubelet[2079]: I0209 19:18:27.553273 2079 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-etc-cni-netd\") on node \"172.31.23.38\" DevicePath \"\""
Feb 9 19:18:27.553479 kubelet[2079]: I0209 19:18:27.553276 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:18:27.553479 kubelet[2079]: I0209 19:18:27.553322 2079 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-host-proc-sys-kernel\") on node \"172.31.23.38\" DevicePath \"\""
Feb 9 19:18:27.553479 kubelet[2079]: I0209 19:18:27.553334 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:18:27.553479 kubelet[2079]: I0209 19:18:27.553353 2079 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-hostproc\") on node \"172.31.23.38\" DevicePath \"\""
Feb 9 19:18:27.553902 kubelet[2079]: I0209 19:18:27.553380 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "cilium-cgroup".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:27.553902 kubelet[2079]: I0209 19:18:27.553401 2079 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-bpf-maps\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:27.553902 kubelet[2079]: I0209 19:18:27.553432 2079 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-lib-modules\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:27.554670 kubelet[2079]: I0209 19:18:27.554604 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:27.555433 kubelet[2079]: W0209 19:18:27.555345 2079 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:18:27.561035 kubelet[2079]: I0209 19:18:27.560974 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:18:27.562648 kubelet[2079]: I0209 19:18:27.562591 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:18:27.564169 kubelet[2079]: I0209 19:18:27.564097 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:18:27.566816 kubelet[2079]: I0209 19:18:27.566720 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-kube-api-access-k4n2h" (OuterVolumeSpecName: "kube-api-access-k4n2h") pod "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" (UID: "af5f6ec6-8de2-4d22-b616-62f9b8fa3e05"). InnerVolumeSpecName "kube-api-access-k4n2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:18:27.654211 kubelet[2079]: I0209 19:18:27.654079 2079 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-clustermesh-secrets\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:27.654211 kubelet[2079]: I0209 19:18:27.654129 2079 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-host-proc-sys-net\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:27.654211 kubelet[2079]: I0209 19:18:27.654156 2079 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cilium-cgroup\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:27.654211 kubelet[2079]: I0209 19:18:27.654181 2079 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cilium-config-path\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:27.654972 kubelet[2079]: I0209 19:18:27.654944 2079 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-hubble-tls\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:27.655178 kubelet[2079]: I0209 19:18:27.655156 2079 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-xtables-lock\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:27.655333 kubelet[2079]: I0209 19:18:27.655313 2079 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-k4n2h\" (UniqueName: \"kubernetes.io/projected/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-kube-api-access-k4n2h\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 
19:18:27.655478 kubelet[2079]: I0209 19:18:27.655459 2079 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05-cilium-run\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:27.729671 kubelet[2079]: E0209 19:18:27.729627 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:27.883958 systemd[1]: Removed slice kubepods-burstable-podaf5f6ec6_8de2_4d22_b616_62f9b8fa3e05.slice. Feb 9 19:18:27.884171 systemd[1]: kubepods-burstable-podaf5f6ec6_8de2_4d22_b616_62f9b8fa3e05.slice: Consumed 14.572s CPU time. Feb 9 19:18:27.925484 systemd[1]: var-lib-kubelet-pods-af5f6ec6\x2d8de2\x2d4d22\x2db616\x2d62f9b8fa3e05-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:18:27.925667 systemd[1]: var-lib-kubelet-pods-af5f6ec6\x2d8de2\x2d4d22\x2db616\x2d62f9b8fa3e05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk4n2h.mount: Deactivated successfully. Feb 9 19:18:27.925850 systemd[1]: var-lib-kubelet-pods-af5f6ec6\x2d8de2\x2d4d22\x2db616\x2d62f9b8fa3e05-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 9 19:18:28.111428 kubelet[2079]: I0209 19:18:28.111248 2079 scope.go:115] "RemoveContainer" containerID="5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982" Feb 9 19:18:28.115337 env[1647]: time="2024-02-09T19:18:28.114809217Z" level=info msg="RemoveContainer for \"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\"" Feb 9 19:18:28.119355 env[1647]: time="2024-02-09T19:18:28.119292009Z" level=info msg="RemoveContainer for \"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\" returns successfully" Feb 9 19:18:28.119941 kubelet[2079]: I0209 19:18:28.119909 2079 scope.go:115] "RemoveContainer" containerID="bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff" Feb 9 19:18:28.122347 env[1647]: time="2024-02-09T19:18:28.122299759Z" level=info msg="RemoveContainer for \"bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff\"" Feb 9 19:18:28.127353 env[1647]: time="2024-02-09T19:18:28.127295729Z" level=info msg="RemoveContainer for \"bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff\" returns successfully" Feb 9 19:18:28.127983 kubelet[2079]: I0209 19:18:28.127943 2079 scope.go:115] "RemoveContainer" containerID="5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866" Feb 9 19:18:28.130242 env[1647]: time="2024-02-09T19:18:28.130191782Z" level=info msg="RemoveContainer for \"5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866\"" Feb 9 19:18:28.135322 env[1647]: time="2024-02-09T19:18:28.135265654Z" level=info msg="RemoveContainer for \"5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866\" returns successfully" Feb 9 19:18:28.136759 kubelet[2079]: I0209 19:18:28.136698 2079 scope.go:115] "RemoveContainer" containerID="301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0" Feb 9 19:18:28.146268 env[1647]: time="2024-02-09T19:18:28.145861180Z" level=info msg="RemoveContainer for 
\"301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0\"" Feb 9 19:18:28.150073 env[1647]: time="2024-02-09T19:18:28.150016838Z" level=info msg="RemoveContainer for \"301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0\" returns successfully" Feb 9 19:18:28.150718 kubelet[2079]: I0209 19:18:28.150688 2079 scope.go:115] "RemoveContainer" containerID="339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e" Feb 9 19:18:28.152970 env[1647]: time="2024-02-09T19:18:28.152917332Z" level=info msg="RemoveContainer for \"339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e\"" Feb 9 19:18:28.156960 env[1647]: time="2024-02-09T19:18:28.156906566Z" level=info msg="RemoveContainer for \"339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e\" returns successfully" Feb 9 19:18:28.157482 kubelet[2079]: I0209 19:18:28.157455 2079 scope.go:115] "RemoveContainer" containerID="5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982" Feb 9 19:18:28.158337 env[1647]: time="2024-02-09T19:18:28.158171860Z" level=error msg="ContainerStatus for \"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\": not found" Feb 9 19:18:28.158712 kubelet[2079]: E0209 19:18:28.158685 2079 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\": not found" containerID="5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982" Feb 9 19:18:28.158927 kubelet[2079]: I0209 19:18:28.158899 2079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982} err="failed to get container status 
\"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\": rpc error: code = NotFound desc = an error occurred when try to find container \"5548357a117389cdcd7bc9c379d0a6d9be291e135190c5c2529226c5e49f4982\": not found" Feb 9 19:18:28.159076 kubelet[2079]: I0209 19:18:28.159040 2079 scope.go:115] "RemoveContainer" containerID="bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff" Feb 9 19:18:28.159566 env[1647]: time="2024-02-09T19:18:28.159482891Z" level=error msg="ContainerStatus for \"bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff\": not found" Feb 9 19:18:28.159957 kubelet[2079]: E0209 19:18:28.159931 2079 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff\": not found" containerID="bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff" Feb 9 19:18:28.160136 kubelet[2079]: I0209 19:18:28.160114 2079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff} err="failed to get container status \"bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"bebfe28bc5b66af93ad4530437d245cb537ae1b28c577eaa68cd479fd2a427ff\": not found" Feb 9 19:18:28.160254 kubelet[2079]: I0209 19:18:28.160233 2079 scope.go:115] "RemoveContainer" containerID="5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866" Feb 9 19:18:28.160668 env[1647]: time="2024-02-09T19:18:28.160593714Z" level=error msg="ContainerStatus for \"5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866\": not found" Feb 9 19:18:28.161024 kubelet[2079]: E0209 19:18:28.161001 2079 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866\": not found" containerID="5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866" Feb 9 19:18:28.161288 kubelet[2079]: I0209 19:18:28.161262 2079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866} err="failed to get container status \"5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f8b584bf9fc6f24562b11cce936fded5305fd5f776c795cd11456b166329866\": not found" Feb 9 19:18:28.161434 kubelet[2079]: I0209 19:18:28.161411 2079 scope.go:115] "RemoveContainer" containerID="301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0" Feb 9 19:18:28.161926 env[1647]: time="2024-02-09T19:18:28.161844774Z" level=error msg="ContainerStatus for \"301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0\": not found" Feb 9 19:18:28.162276 kubelet[2079]: E0209 19:18:28.162250 2079 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0\": not found" containerID="301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0" Feb 9 19:18:28.162424 kubelet[2079]: I0209 19:18:28.162402 2079 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0} err="failed to get container status \"301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0\": rpc error: code = NotFound desc = an error occurred when try to find container \"301245ff84342a71dc579d5772c80ca4ed34196f5c7fbce52c1e2b100971aba0\": not found" Feb 9 19:18:28.162566 kubelet[2079]: I0209 19:18:28.162544 2079 scope.go:115] "RemoveContainer" containerID="339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e" Feb 9 19:18:28.163020 env[1647]: time="2024-02-09T19:18:28.162945397Z" level=error msg="ContainerStatus for \"339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e\": not found" Feb 9 19:18:28.163476 kubelet[2079]: E0209 19:18:28.163411 2079 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e\": not found" containerID="339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e" Feb 9 19:18:28.163566 kubelet[2079]: I0209 19:18:28.163502 2079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e} err="failed to get container status \"339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e\": rpc error: code = NotFound desc = an error occurred when try to find container \"339c1b161ad4f3b13c39dd13be423eeaf6fb0d4555e9236bb342504263f4003e\": not found" Feb 9 19:18:28.730956 kubelet[2079]: E0209 19:18:28.730871 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Feb 9 19:18:29.731683 kubelet[2079]: E0209 19:18:29.731613 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:29.877761 kubelet[2079]: I0209 19:18:29.877703 2079 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=af5f6ec6-8de2-4d22-b616-62f9b8fa3e05 path="/var/lib/kubelet/pods/af5f6ec6-8de2-4d22-b616-62f9b8fa3e05/volumes" Feb 9 19:18:30.627087 kubelet[2079]: I0209 19:18:30.625632 2079 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:18:30.627087 kubelet[2079]: E0209 19:18:30.625734 2079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" containerName="mount-cgroup" Feb 9 19:18:30.627087 kubelet[2079]: E0209 19:18:30.625757 2079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" containerName="mount-bpf-fs" Feb 9 19:18:30.627087 kubelet[2079]: E0209 19:18:30.625777 2079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" containerName="cilium-agent" Feb 9 19:18:30.627087 kubelet[2079]: E0209 19:18:30.625819 2079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" containerName="apply-sysctl-overwrites" Feb 9 19:18:30.627087 kubelet[2079]: E0209 19:18:30.625838 2079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" containerName="clean-cilium-state" Feb 9 19:18:30.627087 kubelet[2079]: I0209 19:18:30.625877 2079 memory_manager.go:346] "RemoveStaleState removing state" podUID="af5f6ec6-8de2-4d22-b616-62f9b8fa3e05" containerName="cilium-agent" Feb 9 19:18:30.635264 systemd[1]: Created slice kubepods-besteffort-pod6100cf20_5255_4a4f_bda7_b6b279b3bd88.slice. 
Feb 9 19:18:30.644752 kubelet[2079]: W0209 19:18:30.644696 2079 reflector.go:533] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.23.38" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.23.38' and this object Feb 9 19:18:30.644964 kubelet[2079]: E0209 19:18:30.644773 2079 reflector.go:148] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.23.38" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.23.38' and this object Feb 9 19:18:30.680323 kubelet[2079]: I0209 19:18:30.680284 2079 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:18:30.690192 systemd[1]: Created slice kubepods-burstable-pod955b7c2f_5a0f_4a42_9744_9dc4c5be8afb.slice. Feb 9 19:18:30.731860 kubelet[2079]: E0209 19:18:30.731808 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:30.771808 kubelet[2079]: I0209 19:18:30.771750 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-ipsec-secrets\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.771967 kubelet[2079]: I0209 19:18:30.771853 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-host-proc-sys-net\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.771967 kubelet[2079]: I0209 
19:18:30.771928 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-bpf-maps\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.772135 kubelet[2079]: I0209 19:18:30.772005 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-cgroup\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.772135 kubelet[2079]: I0209 19:18:30.772082 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cni-path\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.772135 kubelet[2079]: I0209 19:18:30.772130 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-config-path\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.772324 kubelet[2079]: I0209 19:18:30.772199 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-host-proc-sys-kernel\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.772324 kubelet[2079]: I0209 19:18:30.772270 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-run\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.772459 kubelet[2079]: I0209 19:18:30.772340 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-hostproc\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.772459 kubelet[2079]: I0209 19:18:30.772391 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6100cf20-5255-4a4f-bda7-b6b279b3bd88-cilium-config-path\") pod \"cilium-operator-574c4bb98d-wzfmb\" (UID: \"6100cf20-5255-4a4f-bda7-b6b279b3bd88\") " pod="kube-system/cilium-operator-574c4bb98d-wzfmb" Feb 9 19:18:30.772576 kubelet[2079]: I0209 19:18:30.772461 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwhwz\" (UniqueName: \"kubernetes.io/projected/6100cf20-5255-4a4f-bda7-b6b279b3bd88-kube-api-access-cwhwz\") pod \"cilium-operator-574c4bb98d-wzfmb\" (UID: \"6100cf20-5255-4a4f-bda7-b6b279b3bd88\") " pod="kube-system/cilium-operator-574c4bb98d-wzfmb" Feb 9 19:18:30.772576 kubelet[2079]: I0209 19:18:30.772534 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-xtables-lock\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.772712 kubelet[2079]: I0209 19:18:30.772602 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6plqq\" (UniqueName: 
\"kubernetes.io/projected/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-kube-api-access-6plqq\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.772712 kubelet[2079]: I0209 19:18:30.772659 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-etc-cni-netd\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.772870 kubelet[2079]: I0209 19:18:30.772744 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-lib-modules\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.772870 kubelet[2079]: I0209 19:18:30.772829 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-clustermesh-secrets\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.773006 kubelet[2079]: I0209 19:18:30.772899 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-hubble-tls\") pod \"cilium-rj7sm\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " pod="kube-system/cilium-rj7sm" Feb 9 19:18:30.827738 kubelet[2079]: E0209 19:18:30.827692 2079 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:18:31.732335 kubelet[2079]: E0209 19:18:31.732262 2079 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:31.840404 env[1647]: time="2024-02-09T19:18:31.840341250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-wzfmb,Uid:6100cf20-5255-4a4f-bda7-b6b279b3bd88,Namespace:kube-system,Attempt:0,}" Feb 9 19:18:31.868456 env[1647]: time="2024-02-09T19:18:31.868285884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:18:31.868643 env[1647]: time="2024-02-09T19:18:31.868498919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:18:31.868643 env[1647]: time="2024-02-09T19:18:31.868597714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:18:31.869221 env[1647]: time="2024-02-09T19:18:31.869120589Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2537dc09549d7433834b3982da6e87dcf0f1a37eaa43981c303d559f54b56c2 pid=3845 runtime=io.containerd.runc.v2 Feb 9 19:18:31.902221 systemd[1]: run-containerd-runc-k8s.io-c2537dc09549d7433834b3982da6e87dcf0f1a37eaa43981c303d559f54b56c2-runc.b2QwsE.mount: Deactivated successfully. Feb 9 19:18:31.906510 env[1647]: time="2024-02-09T19:18:31.906051487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rj7sm,Uid:955b7c2f-5a0f-4a42-9744-9dc4c5be8afb,Namespace:kube-system,Attempt:0,}" Feb 9 19:18:31.912005 systemd[1]: Started cri-containerd-c2537dc09549d7433834b3982da6e87dcf0f1a37eaa43981c303d559f54b56c2.scope. Feb 9 19:18:31.956812 env[1647]: time="2024-02-09T19:18:31.951683763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:18:31.956812 env[1647]: time="2024-02-09T19:18:31.951758135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:18:31.956812 env[1647]: time="2024-02-09T19:18:31.951812369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:18:31.956812 env[1647]: time="2024-02-09T19:18:31.952156532Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4682f46f56690f2a3253a4ac7515df448200f1e1f4d88ef12a66021b52de40ef pid=3877 runtime=io.containerd.runc.v2 Feb 9 19:18:31.986238 systemd[1]: Started cri-containerd-4682f46f56690f2a3253a4ac7515df448200f1e1f4d88ef12a66021b52de40ef.scope. Feb 9 19:18:32.034946 env[1647]: time="2024-02-09T19:18:32.034742377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-wzfmb,Uid:6100cf20-5255-4a4f-bda7-b6b279b3bd88,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2537dc09549d7433834b3982da6e87dcf0f1a37eaa43981c303d559f54b56c2\"" Feb 9 19:18:32.039380 env[1647]: time="2024-02-09T19:18:32.039317798Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:18:32.061314 env[1647]: time="2024-02-09T19:18:32.061256483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rj7sm,Uid:955b7c2f-5a0f-4a42-9744-9dc4c5be8afb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4682f46f56690f2a3253a4ac7515df448200f1e1f4d88ef12a66021b52de40ef\"" Feb 9 19:18:32.066871 env[1647]: time="2024-02-09T19:18:32.066817105Z" level=info msg="CreateContainer within sandbox \"4682f46f56690f2a3253a4ac7515df448200f1e1f4d88ef12a66021b52de40ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:18:32.093008 
env[1647]: time="2024-02-09T19:18:32.092920454Z" level=info msg="CreateContainer within sandbox \"4682f46f56690f2a3253a4ac7515df448200f1e1f4d88ef12a66021b52de40ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3\"" Feb 9 19:18:32.094112 env[1647]: time="2024-02-09T19:18:32.094064672Z" level=info msg="StartContainer for \"f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3\"" Feb 9 19:18:32.133488 systemd[1]: Started cri-containerd-f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3.scope. Feb 9 19:18:32.161042 systemd[1]: cri-containerd-f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3.scope: Deactivated successfully. Feb 9 19:18:32.194519 env[1647]: time="2024-02-09T19:18:32.194446678Z" level=info msg="shim disconnected" id=f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3 Feb 9 19:18:32.194933 env[1647]: time="2024-02-09T19:18:32.194897736Z" level=warning msg="cleaning up after shim disconnected" id=f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3 namespace=k8s.io Feb 9 19:18:32.195070 env[1647]: time="2024-02-09T19:18:32.195041692Z" level=info msg="cleaning up dead shim" Feb 9 19:18:32.210164 env[1647]: time="2024-02-09T19:18:32.210096370Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3940 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:18:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:18:32.210964 env[1647]: time="2024-02-09T19:18:32.210777373Z" level=error msg="copy shim log" error="read /proc/self/fd/59: file already closed" Feb 9 19:18:32.214994 env[1647]: time="2024-02-09T19:18:32.211159147Z" 
level=error msg="Failed to pipe stdout of container \"f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3\"" error="reading from a closed fifo" Feb 9 19:18:32.215228 env[1647]: time="2024-02-09T19:18:32.214922498Z" level=error msg="Failed to pipe stderr of container \"f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3\"" error="reading from a closed fifo" Feb 9 19:18:32.217809 env[1647]: time="2024-02-09T19:18:32.217650519Z" level=error msg="StartContainer for \"f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:18:32.218259 kubelet[2079]: E0209 19:18:32.218207 2079 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3" Feb 9 19:18:32.218422 kubelet[2079]: E0209 19:18:32.218389 2079 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:18:32.218422 kubelet[2079]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:18:32.218422 kubelet[2079]: rm /hostbin/cilium-mount Feb 9 19:18:32.218612 kubelet[2079]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6plqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-rj7sm_kube-system(955b7c2f-5a0f-4a42-9744-9dc4c5be8afb): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:18:32.218612 kubelet[2079]: E0209 19:18:32.218470 2079 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rj7sm" podUID=955b7c2f-5a0f-4a42-9744-9dc4c5be8afb Feb 9 19:18:32.733360 kubelet[2079]: E0209 19:18:32.733295 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:33.153321 env[1647]: time="2024-02-09T19:18:33.153265166Z" level=info msg="StopPodSandbox for \"4682f46f56690f2a3253a4ac7515df448200f1e1f4d88ef12a66021b52de40ef\"" Feb 9 19:18:33.154074 env[1647]: time="2024-02-09T19:18:33.154027873Z" level=info msg="Container to stop \"f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:18:33.157118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4682f46f56690f2a3253a4ac7515df448200f1e1f4d88ef12a66021b52de40ef-shm.mount: Deactivated successfully. Feb 9 19:18:33.179439 systemd[1]: cri-containerd-4682f46f56690f2a3253a4ac7515df448200f1e1f4d88ef12a66021b52de40ef.scope: Deactivated successfully. Feb 9 19:18:33.239406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4682f46f56690f2a3253a4ac7515df448200f1e1f4d88ef12a66021b52de40ef-rootfs.mount: Deactivated successfully. 
Feb 9 19:18:33.280209 env[1647]: time="2024-02-09T19:18:33.280134102Z" level=info msg="shim disconnected" id=4682f46f56690f2a3253a4ac7515df448200f1e1f4d88ef12a66021b52de40ef Feb 9 19:18:33.281539 env[1647]: time="2024-02-09T19:18:33.281398560Z" level=warning msg="cleaning up after shim disconnected" id=4682f46f56690f2a3253a4ac7515df448200f1e1f4d88ef12a66021b52de40ef namespace=k8s.io Feb 9 19:18:33.281539 env[1647]: time="2024-02-09T19:18:33.281450742Z" level=info msg="cleaning up dead shim" Feb 9 19:18:33.305233 env[1647]: time="2024-02-09T19:18:33.305124568Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3971 runtime=io.containerd.runc.v2\n" Feb 9 19:18:33.305876 env[1647]: time="2024-02-09T19:18:33.305824292Z" level=info msg="TearDown network for sandbox \"4682f46f56690f2a3253a4ac7515df448200f1e1f4d88ef12a66021b52de40ef\" successfully" Feb 9 19:18:33.306008 env[1647]: time="2024-02-09T19:18:33.305877398Z" level=info msg="StopPodSandbox for \"4682f46f56690f2a3253a4ac7515df448200f1e1f4d88ef12a66021b52de40ef\" returns successfully" Feb 9 19:18:33.328269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350440020.mount: Deactivated successfully. 
Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.489083 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-clustermesh-secrets\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.489685 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-etc-cni-netd\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.489731 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-lib-modules\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.489774 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-host-proc-sys-net\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.489850 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-bpf-maps\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.489897 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6plqq\" (UniqueName: 
\"kubernetes.io/projected/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-kube-api-access-6plqq\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.489944 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-config-path\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.489988 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-hubble-tls\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.490031 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-ipsec-secrets\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.490069 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-hostproc\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.490106 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-xtables-lock\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.490147 2079 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-run\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.490191 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-cgroup\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.490229 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cni-path\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.490271 2079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-host-proc-sys-kernel\") pod \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\" (UID: \"955b7c2f-5a0f-4a42-9744-9dc4c5be8afb\") " Feb 9 19:18:33.492930 kubelet[2079]: I0209 19:18:33.490335 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:33.494077 kubelet[2079]: I0209 19:18:33.490383 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:33.494077 kubelet[2079]: I0209 19:18:33.490427 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:33.494077 kubelet[2079]: I0209 19:18:33.490468 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:33.494077 kubelet[2079]: I0209 19:18:33.490506 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:33.494077 kubelet[2079]: W0209 19:18:33.491096 2079 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:18:33.495178 kubelet[2079]: I0209 19:18:33.495134 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-hostproc" (OuterVolumeSpecName: "hostproc") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:33.495492 kubelet[2079]: I0209 19:18:33.495287 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:33.495492 kubelet[2079]: I0209 19:18:33.495318 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:33.495661 kubelet[2079]: I0209 19:18:33.495372 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:33.495661 kubelet[2079]: I0209 19:18:33.495403 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cni-path" (OuterVolumeSpecName: "cni-path") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:33.503314 kubelet[2079]: I0209 19:18:33.503259 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:18:33.504857 kubelet[2079]: I0209 19:18:33.504771 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:18:33.512304 kubelet[2079]: I0209 19:18:33.512223 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:18:33.514098 kubelet[2079]: I0209 19:18:33.514031 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:18:33.516229 kubelet[2079]: I0209 19:18:33.516185 2079 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-kube-api-access-6plqq" (OuterVolumeSpecName: "kube-api-access-6plqq") pod "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" (UID: "955b7c2f-5a0f-4a42-9744-9dc4c5be8afb"). InnerVolumeSpecName "kube-api-access-6plqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591111 2079 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-config-path\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591164 2079 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-hubble-tls\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591190 2079 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-ipsec-secrets\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591213 2079 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-bpf-maps\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591238 2079 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6plqq\" (UniqueName: \"kubernetes.io/projected/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-kube-api-access-6plqq\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591261 2079 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-run\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591288 2079 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cilium-cgroup\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591311 2079 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-cni-path\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591336 2079 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-hostproc\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591358 2079 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-xtables-lock\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591381 2079 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-host-proc-sys-kernel\") on node \"172.31.23.38\" 
DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591403 2079 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-etc-cni-netd\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591426 2079 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-lib-modules\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591449 2079 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-host-proc-sys-net\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.591539 kubelet[2079]: I0209 19:18:33.591473 2079 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb-clustermesh-secrets\") on node \"172.31.23.38\" DevicePath \"\"" Feb 9 19:18:33.733657 kubelet[2079]: E0209 19:18:33.733587 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:33.884407 systemd[1]: Removed slice kubepods-burstable-pod955b7c2f_5a0f_4a42_9744_9dc4c5be8afb.slice. Feb 9 19:18:33.897485 systemd[1]: var-lib-kubelet-pods-955b7c2f\x2d5a0f\x2d4a42\x2d9744\x2d9dc4c5be8afb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6plqq.mount: Deactivated successfully. Feb 9 19:18:33.897656 systemd[1]: var-lib-kubelet-pods-955b7c2f\x2d5a0f\x2d4a42\x2d9744\x2d9dc4c5be8afb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:18:33.897814 systemd[1]: var-lib-kubelet-pods-955b7c2f\x2d5a0f\x2d4a42\x2d9744\x2d9dc4c5be8afb-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 9 19:18:33.897963 systemd[1]: var-lib-kubelet-pods-955b7c2f\x2d5a0f\x2d4a42\x2d9744\x2d9dc4c5be8afb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:18:34.155161 kubelet[2079]: I0209 19:18:34.155041 2079 scope.go:115] "RemoveContainer" containerID="f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3" Feb 9 19:18:34.161840 env[1647]: time="2024-02-09T19:18:34.161624813Z" level=info msg="RemoveContainer for \"f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3\"" Feb 9 19:18:34.167006 env[1647]: time="2024-02-09T19:18:34.166943826Z" level=info msg="RemoveContainer for \"f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3\" returns successfully" Feb 9 19:18:34.214178 kubelet[2079]: I0209 19:18:34.214133 2079 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:18:34.214489 kubelet[2079]: E0209 19:18:34.214464 2079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" containerName="mount-cgroup" Feb 9 19:18:34.214671 kubelet[2079]: I0209 19:18:34.214648 2079 memory_manager.go:346] "RemoveStaleState removing state" podUID="955b7c2f-5a0f-4a42-9744-9dc4c5be8afb" containerName="mount-cgroup" Feb 9 19:18:34.226142 systemd[1]: Created slice kubepods-burstable-pod7674e83a_9528_4b8a_898e_31a72d394529.slice. 
Feb 9 19:18:34.299807 kubelet[2079]: I0209 19:18:34.299734 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7674e83a-9528-4b8a-898e-31a72d394529-cilium-config-path\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300283 kubelet[2079]: I0209 19:18:34.300240 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7674e83a-9528-4b8a-898e-31a72d394529-bpf-maps\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300373 kubelet[2079]: I0209 19:18:34.300307 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7674e83a-9528-4b8a-898e-31a72d394529-etc-cni-netd\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300373 kubelet[2079]: I0209 19:18:34.300355 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7674e83a-9528-4b8a-898e-31a72d394529-lib-modules\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300505 kubelet[2079]: I0209 19:18:34.300399 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7674e83a-9528-4b8a-898e-31a72d394529-xtables-lock\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300505 kubelet[2079]: I0209 19:18:34.300443 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7674e83a-9528-4b8a-898e-31a72d394529-clustermesh-secrets\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300505 kubelet[2079]: I0209 19:18:34.300488 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7674e83a-9528-4b8a-898e-31a72d394529-hubble-tls\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300698 kubelet[2079]: I0209 19:18:34.300533 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnzxs\" (UniqueName: \"kubernetes.io/projected/7674e83a-9528-4b8a-898e-31a72d394529-kube-api-access-pnzxs\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300698 kubelet[2079]: I0209 19:18:34.300579 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7674e83a-9528-4b8a-898e-31a72d394529-cilium-run\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300698 kubelet[2079]: I0209 19:18:34.300620 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7674e83a-9528-4b8a-898e-31a72d394529-hostproc\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300698 kubelet[2079]: I0209 19:18:34.300667 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7674e83a-9528-4b8a-898e-31a72d394529-host-proc-sys-kernel\") 
pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300989 kubelet[2079]: I0209 19:18:34.300713 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7674e83a-9528-4b8a-898e-31a72d394529-cilium-cgroup\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300989 kubelet[2079]: I0209 19:18:34.300755 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7674e83a-9528-4b8a-898e-31a72d394529-cni-path\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300989 kubelet[2079]: I0209 19:18:34.300820 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7674e83a-9528-4b8a-898e-31a72d394529-cilium-ipsec-secrets\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.300989 kubelet[2079]: I0209 19:18:34.300865 2079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7674e83a-9528-4b8a-898e-31a72d394529-host-proc-sys-net\") pod \"cilium-mgzsq\" (UID: \"7674e83a-9528-4b8a-898e-31a72d394529\") " pod="kube-system/cilium-mgzsq" Feb 9 19:18:34.318361 env[1647]: time="2024-02-09T19:18:34.318274683Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:34.322972 env[1647]: time="2024-02-09T19:18:34.322261167Z" level=info 
msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:34.325075 env[1647]: time="2024-02-09T19:18:34.324998040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:34.326194 env[1647]: time="2024-02-09T19:18:34.326051574Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 19:18:34.330826 env[1647]: time="2024-02-09T19:18:34.330561159Z" level=info msg="CreateContainer within sandbox \"c2537dc09549d7433834b3982da6e87dcf0f1a37eaa43981c303d559f54b56c2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:18:34.352376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3621165478.mount: Deactivated successfully. Feb 9 19:18:34.364809 env[1647]: time="2024-02-09T19:18:34.364671715Z" level=info msg="CreateContainer within sandbox \"c2537dc09549d7433834b3982da6e87dcf0f1a37eaa43981c303d559f54b56c2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8813c19162723c2f96a0491fd083bfb30023eeffbff0c45cd1f1c0d8148fc885\"" Feb 9 19:18:34.365847 env[1647]: time="2024-02-09T19:18:34.365764613Z" level=info msg="StartContainer for \"8813c19162723c2f96a0491fd083bfb30023eeffbff0c45cd1f1c0d8148fc885\"" Feb 9 19:18:34.403453 systemd[1]: Started cri-containerd-8813c19162723c2f96a0491fd083bfb30023eeffbff0c45cd1f1c0d8148fc885.scope. 
Feb 9 19:18:34.485854 env[1647]: time="2024-02-09T19:18:34.485740137Z" level=info msg="StartContainer for \"8813c19162723c2f96a0491fd083bfb30023eeffbff0c45cd1f1c0d8148fc885\" returns successfully" Feb 9 19:18:34.540077 env[1647]: time="2024-02-09T19:18:34.540021172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mgzsq,Uid:7674e83a-9528-4b8a-898e-31a72d394529,Namespace:kube-system,Attempt:0,}" Feb 9 19:18:34.567192 env[1647]: time="2024-02-09T19:18:34.567052400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:18:34.567368 env[1647]: time="2024-02-09T19:18:34.567220922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:18:34.567368 env[1647]: time="2024-02-09T19:18:34.567304535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:18:34.567876 env[1647]: time="2024-02-09T19:18:34.567768697Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36083fb57262291ba5716853435645016321c17192fa028f6d009bb30ef9f118 pid=4041 runtime=io.containerd.runc.v2 Feb 9 19:18:34.601488 systemd[1]: Started cri-containerd-36083fb57262291ba5716853435645016321c17192fa028f6d009bb30ef9f118.scope. 
Feb 9 19:18:34.668963 env[1647]: time="2024-02-09T19:18:34.668754082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mgzsq,Uid:7674e83a-9528-4b8a-898e-31a72d394529,Namespace:kube-system,Attempt:0,} returns sandbox id \"36083fb57262291ba5716853435645016321c17192fa028f6d009bb30ef9f118\"" Feb 9 19:18:34.674472 env[1647]: time="2024-02-09T19:18:34.674416236Z" level=info msg="CreateContainer within sandbox \"36083fb57262291ba5716853435645016321c17192fa028f6d009bb30ef9f118\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:18:34.695892 env[1647]: time="2024-02-09T19:18:34.695822273Z" level=info msg="CreateContainer within sandbox \"36083fb57262291ba5716853435645016321c17192fa028f6d009bb30ef9f118\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"32ade3dac154b89addf8594aab40f1d049bcc4c8ea3e2cc6602a0e28f6b84fa0\"" Feb 9 19:18:34.696973 env[1647]: time="2024-02-09T19:18:34.696914188Z" level=info msg="StartContainer for \"32ade3dac154b89addf8594aab40f1d049bcc4c8ea3e2cc6602a0e28f6b84fa0\"" Feb 9 19:18:34.734523 kubelet[2079]: E0209 19:18:34.734421 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:34.747749 systemd[1]: Started cri-containerd-32ade3dac154b89addf8594aab40f1d049bcc4c8ea3e2cc6602a0e28f6b84fa0.scope. Feb 9 19:18:34.824155 env[1647]: time="2024-02-09T19:18:34.824075959Z" level=info msg="StartContainer for \"32ade3dac154b89addf8594aab40f1d049bcc4c8ea3e2cc6602a0e28f6b84fa0\" returns successfully" Feb 9 19:18:34.850873 systemd[1]: cri-containerd-32ade3dac154b89addf8594aab40f1d049bcc4c8ea3e2cc6602a0e28f6b84fa0.scope: Deactivated successfully. 
Feb 9 19:18:35.087079 env[1647]: time="2024-02-09T19:18:35.087011729Z" level=info msg="shim disconnected" id=32ade3dac154b89addf8594aab40f1d049bcc4c8ea3e2cc6602a0e28f6b84fa0 Feb 9 19:18:35.087079 env[1647]: time="2024-02-09T19:18:35.087081517Z" level=warning msg="cleaning up after shim disconnected" id=32ade3dac154b89addf8594aab40f1d049bcc4c8ea3e2cc6602a0e28f6b84fa0 namespace=k8s.io Feb 9 19:18:35.087466 env[1647]: time="2024-02-09T19:18:35.087104355Z" level=info msg="cleaning up dead shim" Feb 9 19:18:35.101523 env[1647]: time="2024-02-09T19:18:35.101457684Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4126 runtime=io.containerd.runc.v2\n" Feb 9 19:18:35.168065 env[1647]: time="2024-02-09T19:18:35.168008408Z" level=info msg="CreateContainer within sandbox \"36083fb57262291ba5716853435645016321c17192fa028f6d009bb30ef9f118\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:18:35.194053 env[1647]: time="2024-02-09T19:18:35.193987567Z" level=info msg="CreateContainer within sandbox \"36083fb57262291ba5716853435645016321c17192fa028f6d009bb30ef9f118\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd528564ef2539cd51ad3fcab454663157b20380da610d15260c5be5de63542d\"" Feb 9 19:18:35.195552 env[1647]: time="2024-02-09T19:18:35.195482220Z" level=info msg="StartContainer for \"dd528564ef2539cd51ad3fcab454663157b20380da610d15260c5be5de63542d\"" Feb 9 19:18:35.202001 kubelet[2079]: I0209 19:18:35.201944 2079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-wzfmb" podStartSLOduration=2.91397756 podCreationTimestamp="2024-02-09 19:18:30 +0000 UTC" firstStartedPulling="2024-02-09 19:18:32.038689589 +0000 UTC m=+78.191084451" lastFinishedPulling="2024-02-09 19:18:34.326595641 +0000 UTC m=+80.478990479" observedRunningTime="2024-02-09 19:18:35.176657066 +0000 UTC 
m=+81.329051928" watchObservedRunningTime="2024-02-09 19:18:35.201883588 +0000 UTC m=+81.354278486" Feb 9 19:18:35.239185 systemd[1]: run-containerd-runc-k8s.io-dd528564ef2539cd51ad3fcab454663157b20380da610d15260c5be5de63542d-runc.P8HEJA.mount: Deactivated successfully. Feb 9 19:18:35.249280 systemd[1]: Started cri-containerd-dd528564ef2539cd51ad3fcab454663157b20380da610d15260c5be5de63542d.scope. Feb 9 19:18:35.300722 env[1647]: time="2024-02-09T19:18:35.300649799Z" level=info msg="StartContainer for \"dd528564ef2539cd51ad3fcab454663157b20380da610d15260c5be5de63542d\" returns successfully" Feb 9 19:18:35.301534 kubelet[2079]: W0209 19:18:35.301446 2079 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod955b7c2f_5a0f_4a42_9744_9dc4c5be8afb.slice/cri-containerd-f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3.scope WatchSource:0}: container "f4dd2dab8710ef9938e1145d515921aed86bb9d452f07705b75a4af68b63a0f3" in namespace "k8s.io": not found Feb 9 19:18:35.327691 systemd[1]: cri-containerd-dd528564ef2539cd51ad3fcab454663157b20380da610d15260c5be5de63542d.scope: Deactivated successfully. 
Feb 9 19:18:35.376466 env[1647]: time="2024-02-09T19:18:35.376285428Z" level=info msg="shim disconnected" id=dd528564ef2539cd51ad3fcab454663157b20380da610d15260c5be5de63542d Feb 9 19:18:35.376858 env[1647]: time="2024-02-09T19:18:35.376817205Z" level=warning msg="cleaning up after shim disconnected" id=dd528564ef2539cd51ad3fcab454663157b20380da610d15260c5be5de63542d namespace=k8s.io Feb 9 19:18:35.377002 env[1647]: time="2024-02-09T19:18:35.376974542Z" level=info msg="cleaning up dead shim" Feb 9 19:18:35.393223 env[1647]: time="2024-02-09T19:18:35.393165196Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4190 runtime=io.containerd.runc.v2\n" Feb 9 19:18:35.666070 kubelet[2079]: E0209 19:18:35.665917 2079 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:35.734580 kubelet[2079]: E0209 19:18:35.734522 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:35.828942 kubelet[2079]: E0209 19:18:35.828889 2079 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:18:35.876691 kubelet[2079]: I0209 19:18:35.876655 2079 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=955b7c2f-5a0f-4a42-9744-9dc4c5be8afb path="/var/lib/kubelet/pods/955b7c2f-5a0f-4a42-9744-9dc4c5be8afb/volumes" Feb 9 19:18:35.897822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd528564ef2539cd51ad3fcab454663157b20380da610d15260c5be5de63542d-rootfs.mount: Deactivated successfully. 
Feb 9 19:18:36.179097 env[1647]: time="2024-02-09T19:18:36.179043170Z" level=info msg="CreateContainer within sandbox \"36083fb57262291ba5716853435645016321c17192fa028f6d009bb30ef9f118\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:18:36.212185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3351262148.mount: Deactivated successfully. Feb 9 19:18:36.225150 env[1647]: time="2024-02-09T19:18:36.225055854Z" level=info msg="CreateContainer within sandbox \"36083fb57262291ba5716853435645016321c17192fa028f6d009bb30ef9f118\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8af460e39fe58f67682d483c51b86b07128c0702c69ecd2e4e6e539e90a2ca22\"" Feb 9 19:18:36.227066 env[1647]: time="2024-02-09T19:18:36.227009529Z" level=info msg="StartContainer for \"8af460e39fe58f67682d483c51b86b07128c0702c69ecd2e4e6e539e90a2ca22\"" Feb 9 19:18:36.265725 systemd[1]: Started cri-containerd-8af460e39fe58f67682d483c51b86b07128c0702c69ecd2e4e6e539e90a2ca22.scope. Feb 9 19:18:36.330386 systemd[1]: cri-containerd-8af460e39fe58f67682d483c51b86b07128c0702c69ecd2e4e6e539e90a2ca22.scope: Deactivated successfully. 
Feb 9 19:18:36.333851 env[1647]: time="2024-02-09T19:18:36.332477434Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7674e83a_9528_4b8a_898e_31a72d394529.slice/cri-containerd-8af460e39fe58f67682d483c51b86b07128c0702c69ecd2e4e6e539e90a2ca22.scope/memory.events\": no such file or directory" Feb 9 19:18:36.339305 env[1647]: time="2024-02-09T19:18:36.339226846Z" level=info msg="StartContainer for \"8af460e39fe58f67682d483c51b86b07128c0702c69ecd2e4e6e539e90a2ca22\" returns successfully" Feb 9 19:18:36.383763 env[1647]: time="2024-02-09T19:18:36.383689984Z" level=info msg="shim disconnected" id=8af460e39fe58f67682d483c51b86b07128c0702c69ecd2e4e6e539e90a2ca22 Feb 9 19:18:36.384110 env[1647]: time="2024-02-09T19:18:36.383761950Z" level=warning msg="cleaning up after shim disconnected" id=8af460e39fe58f67682d483c51b86b07128c0702c69ecd2e4e6e539e90a2ca22 namespace=k8s.io Feb 9 19:18:36.384110 env[1647]: time="2024-02-09T19:18:36.383896848Z" level=info msg="cleaning up dead shim" Feb 9 19:18:36.397228 env[1647]: time="2024-02-09T19:18:36.397154491Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4248 runtime=io.containerd.runc.v2\n" Feb 9 19:18:36.735465 kubelet[2079]: E0209 19:18:36.735395 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:37.184521 env[1647]: time="2024-02-09T19:18:37.184453363Z" level=info msg="CreateContainer within sandbox \"36083fb57262291ba5716853435645016321c17192fa028f6d009bb30ef9f118\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:18:37.204652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount583526806.mount: Deactivated successfully. 
Feb 9 19:18:37.215320 env[1647]: time="2024-02-09T19:18:37.215257215Z" level=info msg="CreateContainer within sandbox \"36083fb57262291ba5716853435645016321c17192fa028f6d009bb30ef9f118\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a0d813c633c7941bc39bdc36e557112be22d2ba9e9746363f328bc665cf7b7d6\"" Feb 9 19:18:37.216570 env[1647]: time="2024-02-09T19:18:37.216521555Z" level=info msg="StartContainer for \"a0d813c633c7941bc39bdc36e557112be22d2ba9e9746363f328bc665cf7b7d6\"" Feb 9 19:18:37.247561 systemd[1]: Started cri-containerd-a0d813c633c7941bc39bdc36e557112be22d2ba9e9746363f328bc665cf7b7d6.scope. Feb 9 19:18:37.300100 systemd[1]: cri-containerd-a0d813c633c7941bc39bdc36e557112be22d2ba9e9746363f328bc665cf7b7d6.scope: Deactivated successfully. Feb 9 19:18:37.307011 env[1647]: time="2024-02-09T19:18:37.306943261Z" level=info msg="StartContainer for \"a0d813c633c7941bc39bdc36e557112be22d2ba9e9746363f328bc665cf7b7d6\" returns successfully" Feb 9 19:18:37.354237 env[1647]: time="2024-02-09T19:18:37.354172929Z" level=info msg="shim disconnected" id=a0d813c633c7941bc39bdc36e557112be22d2ba9e9746363f328bc665cf7b7d6 Feb 9 19:18:37.354587 env[1647]: time="2024-02-09T19:18:37.354554013Z" level=warning msg="cleaning up after shim disconnected" id=a0d813c633c7941bc39bdc36e557112be22d2ba9e9746363f328bc665cf7b7d6 namespace=k8s.io Feb 9 19:18:37.354728 env[1647]: time="2024-02-09T19:18:37.354701042Z" level=info msg="cleaning up dead shim" Feb 9 19:18:37.371466 env[1647]: time="2024-02-09T19:18:37.371393680Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4303 runtime=io.containerd.runc.v2\n" Feb 9 19:18:37.736225 kubelet[2079]: E0209 19:18:37.736176 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:37.898041 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-a0d813c633c7941bc39bdc36e557112be22d2ba9e9746363f328bc665cf7b7d6-rootfs.mount: Deactivated successfully. Feb 9 19:18:38.191034 env[1647]: time="2024-02-09T19:18:38.190920109Z" level=info msg="CreateContainer within sandbox \"36083fb57262291ba5716853435645016321c17192fa028f6d009bb30ef9f118\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:18:38.216875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935612463.mount: Deactivated successfully. Feb 9 19:18:38.230767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947471660.mount: Deactivated successfully. Feb 9 19:18:38.232690 env[1647]: time="2024-02-09T19:18:38.232599742Z" level=info msg="CreateContainer within sandbox \"36083fb57262291ba5716853435645016321c17192fa028f6d009bb30ef9f118\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3b7d1541fb746ac396c85b31138fa744f5584d6afeaa0058646005d1e72422a4\"" Feb 9 19:18:38.233738 env[1647]: time="2024-02-09T19:18:38.233692882Z" level=info msg="StartContainer for \"3b7d1541fb746ac396c85b31138fa744f5584d6afeaa0058646005d1e72422a4\"" Feb 9 19:18:38.265820 systemd[1]: Started cri-containerd-3b7d1541fb746ac396c85b31138fa744f5584d6afeaa0058646005d1e72422a4.scope. 
Feb 9 19:18:38.333474 env[1647]: time="2024-02-09T19:18:38.333374543Z" level=info msg="StartContainer for \"3b7d1541fb746ac396c85b31138fa744f5584d6afeaa0058646005d1e72422a4\" returns successfully" Feb 9 19:18:38.433838 kubelet[2079]: W0209 19:18:38.433750 2079 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7674e83a_9528_4b8a_898e_31a72d394529.slice/cri-containerd-32ade3dac154b89addf8594aab40f1d049bcc4c8ea3e2cc6602a0e28f6b84fa0.scope WatchSource:0}: task 32ade3dac154b89addf8594aab40f1d049bcc4c8ea3e2cc6602a0e28f6b84fa0 not found: not found Feb 9 19:18:38.737916 kubelet[2079]: E0209 19:18:38.737814 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:38.807395 kubelet[2079]: I0209 19:18:38.807361 2079 setters.go:548] "Node became not ready" node="172.31.23.38" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:18:38.807284079 +0000 UTC m=+84.959678929 LastTransitionTime:2024-02-09 19:18:38.807284079 +0000 UTC m=+84.959678929 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 19:18:39.051834 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 19:18:39.225741 kubelet[2079]: I0209 19:18:39.225688 2079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mgzsq" podStartSLOduration=5.225608797 podCreationTimestamp="2024-02-09 19:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:18:39.222981663 +0000 UTC m=+85.375376525" watchObservedRunningTime="2024-02-09 19:18:39.225608797 +0000 UTC m=+85.378003647" Feb 9 19:18:39.738617 kubelet[2079]: E0209 19:18:39.738579 2079 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:40.321488 systemd[1]: run-containerd-runc-k8s.io-3b7d1541fb746ac396c85b31138fa744f5584d6afeaa0058646005d1e72422a4-runc.wD5WWQ.mount: Deactivated successfully. Feb 9 19:18:40.740152 kubelet[2079]: E0209 19:18:40.739990 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:41.553078 kubelet[2079]: W0209 19:18:41.553027 2079 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7674e83a_9528_4b8a_898e_31a72d394529.slice/cri-containerd-dd528564ef2539cd51ad3fcab454663157b20380da610d15260c5be5de63542d.scope WatchSource:0}: task dd528564ef2539cd51ad3fcab454663157b20380da610d15260c5be5de63542d not found: not found Feb 9 19:18:41.740818 kubelet[2079]: E0209 19:18:41.740739 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:42.661733 systemd[1]: run-containerd-runc-k8s.io-3b7d1541fb746ac396c85b31138fa744f5584d6afeaa0058646005d1e72422a4-runc.ZgA9YV.mount: Deactivated successfully. Feb 9 19:18:42.740991 kubelet[2079]: E0209 19:18:42.740916 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:42.914533 (udev-worker)[4880]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:18:42.916142 (udev-worker)[4881]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 19:18:42.925407 systemd-networkd[1460]: lxc_health: Link UP Feb 9 19:18:42.969829 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:18:42.971141 systemd-networkd[1460]: lxc_health: Gained carrier Feb 9 19:18:43.741418 kubelet[2079]: E0209 19:18:43.741347 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:44.583059 systemd-networkd[1460]: lxc_health: Gained IPv6LL Feb 9 19:18:44.666933 kubelet[2079]: W0209 19:18:44.666859 2079 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7674e83a_9528_4b8a_898e_31a72d394529.slice/cri-containerd-8af460e39fe58f67682d483c51b86b07128c0702c69ecd2e4e6e539e90a2ca22.scope WatchSource:0}: task 8af460e39fe58f67682d483c51b86b07128c0702c69ecd2e4e6e539e90a2ca22 not found: not found Feb 9 19:18:44.741899 kubelet[2079]: E0209 19:18:44.741832 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:44.989314 systemd[1]: run-containerd-runc-k8s.io-3b7d1541fb746ac396c85b31138fa744f5584d6afeaa0058646005d1e72422a4-runc.1bfJvI.mount: Deactivated successfully. Feb 9 19:18:45.742956 kubelet[2079]: E0209 19:18:45.742885 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:46.743892 kubelet[2079]: E0209 19:18:46.743841 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:47.283928 systemd[1]: run-containerd-runc-k8s.io-3b7d1541fb746ac396c85b31138fa744f5584d6afeaa0058646005d1e72422a4-runc.dc8Iwv.mount: Deactivated successfully. 
Feb 9 19:18:47.745686 kubelet[2079]: E0209 19:18:47.745608 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:47.788551 kubelet[2079]: W0209 19:18:47.788504 2079 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7674e83a_9528_4b8a_898e_31a72d394529.slice/cri-containerd-a0d813c633c7941bc39bdc36e557112be22d2ba9e9746363f328bc665cf7b7d6.scope WatchSource:0}: task a0d813c633c7941bc39bdc36e557112be22d2ba9e9746363f328bc665cf7b7d6 not found: not found Feb 9 19:18:48.746547 kubelet[2079]: E0209 19:18:48.746495 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:49.601503 systemd[1]: run-containerd-runc-k8s.io-3b7d1541fb746ac396c85b31138fa744f5584d6afeaa0058646005d1e72422a4-runc.EK532f.mount: Deactivated successfully. Feb 9 19:18:49.748387 kubelet[2079]: E0209 19:18:49.748291 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:50.748867 kubelet[2079]: E0209 19:18:50.748776 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:51.749041 kubelet[2079]: E0209 19:18:51.748989 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:52.750800 kubelet[2079]: E0209 19:18:52.750728 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:53.751681 kubelet[2079]: E0209 19:18:53.751614 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:54.752368 kubelet[2079]: E0209 19:18:54.752304 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:55.665690 kubelet[2079]: E0209 19:18:55.665642 2079 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:55.753193 kubelet[2079]: E0209 19:18:55.753157 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:56.754869 kubelet[2079]: E0209 19:18:56.754823 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:57.755806 kubelet[2079]: E0209 19:18:57.755737 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:58.756908 kubelet[2079]: E0209 19:18:58.756865 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:18:58.824553 kubelet[2079]: E0209 19:18:58.824493 2079 controller.go:193] "Failed to update lease" err="Put \"https://172.31.17.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.38?timeout=10s\": dial tcp 172.31.17.194:6443: connect: connection refused" Feb 9 19:18:58.825280 kubelet[2079]: E0209 19:18:58.825231 2079 controller.go:193] "Failed to update lease" err="Put \"https://172.31.17.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.38?timeout=10s\": dial tcp 172.31.17.194:6443: connect: connection refused" Feb 9 19:18:58.825901 kubelet[2079]: E0209 19:18:58.825871 2079 controller.go:193] "Failed to update lease" err="Put \"https://172.31.17.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.38?timeout=10s\": dial tcp 172.31.17.194:6443: connect: connection refused" Feb 9 19:18:58.826617 kubelet[2079]: E0209 19:18:58.826569 2079 controller.go:193] "Failed to update lease" err="Put 
\"https://172.31.17.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.38?timeout=10s\": dial tcp 172.31.17.194:6443: connect: connection refused" Feb 9 19:18:58.827260 kubelet[2079]: E0209 19:18:58.827207 2079 controller.go:193] "Failed to update lease" err="Put \"https://172.31.17.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.38?timeout=10s\": dial tcp 172.31.17.194:6443: connect: connection refused" Feb 9 19:18:58.827361 kubelet[2079]: I0209 19:18:58.827275 2079 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 9 19:18:58.827830 kubelet[2079]: E0209 19:18:58.827775 2079 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.38?timeout=10s\": dial tcp 172.31.17.194:6443: connect: connection refused" interval="200ms" Feb 9 19:18:59.758433 kubelet[2079]: E0209 19:18:59.758366 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:00.758967 kubelet[2079]: E0209 19:19:00.758899 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:01.759716 kubelet[2079]: E0209 19:19:01.759669 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:02.760922 kubelet[2079]: E0209 19:19:02.760847 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:03.761346 kubelet[2079]: E0209 19:19:03.761243 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:04.762302 kubelet[2079]: E0209 19:19:04.762261 2079 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:05.763814 kubelet[2079]: E0209 19:19:05.763723 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:06.764902 kubelet[2079]: E0209 19:19:06.764860 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:07.766321 kubelet[2079]: E0209 19:19:07.766257 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:08.766812 kubelet[2079]: E0209 19:19:08.766743 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:08.826421 systemd[1]: cri-containerd-8813c19162723c2f96a0491fd083bfb30023eeffbff0c45cd1f1c0d8148fc885.scope: Deactivated successfully. Feb 9 19:19:08.861164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8813c19162723c2f96a0491fd083bfb30023eeffbff0c45cd1f1c0d8148fc885-rootfs.mount: Deactivated successfully. 
Feb 9 19:19:08.876912 env[1647]: time="2024-02-09T19:19:08.876809184Z" level=info msg="shim disconnected" id=8813c19162723c2f96a0491fd083bfb30023eeffbff0c45cd1f1c0d8148fc885 Feb 9 19:19:08.876912 env[1647]: time="2024-02-09T19:19:08.876892428Z" level=warning msg="cleaning up after shim disconnected" id=8813c19162723c2f96a0491fd083bfb30023eeffbff0c45cd1f1c0d8148fc885 namespace=k8s.io Feb 9 19:19:08.876912 env[1647]: time="2024-02-09T19:19:08.876915839Z" level=info msg="cleaning up dead shim" Feb 9 19:19:08.892318 env[1647]: time="2024-02-09T19:19:08.892253540Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4994 runtime=io.containerd.runc.v2\n" Feb 9 19:19:09.029584 kubelet[2079]: E0209 19:19:09.029437 2079 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.38?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 9 19:19:09.096458 kubelet[2079]: E0209 19:19:09.096107 2079 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.23.38\": Get \"https://172.31.17.194:6443/api/v1/nodes/172.31.23.38?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 9 19:19:09.278147 kubelet[2079]: I0209 19:19:09.277205 2079 scope.go:115] "RemoveContainer" containerID="8813c19162723c2f96a0491fd083bfb30023eeffbff0c45cd1f1c0d8148fc885" Feb 9 19:19:09.281408 env[1647]: time="2024-02-09T19:19:09.281246028Z" level=info msg="CreateContainer within sandbox \"c2537dc09549d7433834b3982da6e87dcf0f1a37eaa43981c303d559f54b56c2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Feb 9 19:19:09.302757 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount268549484.mount: Deactivated successfully. Feb 9 19:19:09.314837 env[1647]: time="2024-02-09T19:19:09.314737068Z" level=info msg="CreateContainer within sandbox \"c2537dc09549d7433834b3982da6e87dcf0f1a37eaa43981c303d559f54b56c2\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"5da1701de6fda52f68a64a5604b5fc0c5954525e44fe6261c67b9d0404a86a8d\"" Feb 9 19:19:09.315913 env[1647]: time="2024-02-09T19:19:09.315868377Z" level=info msg="StartContainer for \"5da1701de6fda52f68a64a5604b5fc0c5954525e44fe6261c67b9d0404a86a8d\"" Feb 9 19:19:09.354066 systemd[1]: Started cri-containerd-5da1701de6fda52f68a64a5604b5fc0c5954525e44fe6261c67b9d0404a86a8d.scope. Feb 9 19:19:09.423419 env[1647]: time="2024-02-09T19:19:09.423350706Z" level=info msg="StartContainer for \"5da1701de6fda52f68a64a5604b5fc0c5954525e44fe6261c67b9d0404a86a8d\" returns successfully" Feb 9 19:19:09.768495 kubelet[2079]: E0209 19:19:09.768403 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:10.769359 kubelet[2079]: E0209 19:19:10.769318 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:11.770707 kubelet[2079]: E0209 19:19:11.770645 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:12.770923 kubelet[2079]: E0209 19:19:12.770852 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:19:13.771234 kubelet[2079]: E0209 19:19:13.771170 2079 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"