Oct 2 19:39:12.970060 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:39:12.970079 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:39:12.970087 kernel: BIOS-provided physical RAM map: Oct 2 19:39:12.970093 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 2 19:39:12.970098 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 2 19:39:12.970104 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 2 19:39:12.970110 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Oct 2 19:39:12.970116 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Oct 2 19:39:12.970123 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 2 19:39:12.970128 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 2 19:39:12.970134 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 2 19:39:12.970139 kernel: NX (Execute Disable) protection: active Oct 2 19:39:12.970145 kernel: SMBIOS 2.8 present. Oct 2 19:39:12.970150 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 2 19:39:12.970159 kernel: Hypervisor detected: KVM Oct 2 19:39:12.970165 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 19:39:12.970171 kernel: kvm-clock: cpu 0, msr 52f8a001, primary cpu clock Oct 2 19:39:12.970177 kernel: kvm-clock: using sched offset of 2519558491 cycles Oct 2 19:39:12.970183 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 19:39:12.970189 kernel: tsc: Detected 2794.748 MHz processor Oct 2 19:39:12.970196 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:39:12.970202 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:39:12.970208 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Oct 2 19:39:12.970216 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:39:12.970222 kernel: Using GB pages for direct mapping Oct 2 19:39:12.970228 kernel: ACPI: Early table checksum verification disabled Oct 2 19:39:12.970234 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Oct 2 19:39:12.970240 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:39:12.970246 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:39:12.970252 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:39:12.970258 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 2 19:39:12.970265 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:39:12.970272 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:39:12.970286 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:39:12.970292 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Oct 2 19:39:12.970298 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cfe0040-0x9cfe1a78] Oct 2 19:39:12.970304 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 2 19:39:12.970310 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Oct 2 19:39:12.970316 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Oct 2 19:39:12.970322 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Oct 2 19:39:12.970332 kernel: No NUMA configuration found Oct 2 19:39:12.970339 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Oct 2 19:39:12.970345 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Oct 2 19:39:12.970352 kernel: Zone ranges: Oct 2 19:39:12.970359 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:39:12.970365 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Oct 2 19:39:12.970373 kernel: Normal empty Oct 2 19:39:12.970379 kernel: Movable zone start for each node Oct 2 19:39:12.970386 kernel: Early memory node ranges Oct 2 19:39:12.970392 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 2 19:39:12.970398 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Oct 2 19:39:12.970405 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Oct 2 19:39:12.970411 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:39:12.970418 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 2 19:39:12.970424 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Oct 2 19:39:12.970432 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 2 19:39:12.970439 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 19:39:12.970445 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:39:12.970452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 19:39:12.970458 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 19:39:12.970465 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 19:39:12.970471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 19:39:12.970477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 19:39:12.970484 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:39:12.970492 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 2 19:39:12.970498 kernel: TSC deadline timer available Oct 2 19:39:12.970504 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 2 19:39:12.970511 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 2 19:39:12.970517 kernel: kvm-guest: setup PV sched yield Oct 2 19:39:12.970523 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Oct 2 19:39:12.970530 kernel: Booting paravirtualized kernel on KVM Oct 2 19:39:12.970546 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:39:12.970553 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Oct 2 19:39:12.970561 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Oct 2 19:39:12.970567 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Oct 2 19:39:12.970573 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 2 19:39:12.970580 kernel: kvm-guest: setup async PF for cpu 0 Oct 2 19:39:12.970586 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Oct 2 19:39:12.970593 kernel: kvm-guest: PV spinlocks enabled Oct 2 19:39:12.970599 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 
19:39:12.970606 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Oct 2 19:39:12.970612 kernel: Policy zone: DMA32 Oct 2 19:39:12.970621 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:39:12.970628 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:39:12.970635 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:39:12.970641 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:39:12.970648 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:39:12.970655 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 132728K reserved, 0K cma-reserved) Oct 2 19:39:12.970661 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:39:12.970668 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:39:12.970675 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:39:12.970682 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:39:12.970689 kernel: rcu: RCU event tracing is enabled. Oct 2 19:39:12.970695 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:39:12.970702 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:39:12.970708 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:39:12.970715 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:39:12.970721 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:39:12.970728 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 2 19:39:12.970736 kernel: random: crng init done Oct 2 19:39:12.970742 kernel: Console: colour VGA+ 80x25 Oct 2 19:39:12.970749 kernel: printk: console [ttyS0] enabled Oct 2 19:39:12.970755 kernel: ACPI: Core revision 20210730 Oct 2 19:39:12.970762 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 2 19:39:12.970769 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:39:12.970775 kernel: x2apic enabled Oct 2 19:39:12.970781 kernel: Switched APIC routing to physical x2apic. Oct 2 19:39:12.970788 kernel: kvm-guest: setup PV IPIs Oct 2 19:39:12.970794 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 19:39:12.970802 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 2 19:39:12.970809 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 2 19:39:12.970815 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 2 19:39:12.970822 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 2 19:39:12.970828 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 2 19:39:12.970835 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:39:12.970841 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:39:12.970848 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:39:12.970856 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:39:12.970868 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 2 19:39:12.970874 kernel: RETBleed: Mitigation: untrained return thunk Oct 2 19:39:12.970883 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 19:39:12.970890 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 19:39:12.970897 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:39:12.970903 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:39:12.970910 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:39:12.970917 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:39:12.970924 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 2 19:39:12.970933 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:39:12.970939 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:39:12.970946 kernel: LSM: Security Framework initializing Oct 2 19:39:12.970953 kernel: SELinux: Initializing. Oct 2 19:39:12.970960 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:39:12.970967 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:39:12.970974 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 2 19:39:12.970982 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 2 19:39:12.970988 kernel: ... version: 0 Oct 2 19:39:12.970995 kernel: ... bit width: 48 Oct 2 19:39:12.971002 kernel: ... generic registers: 6 Oct 2 19:39:12.971009 kernel: ... value mask: 0000ffffffffffff Oct 2 19:39:12.971016 kernel: ... max period: 00007fffffffffff Oct 2 19:39:12.971022 kernel: ... fixed-purpose events: 0 Oct 2 19:39:12.971029 kernel: ... event mask: 000000000000003f Oct 2 19:39:12.971036 kernel: signal: max sigframe size: 1776 Oct 2 19:39:12.971044 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:39:12.971051 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:39:12.971057 kernel: x86: Booting SMP configuration: Oct 2 19:39:12.971064 kernel: .... 
node #0, CPUs: #1 Oct 2 19:39:12.971071 kernel: kvm-clock: cpu 1, msr 52f8a041, secondary cpu clock Oct 2 19:39:12.971078 kernel: kvm-guest: setup async PF for cpu 1 Oct 2 19:39:12.971084 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Oct 2 19:39:12.971091 kernel: #2 Oct 2 19:39:12.971098 kernel: kvm-clock: cpu 2, msr 52f8a081, secondary cpu clock Oct 2 19:39:12.971106 kernel: kvm-guest: setup async PF for cpu 2 Oct 2 19:39:12.971113 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Oct 2 19:39:12.971120 kernel: #3 Oct 2 19:39:12.971126 kernel: kvm-clock: cpu 3, msr 52f8a0c1, secondary cpu clock Oct 2 19:39:12.971133 kernel: kvm-guest: setup async PF for cpu 3 Oct 2 19:39:12.971140 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Oct 2 19:39:12.971147 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:39:12.971153 kernel: smpboot: Max logical packages: 1 Oct 2 19:39:12.971160 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 2 19:39:12.971167 kernel: devtmpfs: initialized Oct 2 19:39:12.971175 kernel: x86/mm: Memory block size: 128MB Oct 2 19:39:12.971182 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:39:12.971189 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:39:12.971196 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:39:12.971203 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:39:12.971209 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:39:12.971216 kernel: audit: type=2000 audit(1696275552.226:1): state=initialized audit_enabled=0 res=1 Oct 2 19:39:12.971223 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:39:12.971230 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:39:12.971238 kernel: cpuidle: using governor menu Oct 2 19:39:12.971245 kernel: ACPI: bus type PCI registered Oct 2 19:39:12.971254 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:39:12.971261 kernel: dca service started, version 1.12.1 Oct 2 19:39:12.971269 kernel: PCI: Using configuration type 1 for base access Oct 2 19:39:12.971287 kernel: PCI: Using configuration type 1 for extended access Oct 2 19:39:12.971294 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 19:39:12.971301 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:39:12.971308 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:39:12.971316 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:39:12.971322 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:39:12.971329 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:39:12.971336 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:39:12.971343 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:39:12.971350 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:39:12.971356 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:39:12.971363 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:39:12.971370 kernel: ACPI: Interpreter enabled Oct 2 19:39:12.971378 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 19:39:12.971385 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:39:12.971392 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:39:12.971398 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 2 19:39:12.971405 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:39:12.971554 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:39:12.971566 kernel: acpiphp: Slot [3] registered Oct 2 19:39:12.971573 kernel: acpiphp: Slot [4] registered Oct 2 19:39:12.971582 kernel: acpiphp: Slot [5] registered Oct 2 19:39:12.971589 kernel: acpiphp: Slot [6] registered Oct 2 19:39:12.971596 kernel: acpiphp: Slot [7] registered Oct 2 19:39:12.971603 kernel: acpiphp: Slot [8] registered Oct 2 19:39:12.971609 kernel: acpiphp: Slot [9] registered Oct 2 19:39:12.971616 kernel: acpiphp: Slot [10] registered Oct 2 19:39:12.971623 kernel: acpiphp: Slot [11] registered Oct 2 19:39:12.971630 kernel: acpiphp: Slot [12] registered Oct 2 19:39:12.971637 kernel: acpiphp: Slot [13] registered Oct 2 19:39:12.971645 kernel: acpiphp: Slot [14] registered Oct 2 19:39:12.971651 kernel: acpiphp: Slot [15] registered Oct 2 19:39:12.971658 kernel: acpiphp: Slot [16] registered Oct 2 19:39:12.971665 kernel: acpiphp: Slot [17] registered Oct 2 19:39:12.971672 kernel: acpiphp: Slot [18] registered Oct 2 19:39:12.971678 kernel: acpiphp: Slot [19] registered Oct 2 19:39:12.971685 kernel: acpiphp: Slot [20] registered Oct 2 19:39:12.971692 kernel: acpiphp: Slot [21] registered Oct 2 19:39:12.971699 kernel: acpiphp: Slot [22] registered Oct 2 19:39:12.971705 kernel: acpiphp: Slot [23] registered Oct 2 19:39:12.971713 kernel: acpiphp: Slot [24] registered Oct 2 19:39:12.971720 kernel: acpiphp: Slot [25] registered Oct 2 19:39:12.971727 kernel: acpiphp: Slot [26] registered Oct 2 19:39:12.971733 kernel: acpiphp: Slot [27] registered Oct 2 19:39:12.971740 kernel: acpiphp: Slot [28] registered Oct 2 19:39:12.971747 kernel: acpiphp: Slot [29] registered Oct 2 19:39:12.971754 kernel: acpiphp: Slot [30] registered Oct 2 19:39:12.971760 kernel: acpiphp: Slot [31] registered Oct 2 19:39:12.971767 kernel: PCI host bridge to bus 0000:00 Oct 2 19:39:12.971869 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:39:12.971936 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 19:39:12.972002 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:39:12.972091 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Oct 2 19:39:12.972161 kernel: pci_bus 0000:00: 
root bus resource [mem 0x100000000-0x17fffffff window] Oct 2 19:39:12.972226 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:39:12.972334 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 19:39:12.972420 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 19:39:12.972511 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 2 19:39:12.972602 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Oct 2 19:39:12.972677 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 19:39:12.972750 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 19:39:12.972823 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 19:39:12.972903 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 19:39:12.972984 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 19:39:12.973057 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 2 19:39:12.973145 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 2 19:39:12.973243 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Oct 2 19:39:12.973329 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Oct 2 19:39:12.973402 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Oct 2 19:39:12.973479 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Oct 2 19:39:12.973566 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:39:12.973678 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:39:12.973756 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Oct 2 19:39:12.973839 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Oct 2 19:39:12.973915 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Oct 2 19:39:12.973995 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 2 19:39:12.974072 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 2 19:39:12.974146 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Oct 2 19:39:12.974219 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Oct 2 19:39:12.974316 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Oct 2 19:39:12.974392 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Oct 2 19:39:12.974466 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Oct 2 19:39:12.974553 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 2 19:39:12.974638 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Oct 2 19:39:12.974664 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 19:39:12.974687 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 19:39:12.974706 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:39:12.974715 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 19:39:12.974724 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 19:39:12.974733 kernel: iommu: Default domain type: Translated Oct 2 19:39:12.974740 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:39:12.974877 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 2 19:39:12.974976 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:39:12.975050 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 2 19:39:12.975059 kernel: 
vgaarb: loaded Oct 2 19:39:12.975066 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:39:12.975073 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:39:12.975080 kernel: PTP clock support registered Oct 2 19:39:12.975087 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:39:12.975094 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:39:12.975104 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 2 19:39:12.975111 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Oct 2 19:39:12.975118 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 2 19:39:12.975125 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 2 19:39:12.975132 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 19:39:12.975139 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:39:12.975146 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:39:12.975153 kernel: pnp: PnP ACPI init Oct 2 19:39:12.975247 kernel: pnp 00:02: [dma 2] Oct 2 19:39:12.975260 kernel: pnp: PnP ACPI: found 6 devices Oct 2 19:39:12.975267 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:39:12.975274 kernel: NET: Registered PF_INET protocol family Oct 2 19:39:12.975287 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:39:12.975295 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:39:12.975301 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:39:12.975308 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:39:12.975315 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:39:12.975324 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:39:12.975331 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:39:12.975338 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:39:12.975345 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:39:12.975351 kernel: NET: Registered PF_XDP protocol family Oct 2 19:39:12.975421 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 19:39:12.975487 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 19:39:12.975570 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 19:39:12.975639 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Oct 2 19:39:12.975704 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 2 19:39:12.975780 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 2 19:39:12.975853 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 19:39:12.975971 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 19:39:12.975982 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:39:12.975993 kernel: Initialise system trusted keyrings Oct 2 19:39:12.976000 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:39:12.976010 kernel: Key type asymmetric registered Oct 2 19:39:12.976017 kernel: Asymmetric key parser 'x509' registered Oct 2 19:39:12.976024 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:39:12.976031 kernel: io scheduler mq-deadline registered Oct 2 19:39:12.976038 kernel: io scheduler kyber registered Oct 2 19:39:12.976045 kernel: io scheduler bfq registered Oct 2 19:39:12.976052 
kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:39:12.976059 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 19:39:12.976066 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Oct 2 19:39:12.976073 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 19:39:12.976081 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:39:12.976088 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:39:12.976095 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 19:39:12.976102 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:39:12.976109 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:39:12.976189 kernel: rtc_cmos 00:05: RTC can wake from S4 Oct 2 19:39:12.976199 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:39:12.976266 kernel: rtc_cmos 00:05: registered as rtc0 Oct 2 19:39:12.976349 kernel: rtc_cmos 00:05: setting system clock to 2023-10-02T19:39:12 UTC (1696275552) Oct 2 19:39:12.976417 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 2 19:39:12.976426 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:39:12.976433 kernel: Segment Routing with IPv6 Oct 2 19:39:12.976440 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:39:12.976447 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:39:12.976454 kernel: Key type dns_resolver registered Oct 2 19:39:12.976461 kernel: IPI shorthand broadcast: enabled Oct 2 19:39:12.976470 kernel: sched_clock: Marking stable (404073137, 92374416)->(542734261, -46286708) Oct 2 19:39:12.976477 kernel: registered taskstats version 1 Oct 2 19:39:12.976484 kernel: Loading compiled-in X.509 certificates Oct 2 19:39:12.976491 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:39:12.976498 kernel: Key type .fscrypt registered Oct 2 19:39:12.976505 kernel: Key type fscrypt-provisioning registered Oct 2 19:39:12.976512 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:39:12.976519 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:39:12.976526 kernel: ima: No architecture policies found Oct 2 19:39:12.976545 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:39:12.976552 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:39:12.976559 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:39:12.976566 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:39:12.976573 kernel: Run /init as init process Oct 2 19:39:12.976580 kernel: with arguments: Oct 2 19:39:12.976587 kernel: /init Oct 2 19:39:12.976594 kernel: with environment: Oct 2 19:39:12.976610 kernel: HOME=/ Oct 2 19:39:12.976619 kernel: TERM=linux Oct 2 19:39:12.976626 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:39:12.976636 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:39:12.976645 systemd[1]: Detected virtualization kvm. Oct 2 19:39:12.976653 systemd[1]: Detected architecture x86-64. Oct 2 19:39:12.976660 systemd[1]: Running in initrd. Oct 2 19:39:12.976667 systemd[1]: No hostname configured, using default hostname. 
Oct 2 19:39:12.976677 systemd[1]: Hostname set to . Oct 2 19:39:12.976685 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:39:12.976692 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:39:12.976700 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:39:12.976722 systemd[1]: Reached target cryptsetup.target. Oct 2 19:39:12.976734 systemd[1]: Reached target paths.target. Oct 2 19:39:12.976742 systemd[1]: Reached target slices.target. Oct 2 19:39:12.976749 systemd[1]: Reached target swap.target. Oct 2 19:39:12.976757 systemd[1]: Reached target timers.target. Oct 2 19:39:12.976767 systemd[1]: Listening on iscsid.socket. Oct 2 19:39:12.976775 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:39:12.976782 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:39:12.976790 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:39:12.976798 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:39:12.976805 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:39:12.976813 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:39:12.976822 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:39:12.976829 systemd[1]: Reached target sockets.target. Oct 2 19:39:12.976837 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:39:12.976845 systemd[1]: Finished network-cleanup.service. Oct 2 19:39:12.976852 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:39:12.976860 systemd[1]: Starting systemd-journald.service... Oct 2 19:39:12.976867 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:39:12.976876 systemd[1]: Starting systemd-resolved.service... Oct 2 19:39:12.976884 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:39:12.976892 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:39:12.976899 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:39:12.976907 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:39:12.976917 systemd-journald[197]: Journal started Oct 2 19:39:12.976958 systemd-journald[197]: Runtime Journal (/run/log/journal/e2e213a7159f4644931aea20de1f45b9) is 6.0M, max 48.5M, 42.5M free. Oct 2 19:39:12.974921 systemd-modules-load[198]: Inserted module 'overlay' Oct 2 19:39:12.992908 systemd-resolved[199]: Positive Trust Anchors: Oct 2 19:39:13.000394 systemd[1]: Started systemd-journald.service. Oct 2 19:39:13.000416 kernel: audit: type=1130 audit(1696275552.997:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.992920 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:39:13.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:12.992947 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:39:13.009949 kernel: audit: type=1130 audit(1696275553.000:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.009970 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:39:13.009990 kernel: audit: type=1130 audit(1696275553.003:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.995113 systemd-resolved[199]: Defaulting to hostname 'linux'. Oct 2 19:39:13.015180 kernel: audit: type=1130 audit(1696275553.005:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.015199 kernel: Bridge firewalling registered Oct 2 19:39:13.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:12.998483 systemd[1]: Started systemd-resolved.service. Oct 2 19:39:13.001484 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:39:13.004678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:39:13.006224 systemd[1]: Reached target nss-lookup.target. Oct 2 19:39:13.013059 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:39:13.018517 systemd-modules-load[198]: Inserted module 'br_netfilter' Oct 2 19:39:13.029910 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:39:13.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.030697 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:39:13.034295 kernel: audit: type=1130 audit(1696275553.029:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:13.034323 kernel: SCSI subsystem initialized Oct 2 19:39:13.039495 dracut-cmdline[216]: dracut-dracut-053 Oct 2 19:39:13.041687 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:39:13.046655 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:39:13.046678 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:39:13.047556 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:39:13.050070 systemd-modules-load[198]: Inserted module 'dm_multipath' Oct 2 19:39:13.050680 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:39:13.053783 kernel: audit: type=1130 audit(1696275553.050:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.051815 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:39:13.059635 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:39:13.062744 kernel: audit: type=1130 audit(1696275553.059:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.094566 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:39:13.104562 kernel: iscsi: registered transport (tcp) Oct 2 19:39:13.122562 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:39:13.122582 kernel: QLogic iSCSI HBA Driver Oct 2 19:39:13.148311 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:39:13.151836 kernel: audit: type=1130 audit(1696275553.148:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.149672 systemd[1]: Starting dracut-pre-udev.service... 
Oct 2 19:39:13.204567 kernel: raid6: avx2x4 gen() 30102 MB/s Oct 2 19:39:13.221554 kernel: raid6: avx2x4 xor() 7541 MB/s Oct 2 19:39:13.238557 kernel: raid6: avx2x2 gen() 31675 MB/s Oct 2 19:39:13.255551 kernel: raid6: avx2x2 xor() 19245 MB/s Oct 2 19:39:13.272554 kernel: raid6: avx2x1 gen() 26482 MB/s Oct 2 19:39:13.289554 kernel: raid6: avx2x1 xor() 15326 MB/s Oct 2 19:39:13.306558 kernel: raid6: sse2x4 gen() 14790 MB/s Oct 2 19:39:13.323577 kernel: raid6: sse2x4 xor() 7124 MB/s Oct 2 19:39:13.340570 kernel: raid6: sse2x2 gen() 16201 MB/s Oct 2 19:39:13.357567 kernel: raid6: sse2x2 xor() 9800 MB/s Oct 2 19:39:13.374583 kernel: raid6: sse2x1 gen() 11868 MB/s Oct 2 19:39:13.391935 kernel: raid6: sse2x1 xor() 7675 MB/s Oct 2 19:39:13.392002 kernel: raid6: using algorithm avx2x2 gen() 31675 MB/s Oct 2 19:39:13.392028 kernel: raid6: .... xor() 19245 MB/s, rmw enabled Oct 2 19:39:13.392039 kernel: raid6: using avx2x2 recovery algorithm Oct 2 19:39:13.403567 kernel: xor: automatically using best checksumming function avx Oct 2 19:39:13.495570 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:39:13.504911 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:39:13.507891 kernel: audit: type=1130 audit(1696275553.505:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.507000 audit: BPF prog-id=7 op=LOAD Oct 2 19:39:13.507000 audit: BPF prog-id=8 op=LOAD Oct 2 19:39:13.508182 systemd[1]: Starting systemd-udevd.service... Oct 2 19:39:13.520749 systemd-udevd[399]: Using default interface naming scheme 'v252'. Oct 2 19:39:13.525205 systemd[1]: Started systemd-udevd.service. Oct 2 19:39:13.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.527443 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:39:13.538851 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Oct 2 19:39:13.564032 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:39:13.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.565461 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:39:13.605149 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:39:13.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:13.631558 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB) Oct 2 19:39:13.633560 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:39:13.637559 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:39:13.648419 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 19:39:13.648468 kernel: AES CTR mode by8 optimization enabled Oct 2 19:39:13.648478 kernel: libata version 3.00 loaded. 
Oct 2 19:39:13.651552 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 2 19:39:13.653756 kernel: scsi host0: ata_piix Oct 2 19:39:13.653882 kernel: scsi host1: ata_piix Oct 2 19:39:13.654647 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Oct 2 19:39:13.654669 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Oct 2 19:39:13.814568 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 2 19:39:13.814646 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 2 19:39:13.830063 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:39:13.832563 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (447) Oct 2 19:39:13.836359 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:39:13.840010 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:39:13.840875 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:39:13.845245 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 2 19:39:13.845467 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 2 19:39:13.850601 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:39:13.851968 systemd[1]: Starting disk-uuid.service... Oct 2 19:39:13.860577 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:39:13.861553 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 2 19:39:13.865564 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:39:14.886565 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:39:14.886640 disk-uuid[512]: The operation has completed successfully. Oct 2 19:39:14.915344 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:39:14.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:14.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:14.915446 systemd[1]: Finished disk-uuid.service. Oct 2 19:39:14.919824 systemd[1]: Starting verity-setup.service... Oct 2 19:39:14.935571 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:39:15.008497 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:39:15.010263 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:39:15.013008 systemd[1]: Finished verity-setup.service. Oct 2 19:39:15.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.094564 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:39:15.094723 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:39:15.095313 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:39:15.096215 systemd[1]: Starting ignition-setup.service... Oct 2 19:39:15.097709 systemd[1]: Starting parse-ip-for-networkd.service... 
Oct 2 19:39:15.105907 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:39:15.106011 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:39:15.106025 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:39:15.115623 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:39:15.177482 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:39:15.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.178000 audit: BPF prog-id=9 op=LOAD Oct 2 19:39:15.179447 systemd[1]: Starting systemd-networkd.service... Oct 2 19:39:15.199358 systemd-networkd[683]: lo: Link UP Oct 2 19:39:15.199366 systemd-networkd[683]: lo: Gained carrier Oct 2 19:39:15.200726 systemd-networkd[683]: Enumeration completed Oct 2 19:39:15.200818 systemd[1]: Started systemd-networkd.service. Oct 2 19:39:15.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.201270 systemd[1]: Reached target network.target. Oct 2 19:39:15.203376 systemd[1]: Starting iscsiuio.service... Oct 2 19:39:15.204411 systemd-networkd[683]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:39:15.206342 systemd-networkd[683]: eth0: Link UP Oct 2 19:39:15.206348 systemd-networkd[683]: eth0: Gained carrier Oct 2 19:39:15.251982 systemd[1]: Started iscsiuio.service. Oct 2 19:39:15.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.253240 systemd[1]: Starting iscsid.service... Oct 2 19:39:15.257163 iscsid[688]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:39:15.257163 iscsid[688]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:39:15.257163 iscsid[688]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:39:15.257163 iscsid[688]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:39:15.257163 iscsid[688]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:39:15.257163 iscsid[688]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:39:15.266189 systemd[1]: Started iscsid.service. Oct 2 19:39:15.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.267250 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:39:15.270620 systemd-networkd[683]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:39:15.276932 systemd[1]: Finished dracut-initqueue.service. 
Oct 2 19:39:15.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.277299 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:39:15.278114 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:39:15.278350 systemd[1]: Reached target remote-fs.target. Oct 2 19:39:15.280622 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:39:15.287137 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:39:15.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.366738 systemd[1]: Finished ignition-setup.service. Oct 2 19:39:15.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.368275 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:39:15.698043 ignition[703]: Ignition 2.14.0 Oct 2 19:39:15.698059 ignition[703]: Stage: fetch-offline Oct 2 19:39:15.698154 ignition[703]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:39:15.698167 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:39:15.698333 ignition[703]: parsed url from cmdline: "" Oct 2 19:39:15.698337 ignition[703]: no config URL provided Oct 2 19:39:15.698344 ignition[703]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:39:15.698352 ignition[703]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:39:15.698387 ignition[703]: op(1): [started] loading QEMU firmware config module Oct 2 19:39:15.698393 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 2 19:39:15.707288 ignition[703]: op(1): [finished] loading QEMU firmware config module Oct 2 19:39:15.717642 ignition[703]: parsing config with SHA512: 7470cec856285a1c7ba89e20b41715919f0245c5c727737800a564e9cf97a34771e064224bb8b46ad7329976b4ebc787ac9b57f90f77e0b8a0214825ddfb5266 Oct 2 19:39:15.742145 unknown[703]: fetched base config from "system" Oct 2 19:39:15.742162 unknown[703]: fetched user config from "qemu" Oct 2 19:39:15.742620 ignition[703]: fetch-offline: fetch-offline passed Oct 2 19:39:15.742700 ignition[703]: Ignition finished successfully Oct 2 19:39:15.744530 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:39:15.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.745405 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 19:39:15.746243 systemd[1]: Starting ignition-kargs.service... Oct 2 19:39:15.756438 ignition[711]: Ignition 2.14.0 Oct 2 19:39:15.756447 ignition[711]: Stage: kargs Oct 2 19:39:15.756563 ignition[711]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:39:15.756576 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:39:15.758907 systemd[1]: Finished ignition-kargs.service. Oct 2 19:39:15.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:15.757496 ignition[711]: kargs: kargs passed Oct 2 19:39:15.757606 ignition[711]: Ignition finished successfully Oct 2 19:39:15.760875 systemd[1]: Starting ignition-disks.service... Oct 2 19:39:15.768359 ignition[717]: Ignition 2.14.0 Oct 2 19:39:15.768370 ignition[717]: Stage: disks Oct 2 19:39:15.768481 ignition[717]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:39:15.768490 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:39:15.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.770105 systemd[1]: Finished ignition-disks.service. Oct 2 19:39:15.769427 ignition[717]: disks: disks passed Oct 2 19:39:15.770753 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:39:15.769471 ignition[717]: Ignition finished successfully Oct 2 19:39:15.771830 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:39:15.772363 systemd[1]: Reached target local-fs.target. Oct 2 19:39:15.772865 systemd[1]: Reached target sysinit.target. Oct 2 19:39:15.773763 systemd[1]: Reached target basic.target. Oct 2 19:39:15.775132 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:39:15.784331 systemd-fsck[725]: ROOT: clean, 603/553520 files, 56012/553472 blocks Oct 2 19:39:15.788815 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:39:15.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.791085 systemd[1]: Mounting sysroot.mount... Oct 2 19:39:15.798409 systemd[1]: Mounted sysroot.mount. Oct 2 19:39:15.800058 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:39:15.799031 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:39:15.801004 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:39:15.801896 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:39:15.801937 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:39:15.801966 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:39:15.803618 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:39:15.805118 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:39:15.808686 initrd-setup-root[735]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:39:15.812598 initrd-setup-root[743]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:39:15.815482 initrd-setup-root[751]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:39:15.818311 initrd-setup-root[759]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:39:15.847246 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:39:15.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.849410 systemd[1]: Starting ignition-mount.service... Oct 2 19:39:15.850877 systemd[1]: Starting sysroot-boot.service... Oct 2 19:39:15.854818 bash[776]: umount: /sysroot/usr/share/oem: not mounted. 
Oct 2 19:39:15.913527 ignition[777]: INFO : Ignition 2.14.0 Oct 2 19:39:15.913527 ignition[777]: INFO : Stage: mount Oct 2 19:39:15.914904 ignition[777]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:39:15.914904 ignition[777]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:39:15.914904 ignition[777]: INFO : mount: mount passed Oct 2 19:39:15.914904 ignition[777]: INFO : Ignition finished successfully Oct 2 19:39:15.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:15.915408 systemd[1]: Finished ignition-mount.service. Oct 2 19:39:15.919046 systemd[1]: Finished sysroot-boot.service. Oct 2 19:39:15.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:16.025513 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:39:16.036934 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (786) Oct 2 19:39:16.039033 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:39:16.039066 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:39:16.039080 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:39:16.042829 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:39:16.044957 systemd[1]: Starting ignition-files.service... Oct 2 19:39:16.114479 ignition[806]: INFO : Ignition 2.14.0 Oct 2 19:39:16.114479 ignition[806]: INFO : Stage: files Oct 2 19:39:16.116045 ignition[806]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:39:16.116045 ignition[806]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:39:16.117446 ignition[806]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:39:16.119284 ignition[806]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:39:16.119284 ignition[806]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:39:16.121635 ignition[806]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:39:16.124255 ignition[806]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:39:16.125695 unknown[806]: wrote ssh authorized keys file for user: core Oct 2 19:39:16.126562 ignition[806]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:39:16.127541 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:39:16.127541 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Oct 2 19:39:16.318446 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:39:16.480248 ignition[806]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Oct 2 19:39:16.482657 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file 
"/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:39:16.482657 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 19:39:16.482657 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz: attempt #1 Oct 2 19:39:16.730191 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:39:16.860699 systemd-networkd[683]: eth0: Gained IPv6LL Oct 2 19:39:16.868582 ignition[806]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 961188117863ca9af5b084e84691e372efee93ad09daf6a0422e8d75a5803f394d8968064f7ca89f14e8973766201e731241f32538cf2c8d91f0233e786302df Oct 2 19:39:16.870789 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 19:39:16.870789 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:39:16.870789 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:39:16.966246 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:39:17.588170 ignition[806]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 43b8f213f1732c092e34008d5334e6622a6603f7ec5890c395ac911d50069d0dc11a81fa38436df40fc875a10fee6ee13aa285c017f1de210171065e847c99c5 Oct 2 19:39:17.590479 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:39:17.590479 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:39:17.590479 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:39:17.658365 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:39:19.267579 ignition[806]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 82b36a0b83a1d48ef1f70e3ed2a263b3ce935304cdc0606d194b290217fb04f98628b0d82e200b51ccf5c05c718b2476274ae710bb143fffe28dc6bbf8407d54 Oct 2 19:39:19.270030 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:39:19.270030 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:39:19.270030 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:39:19.270030 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:39:19.270030 ignition[806]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:39:19.270030 ignition[806]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:39:19.270030 ignition[806]: INFO : files: op(9): op(a): [started] writing unit 
"prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:39:19.270030 ignition[806]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:39:19.270030 ignition[806]: INFO : files: op(9): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:39:19.270030 ignition[806]: INFO : files: op(b): [started] processing unit "prepare-critools.service" Oct 2 19:39:19.270030 ignition[806]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:39:19.270030 ignition[806]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:39:19.270030 ignition[806]: INFO : files: op(b): [finished] processing unit "prepare-critools.service" Oct 2 19:39:19.270030 ignition[806]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 2 19:39:19.270030 ignition[806]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:39:19.270030 ignition[806]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:39:19.270030 ignition[806]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 2 19:39:19.270030 ignition[806]: INFO : files: op(f): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:39:19.291328 ignition[806]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:39:19.291328 ignition[806]: INFO : files: op(10): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:39:19.291328 ignition[806]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:39:19.291328 ignition[806]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Oct 2 19:39:19.291328 ignition[806]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:39:19.361359 ignition[806]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:39:19.362625 ignition[806]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:39:19.362625 ignition[806]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:39:19.362625 ignition[806]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:39:19.362625 ignition[806]: INFO : files: files passed Oct 2 19:39:19.362625 ignition[806]: INFO : Ignition finished successfully Oct 2 19:39:19.373401 kernel: kauditd_printk_skb: 23 callbacks suppressed Oct 2 19:39:19.373424 kernel: audit: type=1130 audit(1696275559.364:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.373440 kernel: audit: type=1130 audit(1696275559.372:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:19.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.362909 systemd[1]: Finished ignition-files.service. Oct 2 19:39:19.381306 kernel: audit: type=1131 audit(1696275559.372:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.381328 kernel: audit: type=1130 audit(1696275559.377:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.365615 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:39:19.369080 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:39:19.383476 initrd-setup-root-after-ignition[830]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 2 19:39:19.369669 systemd[1]: Starting ignition-quench.service... Oct 2 19:39:19.385167 initrd-setup-root-after-ignition[834]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:39:19.372413 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:39:19.372499 systemd[1]: Finished ignition-quench.service. Oct 2 19:39:19.373559 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:39:19.377574 systemd[1]: Reached target ignition-complete.target. Oct 2 19:39:19.380386 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:39:19.392959 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:39:19.393033 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:39:19.398450 kernel: audit: type=1130 audit(1696275559.393:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.398475 kernel: audit: type=1131 audit(1696275559.393:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:19.394270 systemd[1]: Reached target initrd-fs.target. Oct 2 19:39:19.398857 systemd[1]: Reached target initrd.target. Oct 2 19:39:19.399778 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:39:19.400390 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:39:19.409372 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:39:19.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.411210 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:39:19.413626 kernel: audit: type=1130 audit(1696275559.409:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.419913 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:39:19.420313 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:39:19.421251 systemd[1]: Stopped target timers.target. Oct 2 19:39:19.422242 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:39:19.425616 kernel: audit: type=1131 audit(1696275559.422:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.422340 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:39:19.423188 systemd[1]: Stopped target initrd.target. Oct 2 19:39:19.425970 systemd[1]: Stopped target basic.target. Oct 2 19:39:19.426872 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:39:19.427789 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:39:19.428835 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:39:19.429881 systemd[1]: Stopped target remote-fs.target. Oct 2 19:39:19.432105 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:39:19.433181 systemd[1]: Stopped target sysinit.target. Oct 2 19:39:19.434149 systemd[1]: Stopped target local-fs.target. Oct 2 19:39:19.435125 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:39:19.436157 systemd[1]: Stopped target swap.target. Oct 2 19:39:19.437057 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:39:19.437706 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:39:19.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.438799 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:39:19.441638 kernel: audit: type=1131 audit(1696275559.438:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.441676 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:39:19.442309 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:39:19.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:19.443346 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:39:19.445890 kernel: audit: type=1131 audit(1696275559.442:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.443428 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:39:19.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.447028 systemd[1]: Stopped target paths.target. Oct 2 19:39:19.447938 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:39:19.452587 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:39:19.453730 systemd[1]: Stopped target slices.target. Oct 2 19:39:19.454681 systemd[1]: Stopped target sockets.target. Oct 2 19:39:19.455633 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:39:19.456177 systemd[1]: Closed iscsid.socket. Oct 2 19:39:19.457045 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:39:19.457785 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:39:19.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.459006 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:39:19.459630 systemd[1]: Stopped ignition-files.service. Oct 2 19:39:19.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.461469 systemd[1]: Stopping ignition-mount.service... Oct 2 19:39:19.462577 systemd[1]: Stopping iscsiuio.service... Oct 2 19:39:19.464008 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:39:19.464974 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:39:19.465114 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:39:19.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.466864 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:39:19.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.467012 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:39:19.469635 ignition[847]: INFO : Ignition 2.14.0 Oct 2 19:39:19.469635 ignition[847]: INFO : Stage: umount Oct 2 19:39:19.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:19.471232 ignition[847]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:39:19.471232 ignition[847]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:39:19.471232 ignition[847]: INFO : umount: umount passed Oct 2 19:39:19.471232 ignition[847]: INFO : Ignition finished successfully Oct 2 19:39:19.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.469986 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:39:19.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.470091 systemd[1]: Stopped iscsiuio.service. Oct 2 19:39:19.471807 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:39:19.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.471871 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:39:19.472587 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:39:19.472660 systemd[1]: Stopped ignition-mount.service. Oct 2 19:39:19.474202 systemd[1]: Stopped target network.target. Oct 2 19:39:19.475182 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:39:19.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.475211 systemd[1]: Closed iscsiuio.socket. Oct 2 19:39:19.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.475663 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:39:19.475695 systemd[1]: Stopped ignition-disks.service. Oct 2 19:39:19.476683 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:39:19.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.476714 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:39:19.477663 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:39:19.477693 systemd[1]: Stopped ignition-setup.service. Oct 2 19:39:19.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:39:19.478806 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:39:19.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.479919 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:39:19.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.481714 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:39:19.482255 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:39:19.482319 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:39:19.482948 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:39:19.482979 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:39:19.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.483606 systemd-networkd[683]: eth0: DHCPv6 lease lost Oct 2 19:39:19.497000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:39:19.484792 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:39:19.484864 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:39:19.486759 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:39:19.500000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:39:19.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.486784 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:39:19.488348 systemd[1]: Stopping network-cleanup.service... Oct 2 19:39:19.488927 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:39:19.488986 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:39:19.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.490260 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:39:19.490355 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:39:19.491684 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:39:19.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.491721 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:39:19.492983 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:39:19.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.495283 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:39:19.495646 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Oct 2 19:39:19.495715 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:39:19.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.499696 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:39:19.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.499769 systemd[1]: Stopped network-cleanup.service. Oct 2 19:39:19.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.502075 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:39:19.502235 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:39:19.503367 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:39:19.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:19.503407 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:39:19.504489 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:39:19.504530 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:39:19.505670 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:39:19.505707 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:39:19.506689 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:39:19.506721 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:39:19.507784 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:39:19.507816 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:39:19.509370 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:39:19.510720 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:39:19.510781 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:39:19.512420 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:39:19.512470 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:39:19.513081 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:39:19.513133 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:39:19.515280 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:39:19.515645 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:39:19.515707 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:39:19.516511 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:39:19.518146 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:39:19.533553 systemd[1]: Switching root. Oct 2 19:39:19.552910 iscsid[688]: iscsid shutting down. Oct 2 19:39:19.553464 systemd-journald[197]: Journal stopped Oct 2 19:39:23.597820 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). 
Oct 2 19:39:23.597989 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:39:23.598019 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:39:23.598041 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:39:23.598063 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:39:23.598077 kernel: SELinux: policy capability open_perms=1 Oct 2 19:39:23.598108 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:39:23.598124 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:39:23.598142 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:39:23.598162 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:39:23.598177 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:39:23.598190 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:39:23.598206 systemd[1]: Successfully loaded SELinux policy in 51.004ms. Oct 2 19:39:23.598252 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.996ms. Oct 2 19:39:23.598284 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:39:23.598301 systemd[1]: Detected virtualization kvm. Oct 2 19:39:23.598325 systemd[1]: Detected architecture x86-64. Oct 2 19:39:23.598344 systemd[1]: Detected first boot. Oct 2 19:39:23.598364 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:39:23.598388 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:39:23.598410 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:39:23.598441 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:39:23.598459 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:39:23.598495 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:39:23.598515 systemd[1]: Stopped iscsid.service. Oct 2 19:39:23.598667 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:39:23.598700 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:39:23.598724 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:39:23.598739 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:39:23.598759 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:39:23.598773 systemd[1]: Created slice system-getty.slice. Oct 2 19:39:23.598787 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:39:23.598800 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:39:23.598814 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:39:23.598829 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:39:23.598858 systemd[1]: Created slice user.slice. Oct 2 19:39:23.598873 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:39:23.598891 systemd[1]: Started systemd-ask-password-wall.path. 
Oct 2 19:39:23.598906 systemd[1]: Set up automount boot.automount. Oct 2 19:39:23.598919 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:39:23.598938 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:39:23.598953 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:39:23.598967 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:39:23.599030 systemd[1]: Reached target integritysetup.target. Oct 2 19:39:23.599050 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:39:23.599061 systemd[1]: Reached target remote-fs.target. Oct 2 19:39:23.599071 systemd[1]: Reached target slices.target. Oct 2 19:39:23.599081 systemd[1]: Reached target swap.target. Oct 2 19:39:23.599097 systemd[1]: Reached target torcx.target. Oct 2 19:39:23.599107 systemd[1]: Reached target veritysetup.target. Oct 2 19:39:23.599118 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:39:23.599128 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:39:23.599141 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:39:23.599158 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:39:23.599169 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:39:23.599179 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:39:23.599190 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:39:23.599200 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:39:23.599210 systemd[1]: Mounting media.mount... Oct 2 19:39:23.599221 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:39:23.599232 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:39:23.599246 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:39:23.599260 systemd[1]: Mounting tmp.mount... Oct 2 19:39:23.599270 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:39:23.599281 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:39:23.599293 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:39:23.599305 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:39:23.599316 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:39:23.599331 systemd[1]: Starting modprobe@drm.service... Oct 2 19:39:23.599342 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:39:23.599352 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:39:23.599368 systemd[1]: Starting modprobe@loop.service... Oct 2 19:39:23.599379 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:39:23.599393 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:39:23.599403 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:39:23.599413 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:39:23.599424 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:39:23.599434 systemd[1]: Stopped systemd-journald.service. Oct 2 19:39:23.599445 systemd[1]: Starting systemd-journald.service... Oct 2 19:39:23.599455 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:39:23.599471 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:39:23.599481 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:39:23.599492 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:39:23.599502 kernel: loop: module loaded Oct 2 19:39:23.599512 systemd[1]: verity-setup.service: Deactivated successfully. 
Oct 2 19:39:23.599522 systemd[1]: Stopped verity-setup.service. Oct 2 19:39:23.599544 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:39:23.599576 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:39:23.599588 kernel: fuse: init (API version 7.34) Oct 2 19:39:23.599605 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:39:23.599618 systemd[1]: Mounted media.mount. Oct 2 19:39:23.599628 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:39:23.599638 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:39:23.599648 systemd[1]: Mounted tmp.mount. Oct 2 19:39:23.599658 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:39:23.599669 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:39:23.599683 systemd-journald[949]: Journal started Oct 2 19:39:23.599739 systemd-journald[949]: Runtime Journal (/run/log/journal/e2e213a7159f4644931aea20de1f45b9) is 6.0M, max 48.5M, 42.5M free. Oct 2 19:39:19.638000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:39:20.340000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:39:20.340000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:39:20.340000 audit: BPF prog-id=10 op=LOAD Oct 2 19:39:20.340000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:39:20.340000 audit: BPF prog-id=11 op=LOAD Oct 2 19:39:20.340000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:39:23.395000 audit: BPF prog-id=12 op=LOAD Oct 2 19:39:23.395000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:39:23.396000 audit: BPF prog-id=13 op=LOAD Oct 2 19:39:23.398000 audit: BPF prog-id=14 op=LOAD Oct 2 19:39:23.399000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:39:23.400000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:39:23.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.439000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:39:23.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:23.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.561000 audit: BPF prog-id=15 op=LOAD Oct 2 19:39:23.561000 audit: BPF prog-id=16 op=LOAD Oct 2 19:39:23.561000 audit: BPF prog-id=17 op=LOAD Oct 2 19:39:23.561000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:39:23.561000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:39:23.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.595000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:39:23.595000 audit[949]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe4714df90 a2=4000 a3=7ffe4714e02c items=0 ppid=1 pid=949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:23.595000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:39:23.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:20.496978 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:39:23.600634 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:39:23.386967 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:39:20.497405 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:39:23.386984 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:39:20.497421 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:39:23.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.403636 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 19:39:20.497455 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:39:20.497465 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:39:20.497496 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:39:20.497508 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:39:20.497735 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:39:20.497768 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:39:20.497779 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:39:20.498179 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:39:20.498209 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:39:20.498225 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:39:20.498238 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:39:20.498252 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:39:20.498264 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:39:23.063498 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:23Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:39:23.064196 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:23Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:39:23.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.064313 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:23Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:39:23.064544 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:23Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:39:23.064603 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:23Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:39:23.602553 systemd[1]: Started systemd-journald.service. Oct 2 19:39:23.064678 /usr/lib/systemd/system-generators/torcx-generator[880]: time="2023-10-02T19:39:23Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:39:23.602864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:39:23.602990 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:39:23.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.603853 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:39:23.603999 systemd[1]: Finished modprobe@drm.service. Oct 2 19:39:23.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.604759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:39:23.604902 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:39:23.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.605845 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:39:23.606015 systemd[1]: Finished modprobe@fuse.service. 
Oct 2 19:39:23.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.606779 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:39:23.606914 systemd[1]: Finished modprobe@loop.service. Oct 2 19:39:23.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.607881 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:39:23.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.608717 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:39:23.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.609672 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:39:23.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.610806 systemd[1]: Reached target network-pre.target. Oct 2 19:39:23.612656 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:39:23.614657 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:39:23.615497 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:39:23.617835 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:39:23.620152 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:39:23.620832 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:39:23.622012 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:39:23.622782 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:39:23.624514 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:39:23.625401 systemd-journald[949]: Time spent on flushing to /var/log/journal/e2e213a7159f4644931aea20de1f45b9 is 27.844ms for 1078 entries. Oct 2 19:39:23.625401 systemd-journald[949]: System Journal (/var/log/journal/e2e213a7159f4644931aea20de1f45b9) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:39:23.660781 systemd-journald[949]: Received client request to flush runtime journal. Oct 2 19:39:23.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 19:39:23.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.628626 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:39:23.629502 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:39:23.630293 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:39:23.634463 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:39:23.639424 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:39:23.640295 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:39:23.645186 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:39:23.654946 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:39:23.656929 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:39:23.661744 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:39:23.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.662639 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:39:23.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:23.664518 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:39:23.669438 udevadm[984]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:39:23.681150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:39:23.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.587870 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:39:24.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.591240 kernel: kauditd_printk_skb: 86 callbacks suppressed Oct 2 19:39:24.591296 kernel: audit: type=1130 audit(1696275564.588:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:24.591318 kernel: audit: type=1334 audit(1696275564.590:129): prog-id=18 op=LOAD Oct 2 19:39:24.590000 audit: BPF prog-id=18 op=LOAD Oct 2 19:39:24.591000 audit: BPF prog-id=19 op=LOAD Oct 2 19:39:24.592666 kernel: audit: type=1334 audit(1696275564.591:130): prog-id=19 op=LOAD Oct 2 19:39:24.592705 kernel: audit: type=1334 audit(1696275564.591:131): prog-id=7 op=UNLOAD Oct 2 19:39:24.591000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:39:24.593008 systemd[1]: Starting systemd-udevd.service... Oct 2 19:39:24.591000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:39:24.593565 kernel: audit: type=1334 audit(1696275564.591:132): prog-id=8 op=UNLOAD Oct 2 19:39:24.612343 systemd-udevd[988]: Using default interface naming scheme 'v252'. Oct 2 19:39:24.629824 systemd[1]: Started systemd-udevd.service. Oct 2 19:39:24.637989 kernel: audit: type=1130 audit(1696275564.630:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.638067 kernel: audit: type=1334 audit(1696275564.631:134): prog-id=20 op=LOAD Oct 2 19:39:24.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.631000 audit: BPF prog-id=20 op=LOAD Oct 2 19:39:24.632586 systemd[1]: Starting systemd-networkd.service... Oct 2 19:39:24.645938 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:39:24.644000 audit: BPF prog-id=21 op=LOAD Oct 2 19:39:24.644000 audit: BPF prog-id=22 op=LOAD Oct 2 19:39:24.648321 kernel: audit: type=1334 audit(1696275564.644:135): prog-id=21 op=LOAD Oct 2 19:39:24.648351 kernel: audit: type=1334 audit(1696275564.644:136): prog-id=22 op=LOAD Oct 2 19:39:24.648368 kernel: audit: type=1334 audit(1696275564.644:137): prog-id=23 op=LOAD Oct 2 19:39:24.644000 audit: BPF prog-id=23 op=LOAD Oct 2 19:39:24.661188 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:39:24.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.678014 systemd[1]: Started systemd-userdbd.service. Oct 2 19:39:24.696736 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:39:24.709601 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:39:24.721904 (udev-worker)[999]: could not read from '/sys/module/pcc_cpufreq/initstate': No such device Oct 2 19:39:24.722000 audit[989]: AVC avc: denied { confidentiality } for pid=989 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:39:24.728563 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:39:24.740246 systemd-networkd[996]: lo: Link UP Oct 2 19:39:24.740260 systemd-networkd[996]: lo: Gained carrier Oct 2 19:39:24.741440 systemd-networkd[996]: Enumeration completed Oct 2 19:39:24.741700 systemd[1]: Started systemd-networkd.service. Oct 2 19:39:24.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:24.742978 systemd-networkd[996]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:39:24.744813 systemd-networkd[996]: eth0: Link UP Oct 2 19:39:24.744823 systemd-networkd[996]: eth0: Gained carrier Oct 2 19:39:24.722000 audit[989]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559ec1ab96f0 a1=32194 a2=7fe0d92c3bc5 a3=5 items=106 ppid=988 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:24.722000 audit: CWD cwd="/" Oct 2 19:39:24.722000 audit: PATH item=0 name=(null) inode=15449 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=1 name=(null) inode=15450 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=2 name=(null) inode=15449 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=3 name=(null) inode=15451 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=4 name=(null) inode=15449 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=5 name=(null) inode=15452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=6 name=(null) inode=15452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=7 name=(null) inode=15453 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=8 name=(null) inode=15452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=9 name=(null) inode=15454 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=10 name=(null) inode=15452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=11 name=(null) inode=15455 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=12 name=(null) inode=15452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=13 name=(null) inode=15456 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=14 name=(null) inode=15452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=15 name=(null) inode=15457 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=16 name=(null) inode=15449 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=17 name=(null) inode=15458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=18 name=(null) inode=15458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=19 name=(null) inode=15459 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=20 name=(null) inode=15458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=21 name=(null) inode=15460 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=22 name=(null) inode=15458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=23 name=(null) inode=15461 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=24 name=(null) inode=15458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=25 name=(null) inode=15462 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=26 name=(null) inode=15458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=27 name=(null) inode=15463 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=28 name=(null) inode=15449 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=29 name=(null) inode=15464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=30 name=(null) inode=15464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=31 name=(null) inode=15465 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=32 name=(null) inode=15464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=33 name=(null) inode=15466 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=34 name=(null) inode=15464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=35 name=(null) inode=15467 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=36 name=(null) inode=15464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=37 name=(null) inode=15468 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=38 name=(null) inode=15464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=39 name=(null) inode=15469 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=40 name=(null) inode=15449 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=41 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=42 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=43 name=(null) inode=15471 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=44 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=45 name=(null) inode=15472 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=46 name=(null) inode=15470 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=47 name=(null) inode=15473 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=48 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=49 name=(null) inode=15474 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=50 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=51 name=(null) inode=15475 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=52 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=53 name=(null) inode=15476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=54 name=(null) inode=15476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=55 name=(null) inode=15477 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=56 name=(null) inode=15476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=57 name=(null) inode=15478 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=58 name=(null) inode=15476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=59 name=(null) inode=15479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=60 name=(null) inode=15479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=61 name=(null) inode=15480 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=62 name=(null) inode=15479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=63 name=(null) inode=15481 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=64 name=(null) inode=15479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=65 name=(null) inode=15482 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=66 name=(null) inode=15479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=67 name=(null) inode=15483 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=68 name=(null) inode=15479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=69 name=(null) inode=15484 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=70 name=(null) inode=15476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=71 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=72 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=73 name=(null) inode=15486 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=74 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=75 name=(null) inode=15487 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=76 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=77 name=(null) inode=15488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=78 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 
audit: PATH item=79 name=(null) inode=15489 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=80 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=81 name=(null) inode=15490 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=82 name=(null) inode=15476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=83 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=84 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=85 name=(null) inode=15492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=86 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=87 name=(null) inode=15493 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=88 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=89 name=(null) inode=15494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=90 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=91 name=(null) inode=15495 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=92 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=93 name=(null) inode=15496 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=94 name=(null) inode=15476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=95 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=96 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=97 name=(null) inode=15498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=98 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=99 name=(null) inode=15499 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=100 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=101 name=(null) inode=15500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=102 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=103 name=(null) inode=15501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=104 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PATH item=105 name=(null) inode=15502 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:39:24.722000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:39:24.758802 systemd-networkd[996]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:39:24.763586 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 2 19:39:24.766565 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 2 19:39:24.780603 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:39:24.852896 kernel: kvm: Nested Virtualization enabled Oct 2 19:39:24.853113 kernel: SVM: kvm: Nested Paging enabled Oct 2 19:39:24.869564 kernel: EDAC MC: Ver: 3.0.0 Oct 2 19:39:24.888993 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:39:24.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.890859 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:39:24.908060 lvm[1023]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:39:24.934804 systemd[1]: Finished lvm2-activation-early.service. 
Oct 2 19:39:24.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.935577 systemd[1]: Reached target cryptsetup.target. Oct 2 19:39:24.937210 systemd[1]: Starting lvm2-activation.service... Oct 2 19:39:24.942274 lvm[1024]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:39:24.972915 systemd[1]: Finished lvm2-activation.service. Oct 2 19:39:24.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:24.973772 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:39:24.974421 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:39:24.974455 systemd[1]: Reached target local-fs.target. Oct 2 19:39:24.975063 systemd[1]: Reached target machines.target. Oct 2 19:39:24.977156 systemd[1]: Starting ldconfig.service... Oct 2 19:39:24.978116 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:39:24.978183 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:39:24.979430 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:39:24.981032 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:39:24.982984 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:39:24.983779 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:39:24.983844 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:39:24.985203 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:39:24.992726 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1026 (bootctl) Oct 2 19:39:24.994469 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:39:24.995968 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:39:24.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.001811 systemd-tmpfiles[1029]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:39:25.006271 systemd-tmpfiles[1029]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:39:25.008654 systemd-tmpfiles[1029]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:39:25.491063 systemd-fsck[1034]: fsck.fat 4.2 (2021-01-31) Oct 2 19:39:25.491063 systemd-fsck[1034]: /dev/vda1: 789 files, 115069/258078 clusters Oct 2 19:39:25.494918 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:39:25.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:39:25.538369 systemd[1]: Mounting boot.mount... Oct 2 19:39:25.681740 systemd[1]: Mounted boot.mount. Oct 2 19:39:25.817629 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:39:25.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.869937 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:39:25.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.872319 systemd[1]: Starting audit-rules.service... Oct 2 19:39:25.874029 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:39:25.876037 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:39:25.877000 audit: BPF prog-id=24 op=LOAD Oct 2 19:39:25.879210 systemd[1]: Starting systemd-resolved.service... Oct 2 19:39:25.880000 audit: BPF prog-id=25 op=LOAD Oct 2 19:39:25.883840 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:39:25.884785 systemd-networkd[996]: eth0: Gained IPv6LL Oct 2 19:39:25.885994 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:39:25.887371 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:39:25.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.888522 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:39:25.892000 audit[1049]: SYSTEM_BOOT pid=1049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:39:25.896993 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:39:25.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:26.118509 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:39:26.135138 systemd-timesyncd[1047]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 19:39:26.135217 systemd-timesyncd[1047]: Initial clock synchronization to Mon 2023-10-02 19:39:26.297026 UTC. Oct 2 19:39:26.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:26.136369 systemd[1]: Reached target time-set.target. Oct 2 19:39:26.142376 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:39:26.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:26.143824 systemd-resolved[1042]: Positive Trust Anchors: Oct 2 19:39:26.143842 systemd-resolved[1042]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:39:26.143881 systemd-resolved[1042]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:39:26.166000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:39:26.166000 audit[1060]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe4451ca30 a2=420 a3=0 items=0 ppid=1038 pid=1060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:26.166000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:39:26.167749 augenrules[1060]: No rules Oct 2 19:39:26.168395 systemd[1]: Finished audit-rules.service. Oct 2 19:39:26.250164 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:39:26.250964 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:39:26.251992 systemd-resolved[1042]: Defaulting to hostname 'linux'. Oct 2 19:39:26.253953 systemd[1]: Started systemd-resolved.service. Oct 2 19:39:26.254655 systemd[1]: Reached target network.target. Oct 2 19:39:26.255262 systemd[1]: Reached target nss-lookup.target. Oct 2 19:39:26.767596 ldconfig[1025]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:39:26.772037 systemd[1]: Finished ldconfig.service. Oct 2 19:39:26.774459 systemd[1]: Starting systemd-update-done.service... Oct 2 19:39:26.780244 systemd[1]: Finished systemd-update-done.service. Oct 2 19:39:26.781079 systemd[1]: Reached target sysinit.target. Oct 2 19:39:26.781795 systemd[1]: Started motdgen.path. Oct 2 19:39:26.782420 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:39:26.783455 systemd[1]: Started logrotate.timer. Oct 2 19:39:26.784161 systemd[1]: Started mdadm.timer. Oct 2 19:39:26.784732 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:39:26.785420 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:39:26.785466 systemd[1]: Reached target paths.target. Oct 2 19:39:26.786065 systemd[1]: Reached target timers.target. Oct 2 19:39:26.787131 systemd[1]: Listening on dbus.socket. Oct 2 19:39:26.788818 systemd[1]: Starting docker.socket... Oct 2 19:39:26.791735 systemd[1]: Listening on sshd.socket. Oct 2 19:39:26.792386 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:39:26.792759 systemd[1]: Listening on docker.socket. Oct 2 19:39:26.793334 systemd[1]: Reached target sockets.target. Oct 2 19:39:26.793906 systemd[1]: Reached target basic.target. Oct 2 19:39:26.794710 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Oct 2 19:39:26.794734 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 2 19:39:26.795835 systemd[1]: Starting containerd.service...
Oct 2 19:39:26.797491 systemd[1]: Starting dbus.service...
Oct 2 19:39:26.798873 systemd[1]: Starting enable-oem-cloudinit.service...
Oct 2 19:39:26.800632 systemd[1]: Starting extend-filesystems.service...
Oct 2 19:39:26.801253 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Oct 2 19:39:26.803360 systemd[1]: Starting motdgen.service...
Oct 2 19:39:26.805729 jq[1070]: false
Oct 2 19:39:26.807702 systemd[1]: Starting prepare-cni-plugins.service...
Oct 2 19:39:26.810706 systemd[1]: Starting prepare-critools.service...
Oct 2 19:39:26.812779 systemd[1]: Starting ssh-key-proc-cmdline.service...
Oct 2 19:39:26.814163 extend-filesystems[1071]: Found sr0
Oct 2 19:39:26.814994 extend-filesystems[1071]: Found vda
Oct 2 19:39:26.815034 systemd[1]: Starting sshd-keygen.service...
Oct 2 19:39:26.815446 extend-filesystems[1071]: Found vda1
Oct 2 19:39:26.817324 extend-filesystems[1071]: Found vda2
Oct 2 19:39:26.817324 extend-filesystems[1071]: Found vda3
Oct 2 19:39:26.817324 extend-filesystems[1071]: Found usr
Oct 2 19:39:26.817324 extend-filesystems[1071]: Found vda4
Oct 2 19:39:26.817324 extend-filesystems[1071]: Found vda6
Oct 2 19:39:26.817324 extend-filesystems[1071]: Found vda7
Oct 2 19:39:26.817324 extend-filesystems[1071]: Found vda9
Oct 2 19:39:26.817324 extend-filesystems[1071]: Checking size of /dev/vda9
Oct 2 19:39:26.819736 systemd[1]: Starting systemd-logind.service...
Oct 2 19:39:26.820221 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:39:26.820290 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 2 19:39:26.820800 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 2 19:39:26.821580 systemd[1]: Starting update-engine.service...
Oct 2 19:39:26.824458 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Oct 2 19:39:26.827056 jq[1086]: true
Oct 2 19:39:26.827081 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 2 19:39:26.827919 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Oct 2 19:39:26.829943 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 2 19:39:26.830152 systemd[1]: Finished ssh-key-proc-cmdline.service.
Oct 2 19:39:26.837675 jq[1095]: true
Oct 2 19:39:26.837776 extend-filesystems[1071]: Old size kept for /dev/vda9
Oct 2 19:39:26.837863 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 2 19:39:26.838683 systemd[1]: Finished extend-filesystems.service.
Oct 2 19:39:26.874428 systemd[1]: motdgen.service: Deactivated successfully.
Oct 2 19:39:26.875132 systemd[1]: Finished motdgen.service.
Oct 2 19:39:26.891347 systemd-logind[1082]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 2 19:39:26.891371 systemd-logind[1082]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 2 19:39:26.891579 systemd-logind[1082]: New seat seat0.
Oct 2 19:39:26.900402 env[1096]: time="2023-10-02T19:39:26.900327024Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:39:26.915951 dbus-daemon[1069]: [system] SELinux support is enabled Oct 2 19:39:26.916519 systemd[1]: Started dbus.service. Oct 2 19:39:26.918822 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:39:26.918846 systemd[1]: Reached target system-config.target. Oct 2 19:39:26.919572 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:39:26.919589 systemd[1]: Reached target user-config.target. Oct 2 19:39:26.920271 systemd[1]: Started systemd-logind.service. Oct 2 19:39:26.921127 dbus-daemon[1069]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 2 19:39:26.922481 env[1096]: time="2023-10-02T19:39:26.922443219Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:39:26.922763 env[1096]: time="2023-10-02T19:39:26.922742240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:26.924229 env[1096]: time="2023-10-02T19:39:26.924156873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:39:26.924229 env[1096]: time="2023-10-02T19:39:26.924197389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:26.924551 env[1096]: time="2023-10-02T19:39:26.924440034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:39:26.924551 env[1096]: time="2023-10-02T19:39:26.924460302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:26.924551 env[1096]: time="2023-10-02T19:39:26.924473236Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:39:26.924551 env[1096]: time="2023-10-02T19:39:26.924482574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:26.924668 env[1096]: time="2023-10-02T19:39:26.924603621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:26.925037 env[1096]: time="2023-10-02T19:39:26.924876422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:39:26.925037 env[1096]: time="2023-10-02T19:39:26.925018349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:39:26.925037 env[1096]: time="2023-10-02T19:39:26.925036433Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:39:26.925136 env[1096]: time="2023-10-02T19:39:26.925090985Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:39:26.925136 env[1096]: time="2023-10-02T19:39:26.925107877Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:39:26.928329 tar[1092]: ./ Oct 2 19:39:26.928329 tar[1092]: ./macvlan Oct 2 19:39:26.928989 tar[1093]: crictl Oct 2 19:39:26.978590 tar[1092]: ./static Oct 2 19:39:27.000626 tar[1092]: ./vlan Oct 2 19:39:27.030850 tar[1092]: ./portmap Oct 2 19:39:27.064442 tar[1092]: ./host-local Oct 2 19:39:27.149415 tar[1092]: ./vrf Oct 2 19:39:27.181282 tar[1092]: ./bridge Oct 2 19:39:27.228942 tar[1092]: ./tuning Oct 2 19:39:27.247269 systemd[1]: Finished prepare-critools.service. Oct 2 19:39:27.254548 update_engine[1083]: I1002 19:39:27.254145 1083 main.cc:92] Flatcar Update Engine starting Oct 2 19:39:27.257289 systemd[1]: Started update-engine.service. Oct 2 19:39:27.258132 tar[1092]: ./firewall Oct 2 19:39:27.259878 systemd[1]: Started locksmithd.service. Oct 2 19:39:27.261537 update_engine[1083]: I1002 19:39:27.261500 1083 update_check_scheduler.cc:74] Next update check in 9m49s Oct 2 19:39:27.297000 tar[1092]: ./host-device Oct 2 19:39:27.325891 tar[1092]: ./sbr Oct 2 19:39:27.379810 tar[1092]: ./loopback Oct 2 19:39:27.404890 tar[1092]: ./dhcp Oct 2 19:39:27.460726 sshd_keygen[1100]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:39:27.506210 tar[1092]: ./ptp Oct 2 19:39:27.512180 systemd[1]: Finished sshd-keygen.service. Oct 2 19:39:27.514981 systemd[1]: Starting issuegen.service... Oct 2 19:39:27.518316 env[1096]: time="2023-10-02T19:39:27.518237355Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:39:27.518316 env[1096]: time="2023-10-02T19:39:27.518308581Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:39:27.518570 env[1096]: time="2023-10-02T19:39:27.518326345Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:39:27.518570 env[1096]: time="2023-10-02T19:39:27.518425078Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.518570 env[1096]: time="2023-10-02T19:39:27.518511871Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.518570 env[1096]: time="2023-10-02T19:39:27.518542097Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.518570 env[1096]: time="2023-10-02T19:39:27.518557736Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.518570 env[1096]: time="2023-10-02T19:39:27.518571228Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.518702 env[1096]: time="2023-10-02T19:39:27.518590813Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Oct 2 19:39:27.518702 env[1096]: time="2023-10-02T19:39:27.518607210Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.518702 env[1096]: time="2023-10-02T19:39:27.518624709Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.518702 env[1096]: time="2023-10-02T19:39:27.518636556Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:39:27.518785 env[1096]: time="2023-10-02T19:39:27.518761149Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:39:27.518867 env[1096]: time="2023-10-02T19:39:27.518847625Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:39:27.519135 env[1096]: time="2023-10-02T19:39:27.519117133Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:39:27.519197 env[1096]: time="2023-10-02T19:39:27.519145397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.519197 env[1096]: time="2023-10-02T19:39:27.519160412Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:39:27.519240 env[1096]: time="2023-10-02T19:39:27.519231034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.519262 env[1096]: time="2023-10-02T19:39:27.519246991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.519262 env[1096]: time="2023-10-02T19:39:27.519259584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.519301 env[1096]: time="2023-10-02T19:39:27.519269878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.519301 env[1096]: time="2023-10-02T19:39:27.519285005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.519301 env[1096]: time="2023-10-02T19:39:27.519296168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.519364 env[1096]: time="2023-10-02T19:39:27.519310754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.519364 env[1096]: time="2023-10-02T19:39:27.519322233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.519364 env[1096]: time="2023-10-02T19:39:27.519334234Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:39:27.519473 env[1096]: time="2023-10-02T19:39:27.519453982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.519473 env[1096]: time="2023-10-02T19:39:27.519472259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.519561 env[1096]: time="2023-10-02T19:39:27.519483738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Oct 2 19:39:27.519561 env[1096]: time="2023-10-02T19:39:27.519494736Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:39:27.519561 env[1096]: time="2023-10-02T19:39:27.519516079Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:39:27.519561 env[1096]: time="2023-10-02T19:39:27.519526914Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:39:27.519655 env[1096]: time="2023-10-02T19:39:27.519550782Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:39:27.519655 env[1096]: time="2023-10-02T19:39:27.519601799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:39:27.519876 env[1096]: time="2023-10-02T19:39:27.519825309Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:39:27.519876 env[1096]: time="2023-10-02T19:39:27.519882459Z" level=info msg="Connect containerd service" Oct 2 19:39:27.524819 env[1096]: time="2023-10-02T19:39:27.519919307Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:39:27.524819 env[1096]: time="2023-10-02T19:39:27.520574789Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network 
for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:39:27.524819 env[1096]: time="2023-10-02T19:39:27.520834249Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:39:27.524819 env[1096]: time="2023-10-02T19:39:27.520868696Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:39:27.524819 env[1096]: time="2023-10-02T19:39:27.520917189Z" level=info msg="containerd successfully booted in 0.621508s" Oct 2 19:39:27.524819 env[1096]: time="2023-10-02T19:39:27.522873035Z" level=info msg="Start subscribing containerd event" Oct 2 19:39:27.524819 env[1096]: time="2023-10-02T19:39:27.523050086Z" level=info msg="Start recovering state" Oct 2 19:39:27.524819 env[1096]: time="2023-10-02T19:39:27.523137748Z" level=info msg="Start event monitor" Oct 2 19:39:27.524819 env[1096]: time="2023-10-02T19:39:27.523167687Z" level=info msg="Start snapshots syncer" Oct 2 19:39:27.524819 env[1096]: time="2023-10-02T19:39:27.523182488Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:39:27.524819 env[1096]: time="2023-10-02T19:39:27.523190329Z" level=info msg="Start streaming server" Oct 2 19:39:27.521009 systemd[1]: Started containerd.service. Oct 2 19:39:27.524407 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:39:27.524641 systemd[1]: Finished issuegen.service. Oct 2 19:39:27.526842 bash[1120]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:39:27.527624 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:39:27.528824 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:39:27.534328 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:39:27.537344 systemd[1]: Started getty@tty1.service. Oct 2 19:39:27.539863 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:39:27.541084 systemd[1]: Reached target getty.target. Oct 2 19:39:27.545640 tar[1092]: ./ipvlan Oct 2 19:39:27.571585 locksmithd[1128]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:39:27.578597 tar[1092]: ./bandwidth Oct 2 19:39:27.647653 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:39:27.655246 systemd[1]: Reached target multi-user.target. Oct 2 19:39:27.660831 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:39:27.677226 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:39:27.677472 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:39:27.679046 systemd[1]: Startup finished in 739ms (kernel) + 6.735s (initrd) + 8.099s (userspace) = 15.575s. Oct 2 19:39:29.462702 systemd[1]: Created slice system-sshd.slice. Oct 2 19:39:29.463974 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:51660.service. Oct 2 19:39:29.620528 sshd[1153]: Accepted publickey for core from 10.0.0.1 port 51660 ssh2: RSA SHA256:NgATMQDnUD9aUNjhhJmB2GfJkyCVZ14bUDm9dIaEVw0 Oct 2 19:39:29.622012 sshd[1153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:29.647428 systemd-logind[1082]: New session 1 of user core. Oct 2 19:39:29.648267 systemd[1]: Created slice user-500.slice. Oct 2 19:39:29.649317 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:39:29.657048 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:39:29.658353 systemd[1]: Starting user@500.service... 
Oct 2 19:39:29.661470 (systemd)[1156]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:39:29.729027 systemd[1156]: Queued start job for default target default.target.
Oct 2 19:39:29.729533 systemd[1156]: Reached target paths.target.
Oct 2 19:39:29.729581 systemd[1156]: Reached target sockets.target.
Oct 2 19:39:29.729599 systemd[1156]: Reached target timers.target.
Oct 2 19:39:29.729613 systemd[1156]: Reached target basic.target.
Oct 2 19:39:29.729663 systemd[1156]: Reached target default.target.
Oct 2 19:39:29.729696 systemd[1156]: Startup finished in 62ms.
Oct 2 19:39:29.729762 systemd[1]: Started user@500.service.
Oct 2 19:39:29.730808 systemd[1]: Started session-1.scope.
Oct 2 19:39:29.803680 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:51670.service.
Oct 2 19:39:29.897672 sshd[1165]: Accepted publickey for core from 10.0.0.1 port 51670 ssh2: RSA SHA256:NgATMQDnUD9aUNjhhJmB2GfJkyCVZ14bUDm9dIaEVw0
Oct 2 19:39:29.899528 sshd[1165]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:39:29.908772 systemd-logind[1082]: New session 2 of user core.
Oct 2 19:39:29.909876 systemd[1]: Started session-2.scope.
Oct 2 19:39:30.014177 sshd[1165]: pam_unix(sshd:session): session closed for user core
Oct 2 19:39:30.020005 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:51670.service: Deactivated successfully.
Oct 2 19:39:30.020749 systemd[1]: session-2.scope: Deactivated successfully.
Oct 2 19:39:30.023178 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:51680.service.
Oct 2 19:39:30.024830 systemd-logind[1082]: Session 2 logged out. Waiting for processes to exit.
Oct 2 19:39:30.026795 systemd-logind[1082]: Removed session 2.
Oct 2 19:39:30.077226 sshd[1171]: Accepted publickey for core from 10.0.0.1 port 51680 ssh2: RSA SHA256:NgATMQDnUD9aUNjhhJmB2GfJkyCVZ14bUDm9dIaEVw0
Oct 2 19:39:30.081289 sshd[1171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:39:30.094750 systemd-logind[1082]: New session 3 of user core.
Oct 2 19:39:30.095325 systemd[1]: Started session-3.scope.
Oct 2 19:39:30.172320 sshd[1171]: pam_unix(sshd:session): session closed for user core
Oct 2 19:39:30.179132 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:51692.service.
Oct 2 19:39:30.183385 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:51680.service: Deactivated successfully.
Oct 2 19:39:30.186419 systemd[1]: session-3.scope: Deactivated successfully.
Oct 2 19:39:30.188285 systemd-logind[1082]: Session 3 logged out. Waiting for processes to exit.
Oct 2 19:39:30.190688 systemd-logind[1082]: Removed session 3.
Oct 2 19:39:30.236601 sshd[1176]: Accepted publickey for core from 10.0.0.1 port 51692 ssh2: RSA SHA256:NgATMQDnUD9aUNjhhJmB2GfJkyCVZ14bUDm9dIaEVw0
Oct 2 19:39:30.239458 sshd[1176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:39:30.256872 systemd-logind[1082]: New session 4 of user core.
Oct 2 19:39:30.260958 systemd[1]: Started session-4.scope.
Oct 2 19:39:30.354990 sshd[1176]: pam_unix(sshd:session): session closed for user core
Oct 2 19:39:30.363838 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:51698.service.
Oct 2 19:39:30.364678 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:51692.service: Deactivated successfully.
Oct 2 19:39:30.365450 systemd[1]: session-4.scope: Deactivated successfully.
Oct 2 19:39:30.366677 systemd-logind[1082]: Session 4 logged out. Waiting for processes to exit.
Oct 2 19:39:30.368059 systemd-logind[1082]: Removed session 4.
Oct 2 19:39:30.402807 sshd[1182]: Accepted publickey for core from 10.0.0.1 port 51698 ssh2: RSA SHA256:NgATMQDnUD9aUNjhhJmB2GfJkyCVZ14bUDm9dIaEVw0 Oct 2 19:39:30.405166 sshd[1182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:30.412368 systemd-logind[1082]: New session 5 of user core. Oct 2 19:39:30.413252 systemd[1]: Started session-5.scope. Oct 2 19:39:30.595136 sudo[1186]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:39:30.595364 sudo[1186]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:30.636253 dbus-daemon[1069]: \xd0\xed\xb7(gU: received setenforce notice (enforcing=1383501392) Oct 2 19:39:30.638945 sudo[1186]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:30.650895 sshd[1182]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:30.672635 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:51700.service. Oct 2 19:39:30.682147 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:51698.service: Deactivated successfully. Oct 2 19:39:30.683235 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:39:30.691934 systemd-logind[1082]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:39:30.694265 systemd-logind[1082]: Removed session 5. Oct 2 19:39:30.710034 sshd[1189]: Accepted publickey for core from 10.0.0.1 port 51700 ssh2: RSA SHA256:NgATMQDnUD9aUNjhhJmB2GfJkyCVZ14bUDm9dIaEVw0 Oct 2 19:39:30.711403 sshd[1189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:30.720327 systemd-logind[1082]: New session 6 of user core. Oct 2 19:39:30.721029 systemd[1]: Started session-6.scope. Oct 2 19:39:30.805868 sudo[1194]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:39:30.812812 sudo[1194]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:30.838219 sudo[1194]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:30.844422 sudo[1193]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:39:30.844752 sudo[1193]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:30.875522 systemd[1]: Stopping audit-rules.service... Oct 2 19:39:30.885000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:39:30.887257 auditctl[1197]: No rules Oct 2 19:39:30.887368 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:39:30.887604 systemd[1]: Stopped audit-rules.service. 
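The PROCTITLE recorded just below decodes to "/sbin/auditctl -D", which is what stopping audit-rules ran after the two rule files were removed; starting the unit then runs augenrules, which finds the now-empty rules.d directory and also reports "No rules". A rough manual equivalent of that stop/start cycle, as a sketch:

# flush all loaded audit rules (the recorded /sbin/auditctl -D)
sudo auditctl -D
# recompile /etc/audit/rules.d/*.rules into /etc/audit/audit.rules and load the result
sudo augenrules --load
# list what is now loaded (here it would print "No rules")
sudo auditctl -l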
Oct 2 19:39:30.889162 kernel: kauditd_printk_skb: 129 callbacks suppressed Oct 2 19:39:30.889226 kernel: audit: type=1305 audit(1696275570.885:156): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:39:30.890842 kernel: audit: type=1300 audit(1696275570.885:156): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd1ccf45d0 a2=420 a3=0 items=0 ppid=1 pid=1197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:30.885000 audit[1197]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd1ccf45d0 a2=420 a3=0 items=0 ppid=1 pid=1197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:30.899082 kernel: audit: type=1327 audit(1696275570.885:156): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:39:30.885000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:39:30.896172 systemd[1]: Starting audit-rules.service... Oct 2 19:39:30.906265 kernel: audit: type=1131 audit(1696275570.885:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:30.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:30.932029 augenrules[1214]: No rules Oct 2 19:39:30.933262 systemd[1]: Finished audit-rules.service. Oct 2 19:39:30.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:30.936398 sudo[1193]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:30.935000 audit[1193]: USER_END pid=1193 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:30.944773 kernel: audit: type=1130 audit(1696275570.933:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:30.944861 kernel: audit: type=1106 audit(1696275570.935:159): pid=1193 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:30.945124 sshd[1189]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:30.935000 audit[1193]: CRED_DISP pid=1193 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:30.949999 kernel: audit: type=1104 audit(1696275570.935:160): pid=1193 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:30.949000 audit[1189]: USER_END pid=1189 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:39:30.949000 audit[1189]: CRED_DISP pid=1189 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:39:30.955813 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:51716.service. Oct 2 19:39:30.956519 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:51700.service: Deactivated successfully. Oct 2 19:39:30.957437 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:39:30.969218 kernel: audit: type=1106 audit(1696275570.949:161): pid=1189 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:39:30.969361 kernel: audit: type=1104 audit(1696275570.949:162): pid=1189 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:39:30.969442 kernel: audit: type=1130 audit(1696275570.955:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:51716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:30.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:51716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:30.959574 systemd-logind[1082]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:39:30.960841 systemd-logind[1082]: Removed session 6. Oct 2 19:39:30.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.12:22-10.0.0.1:51700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:31.027000 audit[1219]: USER_ACCT pid=1219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:39:31.030478 sshd[1219]: Accepted publickey for core from 10.0.0.1 port 51716 ssh2: RSA SHA256:NgATMQDnUD9aUNjhhJmB2GfJkyCVZ14bUDm9dIaEVw0 Oct 2 19:39:31.030000 audit[1219]: CRED_ACQ pid=1219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:39:31.031000 audit[1219]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf01d38d0 a2=3 a3=0 items=0 ppid=1 pid=1219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:31.031000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:39:31.032750 sshd[1219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:39:31.046777 systemd-logind[1082]: New session 7 of user core. Oct 2 19:39:31.047953 systemd[1]: Started session-7.scope. Oct 2 19:39:31.057000 audit[1219]: USER_START pid=1219 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:39:31.064000 audit[1222]: CRED_ACQ pid=1222 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:39:31.123000 audit[1223]: USER_ACCT pid=1223 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:31.124000 audit[1223]: CRED_REFR pid=1223 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:31.124574 sudo[1223]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:39:31.124807 sudo[1223]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:39:31.136000 audit[1223]: USER_START pid=1223 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:31.746488 systemd[1]: Reloading. 
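The "Reloading." record above marks a systemd manager reload (typically a systemctl daemon-reload, here presumably triggered by the install.sh run just before it). Every reload re-runs the generators, which is why the torcx messages and the CPUShares=/MemoryLimit= deprecation warnings appear just below, and again after the second reload further down; the docker.socket warning likewise asks for ListenStream=/run/docker.sock instead of the /var/run path. A hedged drop-in sketch of the directives the warnings ask for (the values are placeholders, since locksmithd.service's actual limits are not shown in this log):

sudo mkdir -p /etc/systemd/system/locksmithd.service.d
sudo tee /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf >/dev/null <<'EOF'
[Service]
CPUWeight=100
MemoryMax=infinity
EOF
sudo systemctl daemon-reload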
Oct 2 19:39:31.811706 /usr/lib/systemd/system-generators/torcx-generator[1253]: time="2023-10-02T19:39:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:39:31.811737 /usr/lib/systemd/system-generators/torcx-generator[1253]: time="2023-10-02T19:39:31Z" level=info msg="torcx already run" Oct 2 19:39:31.913838 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:39:31.913860 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:39:31.942527 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit: BPF prog-id=31 op=LOAD Oct 2 19:39:32.023000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit: BPF prog-id=32 op=LOAD Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.023000 audit: BPF prog-id=33 
op=LOAD Oct 2 19:39:32.023000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:39:32.023000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:39:32.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.024000 audit: BPF prog-id=34 op=LOAD Oct 2 19:39:32.024000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:39:32.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:39:32.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.025000 audit: BPF prog-id=35 op=LOAD Oct 2 19:39:32.025000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit: BPF prog-id=36 op=LOAD Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.028000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.028000 audit: BPF prog-id=37 op=LOAD Oct 2 19:39:32.028000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:39:32.028000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:39:32.029000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.029000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.029000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.029000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.029000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.029000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.029000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.029000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.029000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.029000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.029000 audit: BPF prog-id=38 op=LOAD Oct 2 19:39:32.029000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:39:32.031000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.031000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.031000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.031000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.031000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.031000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.031000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.031000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.031000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit: BPF prog-id=39 op=LOAD Oct 2 19:39:32.032000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit: BPF prog-id=40 op=LOAD Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.032000 audit: BPF prog-id=41 op=LOAD Oct 2 19:39:32.032000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:39:32.032000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
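The AVC records in this run all deny the same two capability2 permissions, bpf and perfmon, to systemd (running in the kernel_t domain on this policy) while it re-attaches its per-unit BPF programs during the reload; the paired "BPF prog-id=... op=LOAD/UNLOAD" entries show each old program being replaced, so the loads evidently still succeed and the denials look like noise from the newer CAP_BPF/CAP_PERFMON checks rather than hard failures. If one did want to silence them with a local module, the allow rule audit2allow would derive is roughly the following (hypothetical module name; whether granting these capabilities is appropriate is a separate policy decision):

# hypothetical local policy module
cat > systemd_bpf.te <<'EOF'
module systemd_bpf 1.0;

require {
    type kernel_t;
    class capability2 { bpf perfmon };
}

allow kernel_t self:capability2 { bpf perfmon };
EOF
checkmodule -M -m -o systemd_bpf.mod systemd_bpf.te
semodule_package -o systemd_bpf.pp -m systemd_bpf.mod
sudo semodule -i systemd_bpf.pp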
Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit: BPF prog-id=42 op=LOAD Oct 2 19:39:32.033000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit: BPF prog-id=43 op=LOAD Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.033000 audit: BPF prog-id=44 op=LOAD Oct 2 19:39:32.033000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:39:32.033000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:39:32.034000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.034000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.034000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.034000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.034000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.034000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.034000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.034000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.034000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:32.035000 audit: BPF prog-id=45 op=LOAD Oct 2 19:39:32.035000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:39:32.045277 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:39:32.052051 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:39:32.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:32.052716 systemd[1]: Reached target network-online.target. Oct 2 19:39:32.054220 systemd[1]: Started kubelet.service. Oct 2 19:39:32.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:32.066182 systemd[1]: Starting coreos-metadata.service... Oct 2 19:39:32.074422 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 2 19:39:32.074663 systemd[1]: Finished coreos-metadata.service. Oct 2 19:39:32.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:32.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:32.113531 kubelet[1294]: E1002 19:39:32.113455 1294 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 19:39:32.115689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:39:32.115828 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:39:32.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:39:32.362890 systemd[1]: Stopped kubelet.service. Oct 2 19:39:32.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:32.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:32.378998 systemd[1]: Reloading. 
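The kubelet failure above is the usual pre-bootstrap state: /var/lib/kubelet/config.yaml is normally generated rather than hand-written (for example by kubeadm init or kubeadm join on kubeadm-managed nodes), and the unit keeps exiting with status 1 until that file exists. Purely to illustrate the file's shape, a minimal KubeletConfiguration with typical defaults (hypothetical values, not read from this host):

# hypothetical stand-in; on a kubeadm-managed node this file is generated during join
sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook
cgroupDriver: systemd
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
EOF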
Oct 2 19:39:32.458343 /usr/lib/systemd/system-generators/torcx-generator[1362]: time="2023-10-02T19:39:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:39:32.458369 /usr/lib/systemd/system-generators/torcx-generator[1362]: time="2023-10-02T19:39:32Z" level=info msg="torcx already run" Oct 2 19:39:33.391228 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:39:33.391262 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:39:33.430912 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:39:33.535000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.535000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.535000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.535000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.535000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.535000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.535000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.535000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.535000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.536000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.536000 audit: BPF prog-id=46 op=LOAD Oct 2 19:39:33.536000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:39:33.536000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.536000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.536000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.537000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.537000 audit: BPF prog-id=47 op=LOAD Oct 2 19:39:33.537000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.537000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.537000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.537000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.537000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.537000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.537000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.537000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.537000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.537000 audit: BPF prog-id=48 
op=LOAD Oct 2 19:39:33.537000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:39:33.537000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit: BPF prog-id=49 op=LOAD Oct 2 19:39:33.538000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:39:33.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.538000 audit: BPF prog-id=50 op=LOAD Oct 2 19:39:33.538000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit: BPF prog-id=51 op=LOAD Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.540000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.541000 audit: BPF prog-id=52 op=LOAD Oct 2 19:39:33.541000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:39:33.541000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:39:33.542000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.542000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.542000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.542000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.542000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.542000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.542000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.542000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.542000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.542000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.542000 audit: BPF prog-id=53 op=LOAD Oct 2 19:39:33.542000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:39:33.545000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.546000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.546000 audit: BPF prog-id=54 op=LOAD Oct 2 19:39:33.546000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:39:33.546000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.546000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.546000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.546000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.546000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.546000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.546000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:39:33.546000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.547000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.547000 audit: BPF prog-id=55 op=LOAD Oct 2 19:39:33.547000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.547000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.547000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.547000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.547000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.547000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.547000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.547000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.547000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.547000 audit: BPF prog-id=56 op=LOAD Oct 2 19:39:33.547000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:39:33.547000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:39:33.548000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.548000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.548000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.548000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.548000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
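The capability2 AVC records above and below follow one fixed layout: the denied permission in braces, then pid, comm, the capability number, the source and target SELinux contexts, the object class, and the permissive flag; capability=38 lines up with { perfmon } (CAP_PERFMON) and capability=39 with { bpf } (CAP_BPF). A minimal Go sketch for pulling those fields out of one such record; the regular expression and struct of the output are assumptions written against the lines above, not tooling referenced by this log:

package main

import (
	"fmt"
	"regexp"
)

// avcRe matches the capability2 AVC denial records seen in this log, e.g.
//   audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 ... tclass=capability2 permissive=0
var avcRe = regexp.MustCompile(`AVC avc: denied \{ (\w+) \} for pid=(\d+) comm="([^"]+)" capability=(\d+) .* tclass=(\w+) permissive=(\d)`)

func main() {
	line := `audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0`

	m := avcRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no AVC record found")
		return
	}
	fmt.Printf("permission=%s pid=%s comm=%s capability=%s tclass=%s permissive=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
	// Capability numbers appearing in this log: 33=CAP_MAC_ADMIN, 38=CAP_PERFMON, 39=CAP_BPF.
}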
Oct 2 19:39:33.548000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.548000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.548000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.548000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit: BPF prog-id=57 op=LOAD Oct 2 19:39:33.549000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit: BPF prog-id=58 op=LOAD Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.550000 audit: BPF prog-id=59 op=LOAD Oct 2 19:39:33.550000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:39:33.550000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:39:33.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.551000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:33.551000 audit: BPF prog-id=60 op=LOAD Oct 2 19:39:33.551000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:39:33.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:33.571521 systemd[1]: Started kubelet.service. Oct 2 19:39:33.621287 kubelet[1404]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:39:33.621287 kubelet[1404]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:39:33.621287 kubelet[1404]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:39:33.621690 kubelet[1404]: I1002 19:39:33.621314 1404 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:39:33.622578 kubelet[1404]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:39:33.622578 kubelet[1404]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:39:33.622578 kubelet[1404]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:39:33.980002 kubelet[1404]: I1002 19:39:33.979960 1404 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:39:33.980211 kubelet[1404]: I1002 19:39:33.980164 1404 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:39:33.980472 kubelet[1404]: I1002 19:39:33.980458 1404 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:39:34.021955 kubelet[1404]: I1002 19:39:34.021879 1404 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:39:34.026654 kubelet[1404]: I1002 19:39:34.026623 1404 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:39:34.026930 kubelet[1404]: I1002 19:39:34.026906 1404 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:39:34.026993 kubelet[1404]: I1002 19:39:34.026983 1404 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:39:34.027101 kubelet[1404]: I1002 19:39:34.027008 1404 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:39:34.027101 kubelet[1404]: I1002 19:39:34.027018 1404 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:39:34.027189 kubelet[1404]: I1002 19:39:34.027174 1404 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:39:34.030547 kubelet[1404]: I1002 19:39:34.030517 1404 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:39:34.030596 kubelet[1404]: I1002 19:39:34.030572 1404 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:39:34.030619 kubelet[1404]: I1002 19:39:34.030608 1404 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:39:34.030650 kubelet[1404]: I1002 19:39:34.030633 1404 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:39:34.030740 kubelet[1404]: E1002 19:39:34.030723 1404 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:34.030766 kubelet[1404]: E1002 19:39:34.030760 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:34.031602 kubelet[1404]: I1002 19:39:34.031529 1404 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:39:34.031891 kubelet[1404]: W1002 19:39:34.031875 1404 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
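The container_manager_linux dump above compresses the kubelet's hard-eviction thresholds into a single line; restated as a small Go table, with the values copied from that dump (illustrative only, not a configuration file):

package main

import "fmt"

// Hard-eviction thresholds as printed in the nodeConfig dump above
// (eviction signal -> threshold at which the kubelet starts evicting).
var hardEvictionThresholds = map[string]string{
	"imagefs.available": "< 15%",
	"memory.available":  "< 100Mi",
	"nodefs.available":  "< 10%",
	"nodefs.inodesFree": "< 5%",
}

func main() {
	for signal, threshold := range hardEvictionThresholds {
		fmt.Printf("%-20s %s\n", signal, threshold)
	}
}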
Oct 2 19:39:34.032267 kubelet[1404]: I1002 19:39:34.032241 1404 server.go:1175] "Started kubelet" Oct 2 19:39:34.032529 kubelet[1404]: I1002 19:39:34.032494 1404 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:39:34.034000 audit[1404]: AVC avc: denied { mac_admin } for pid=1404 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:34.034000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:39:34.034000 audit[1404]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c2f1a0 a1=c000fa0390 a2=c000c2f170 a3=25 items=0 ppid=1 pid=1404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.034000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:39:34.034000 audit[1404]: AVC avc: denied { mac_admin } for pid=1404 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:34.034000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:39:34.034000 audit[1404]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00022b320 a1=c000fa03a8 a2=c000c2f230 a3=25 items=0 ppid=1 pid=1404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.034000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:39:34.035472 kubelet[1404]: I1002 19:39:34.035055 1404 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:39:34.035472 kubelet[1404]: I1002 19:39:34.035096 1404 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:39:34.035472 kubelet[1404]: I1002 19:39:34.035198 1404 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:39:34.035742 kubelet[1404]: E1002 19:39:34.035717 1404 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:39:34.035840 kubelet[1404]: E1002 19:39:34.035825 1404 kubelet.go:1317] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:39:34.041890 kubelet[1404]: W1002 19:39:34.041835 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:34.041890 kubelet[1404]: E1002 19:39:34.041896 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:34.042000 kubelet[1404]: W1002 19:39:34.041926 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:34.042000 kubelet[1404]: E1002 19:39:34.041937 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:34.042077 kubelet[1404]: E1002 19:39:34.041977 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b75cc1fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 32220669, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 32220669, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
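The proctitle= fields in the PROCTITLE audit records above (and in the iptables records further down) are the process command line, hex-encoded with NUL bytes separating the arguments. A short Go sketch that turns one such blob back into argv; the helper name is made up here, only the encoding is taken from the log:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle converts an audit PROCTITLE hex blob into its argv:
// the raw value is the process command line with NUL separators.
func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	// proctitle value taken from one of the iptables audit records further down in this log.
	const p = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"
	argv, err := decodeProctitle(p)
	if err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Println(strings.Join(argv, " "))
	// Prints: iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle
}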
Oct 2 19:39:34.042244 kubelet[1404]: I1002 19:39:34.042218 1404 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:39:34.042348 kubelet[1404]: I1002 19:39:34.042325 1404 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:39:34.043115 kubelet[1404]: E1002 19:39:34.043029 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b7938907", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 35810567, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 35810567, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:34.043637 kubelet[1404]: E1002 19:39:34.043617 1404 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:34.043794 kubelet[1404]: I1002 19:39:34.043718 1404 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:39:34.044393 kubelet[1404]: W1002 19:39:34.044341 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:34.044393 kubelet[1404]: E1002 19:39:34.044393 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:34.044516 kubelet[1404]: E1002 19:39:34.043811 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:34.065456 kubelet[1404]: I1002 19:39:34.065418 1404 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:39:34.065621 kubelet[1404]: I1002 19:39:34.065437 1404 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:39:34.065621 kubelet[1404]: I1002 19:39:34.065515 1404 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:39:34.066054 kubelet[1404]: E1002 19:39:34.065957 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94757e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64371687, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64371687, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:34.067097 kubelet[1404]: E1002 19:39:34.067015 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94771e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64378341, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64378341, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:34.067662 kubelet[1404]: E1002 19:39:34.067609 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b9477ea3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64381603, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64381603, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:34.074000 audit[1421]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.074000 audit[1421]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc2b295bf0 a2=0 a3=7ffc2b295bdc items=0 ppid=1404 pid=1421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:39:34.075000 audit[1426]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.075000 audit[1426]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffca3f2bc10 a2=0 a3=7ffca3f2bbfc items=0 ppid=1404 pid=1426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.075000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:39:34.143064 kubelet[1404]: E1002 19:39:34.143001 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:34.143897 kubelet[1404]: I1002 19:39:34.143865 1404 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:39:34.145402 kubelet[1404]: E1002 19:39:34.145319 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94757e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64371687, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 143793525, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b94757e7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:34.145511 kubelet[1404]: E1002 19:39:34.145415 1404 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:39:34.146321 kubelet[1404]: E1002 19:39:34.146222 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94771e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64378341, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 143804815, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b94771e5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:34.147056 kubelet[1404]: E1002 19:39:34.146983 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b9477ea3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64381603, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 143808288, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b9477ea3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:34.244181 kubelet[1404]: E1002 19:39:34.244039 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:34.245115 kubelet[1404]: E1002 19:39:34.245049 1404 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:34.077000 audit[1428]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.077000 audit[1428]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe628f22f0 a2=0 a3=7ffe628f22dc items=0 ppid=1404 pid=1428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:39:34.274000 audit[1433]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.274000 audit[1433]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe5255b840 a2=0 a3=7ffe5255b82c items=0 ppid=1404 pid=1433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.274000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:39:34.308680 kubelet[1404]: I1002 19:39:34.307896 1404 policy_none.go:49] "None policy: Start" Oct 2 19:39:34.309679 kubelet[1404]: I1002 19:39:34.309662 1404 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:39:34.309784 
kubelet[1404]: I1002 19:39:34.309766 1404 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:39:34.344485 kubelet[1404]: E1002 19:39:34.344442 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:34.344000 audit[1438]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.344000 audit[1438]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcf3ffade0 a2=0 a3=7ffcf3ffadcc items=0 ppid=1404 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:39:34.348833 kubelet[1404]: I1002 19:39:34.347363 1404 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:39:34.347000 audit[1439]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.347000 audit[1439]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdad31cc10 a2=0 a3=7ffdad31cbfc items=0 ppid=1404 pid=1439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.347000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:39:34.352208 kubelet[1404]: E1002 19:39:34.352050 1404 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:39:34.356583 kubelet[1404]: E1002 19:39:34.354369 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94757e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64371687, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 347311012, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b94757e7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group 
"" in the namespace "default"' (will not retry!) Oct 2 19:39:34.357278 kubelet[1404]: E1002 19:39:34.356988 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94771e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64378341, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 347320716, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b94771e5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:34.359978 kubelet[1404]: E1002 19:39:34.359877 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b9477ea3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64381603, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 347325320, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b9477ea3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:34.445880 kubelet[1404]: E1002 19:39:34.445824 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:34.446000 audit[1442]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.446000 audit[1442]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe0e013010 a2=0 a3=7ffe0e012ffc items=0 ppid=1404 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.446000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:39:34.461000 audit[1445]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.461000 audit[1445]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffd8d52e030 a2=0 a3=7ffd8d52e01c items=0 ppid=1404 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.461000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:39:34.463000 audit[1446]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.463000 audit[1446]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc19d80aa0 a2=0 a3=7ffc19d80a8c items=0 ppid=1404 pid=1446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.463000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:39:34.466000 audit[1447]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.466000 audit[1447]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd647aabe0 a2=0 a3=7ffd647aabcc items=0 ppid=1404 pid=1447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:39:34.468000 audit[1449]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.468000 audit[1449]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd3a2e2bb0 a2=0 a3=7ffd3a2e2b9c items=0 ppid=1404 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.468000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:39:34.517624 systemd[1]: Created slice kubepods.slice. Oct 2 19:39:34.526879 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:39:34.530311 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:39:34.548431 kubelet[1404]: E1002 19:39:34.548287 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:34.471000 audit[1451]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.471000 audit[1451]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffb98526c0 a2=0 a3=7fffb98526ac items=0 ppid=1404 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.471000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:39:34.553000 audit[1454]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.553000 audit[1454]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7fffb4504a00 a2=0 a3=7fffb45049ec items=0 ppid=1404 pid=1454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.553000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:39:34.558194 kubelet[1404]: I1002 19:39:34.558163 1404 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:39:34.558435 kubelet[1404]: I1002 19:39:34.558413 1404 server.go:86] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:39:34.558985 kubelet[1404]: I1002 19:39:34.558967 1404 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:39:34.557000 audit[1456]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.557000 audit[1456]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffd09b2b6c0 a2=0 a3=7ffd09b2b6ac items=0 ppid=1404 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.557000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:39:34.557000 audit[1404]: AVC avc: denied { mac_admin } for pid=1404 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:34.557000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:39:34.557000 audit[1404]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b31ce0 a1=c000cb0d50 a2=c000b31cb0 a3=25 items=0 ppid=1 pid=1404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.557000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:39:34.561531 kubelet[1404]: E1002 19:39:34.561494 1404 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.12\" not found" Oct 2 19:39:34.569061 kubelet[1404]: E1002 19:39:34.568658 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5d6f22982", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 562105730, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 562105730, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API 
group "" in the namespace "default"' (will not retry!) Oct 2 19:39:34.570000 audit[1459]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.570000 audit[1459]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffeba7e9f90 a2=0 a3=7ffeba7e9f7c items=0 ppid=1404 pid=1459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.570000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:39:34.572732 kubelet[1404]: I1002 19:39:34.572701 1404 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 19:39:34.572000 audit[1460]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.572000 audit[1460]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd7ed7c190 a2=0 a3=7ffd7ed7c17c items=0 ppid=1404 pid=1460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.572000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:39:34.572000 audit[1461]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.572000 audit[1461]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc77976320 a2=0 a3=7ffc7797630c items=0 ppid=1404 pid=1461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:39:34.573000 audit[1462]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_chain pid=1462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.573000 audit[1462]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe077d4ae0 a2=0 a3=7ffe077d4acc items=0 ppid=1404 pid=1462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.573000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:39:34.573000 audit[1463]: NETFILTER_CFG table=nat:20 family=10 entries=2 op=nft_register_chain pid=1463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.573000 audit[1463]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd110a6dd0 a2=0 a3=7ffd110a6dbc items=0 ppid=1404 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 2 19:39:34.573000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:39:34.574000 audit[1464]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:39:34.574000 audit[1464]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd6f36b60 a2=0 a3=7ffcd6f36b4c items=0 ppid=1404 pid=1464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:39:34.577000 audit[1466]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.577000 audit[1466]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc69905800 a2=0 a3=7ffc699057ec items=0 ppid=1404 pid=1466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.577000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:39:34.577000 audit[1467]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.577000 audit[1467]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffc9235bf60 a2=0 a3=7ffc9235bf4c items=0 ppid=1404 pid=1467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.577000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:39:34.580000 audit[1469]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1469 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.580000 audit[1469]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffe8d69f460 a2=0 a3=7ffe8d69f44c items=0 ppid=1404 pid=1469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:39:34.581000 audit[1470]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.581000 audit[1470]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf6b29b70 a2=0 a3=7ffcf6b29b5c items=0 ppid=1404 pid=1470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.581000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:39:34.582000 audit[1471]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.582000 audit[1471]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef8675520 a2=0 a3=7ffef867550c items=0 ppid=1404 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:39:34.585000 audit[1473]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.585000 audit[1473]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff575efe70 a2=0 a3=7fff575efe5c items=0 ppid=1404 pid=1473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:39:34.588000 audit[1475]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.588000 audit[1475]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe6792d760 a2=0 a3=7ffe6792d74c items=0 ppid=1404 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:39:34.591000 audit[1477]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.591000 audit[1477]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7fffcbd50350 a2=0 a3=7fffcbd5033c items=0 ppid=1404 pid=1477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.591000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:39:34.594000 audit[1479]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.594000 audit[1479]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffcb161a380 a2=0 a3=7ffcb161a36c items=0 ppid=1404 pid=1479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.594000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:39:34.596000 audit[1481]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.596000 audit[1481]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7fff4b8c7c00 a2=0 a3=7fff4b8c7bec items=0 ppid=1404 pid=1481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.596000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:39:34.599915 kubelet[1404]: I1002 19:39:34.598828 1404 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:39:34.599915 kubelet[1404]: I1002 19:39:34.598906 1404 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:39:34.599915 kubelet[1404]: I1002 19:39:34.598950 1404 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:39:34.599915 kubelet[1404]: E1002 19:39:34.599047 1404 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:39:34.598000 audit[1482]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.598000 audit[1482]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc501c0b30 a2=0 a3=7ffc501c0b1c items=0 ppid=1404 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.598000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:39:34.599000 audit[1483]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.599000 audit[1483]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce18508f0 a2=0 a3=7ffce18508dc items=0 ppid=1404 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.599000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:39:34.602119 kubelet[1404]: W1002 19:39:34.601833 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:34.602119 kubelet[1404]: E1002 19:39:34.601891 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:34.600000 audit[1484]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:39:34.600000 audit[1484]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc570b6030 a2=0 a3=7ffc570b601c items=0 ppid=1404 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:34.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:39:34.646886 kubelet[1404]: E1002 19:39:34.646824 1404 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:34.648938 kubelet[1404]: E1002 19:39:34.648883 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:34.750219 kubelet[1404]: E1002 19:39:34.750129 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:34.757131 kubelet[1404]: I1002 19:39:34.757077 1404 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:39:34.765770 kubelet[1404]: E1002 19:39:34.765560 1404 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:39:34.765770 kubelet[1404]: E1002 19:39:34.765469 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94757e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64371687, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 757025788, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b94757e7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
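
The audit records interleaved with the kubelet output above (NETFILTER_CFG / SYSCALL / PROCTITLE triples) show the kubelet invoking /usr/sbin/xtables-nft-multi to create its KUBE-MARK-DROP, KUBE-FIREWALL, KUBE-MARK-MASQ, KUBE-POSTROUTING and KUBE-KUBELET-CANARY chains and rules for both IPv4 and IPv6. The proctitle field in those records is the command line hex-encoded with NUL bytes separating the arguments, so it can be turned back into the actual iptables invocation; a minimal decode sketch in Python, applied to the first rule recorded at 19:39:34.446:

    def decode_proctitle(hexstr: str) -> str:
        """Decode an audit PROCTITLE hex dump (NUL-separated argv) into a command line."""
        raw = bytes.fromhex(hexstr)
        return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

    # proctitle value copied verbatim from the 19:39:34.446 audit record above
    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50"
        "002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030"
    ))
    # -> iptables -w 5 -W 100000 -A KUBE-MARK-DROP -t nat -j MARK --or-mark 0x00008000

The same decode applied to the remaining records yields the rest of the chain setup, including the KUBE-MARK-MASQ rule that sets mark 0x00004000 and the KUBE-POSTROUTING MASQUERADE rule installed by the 19:39:34.570 record.
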
Oct 2 19:39:34.834411 kubelet[1404]: E1002 19:39:34.834144 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94771e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64378341, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 757033442, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b94771e5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:34.850740 kubelet[1404]: E1002 19:39:34.850662 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:34.907003 kubelet[1404]: W1002 19:39:34.906928 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:34.907003 kubelet[1404]: E1002 19:39:34.906985 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:34.951694 kubelet[1404]: E1002 19:39:34.951631 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:35.031448 kubelet[1404]: E1002 19:39:35.031393 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:35.034273 kubelet[1404]: E1002 19:39:35.034144 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b9477ea3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64381603, time.Local), LastTimestamp:time.Date(2023, 
time.October, 2, 19, 39, 34, 757036724, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b9477ea3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:35.051841 kubelet[1404]: E1002 19:39:35.051744 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:35.152771 kubelet[1404]: E1002 19:39:35.152617 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:35.250982 kubelet[1404]: W1002 19:39:35.250934 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:35.250982 kubelet[1404]: E1002 19:39:35.250975 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:35.253081 kubelet[1404]: E1002 19:39:35.253046 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:35.353529 kubelet[1404]: E1002 19:39:35.353468 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:35.449045 kubelet[1404]: E1002 19:39:35.448870 1404 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:35.453977 kubelet[1404]: E1002 19:39:35.453936 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:35.493965 kubelet[1404]: W1002 19:39:35.493899 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:35.493965 kubelet[1404]: E1002 19:39:35.493944 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:35.557889 kubelet[1404]: E1002 19:39:35.556909 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:35.568149 kubelet[1404]: I1002 19:39:35.568013 1404 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:39:35.570980 kubelet[1404]: E1002 19:39:35.570901 1404 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:39:35.571188 kubelet[1404]: E1002 19:39:35.570961 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94757e7", GenerateName:"", 
Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64371687, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 35, 567936474, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b94757e7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:35.574340 kubelet[1404]: E1002 19:39:35.573702 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94771e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64378341, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 35, 567954038, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b94771e5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:35.612304 kubelet[1404]: W1002 19:39:35.608496 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:35.612525 kubelet[1404]: E1002 19:39:35.612371 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:35.636832 kubelet[1404]: E1002 19:39:35.636656 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b9477ea3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64381603, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 35, 567971430, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b9477ea3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
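
From here to the end of the section the dominant line is the sync-loop complaint "Error getting node" err="node \"10.0.0.12\" not found", re-emitted roughly every 100 ms for as long as the Node object does not exist. When reading an exported capture offline it helps to collapse such consecutive repeats so that the rarer entries (registration attempts, lease retries, reflector failures) stand out. A small sketch that does this; the prefix pattern is an assumption about how this journal was exported and may need adjusting:

    import re
    import sys
    from itertools import groupby

    # Strip the varying prefix (syslog date, unit[pid], klog level/date/pid) so that
    # repeated messages compare equal; lines that do not match are left untouched.
    PREFIX = re.compile(r"^\w{3} +\d+ [\d:.]+ \S+\[\d+\]: [EWI]\d{4} [\d:.]+ +\d+ ")

    def collapse(lines):
        """Yield (count, message) for runs of consecutive identical messages."""
        keyed = (PREFIX.sub("", line.rstrip("\n")) for line in lines)
        for msg, run in groupby(keyed):
            yield sum(1 for _ in run), msg

    if __name__ == "__main__":
        for count, msg in collapse(sys.stdin):
            prefix = f"{count:>4}x " if count > 1 else "      "
            print(prefix + msg)
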
Oct 2 19:39:35.657851 kubelet[1404]: E1002 19:39:35.657711 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:35.758958 kubelet[1404]: E1002 19:39:35.758792 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:35.859406 kubelet[1404]: E1002 19:39:35.859323 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:35.960250 kubelet[1404]: E1002 19:39:35.960185 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:36.032361 kubelet[1404]: E1002 19:39:36.032210 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:36.061159 kubelet[1404]: E1002 19:39:36.061102 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:36.162101 kubelet[1404]: E1002 19:39:36.162020 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:36.262992 kubelet[1404]: E1002 19:39:36.262906 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:36.363194 kubelet[1404]: E1002 19:39:36.363016 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:36.464645 kubelet[1404]: E1002 19:39:36.464350 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:36.565562 kubelet[1404]: E1002 19:39:36.565388 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:36.665921 kubelet[1404]: E1002 19:39:36.665757 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:36.766968 kubelet[1404]: E1002 19:39:36.766146 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:36.867034 kubelet[1404]: E1002 19:39:36.866804 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:36.968044 kubelet[1404]: E1002 19:39:36.967909 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:37.033205 kubelet[1404]: E1002 19:39:37.032366 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:37.051251 kubelet[1404]: E1002 19:39:37.051139 1404 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:37.068712 kubelet[1404]: E1002 19:39:37.068615 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:37.169865 kubelet[1404]: E1002 19:39:37.169783 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:37.172792 kubelet[1404]: I1002 19:39:37.172764 1404 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:39:37.174237 kubelet[1404]: E1002 19:39:37.174209 1404 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:39:37.174303 kubelet[1404]: E1002 19:39:37.174208 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94757e7", GenerateName:"", Namespace:"default", SelfLink:"", 
UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64371687, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 37, 172677976, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b94757e7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:37.175070 kubelet[1404]: E1002 19:39:37.175001 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94771e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64378341, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 37, 172698262, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b94771e5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:39:37.175759 kubelet[1404]: E1002 19:39:37.175704 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b9477ea3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64381603, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 37, 172730856, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b9477ea3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:37.181964 kubelet[1404]: W1002 19:39:37.181929 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:37.182357 kubelet[1404]: E1002 19:39:37.182320 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:37.271392 kubelet[1404]: E1002 19:39:37.270981 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:37.372884 kubelet[1404]: E1002 19:39:37.372049 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:37.472865 kubelet[1404]: E1002 19:39:37.472460 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:37.530693 kubelet[1404]: W1002 19:39:37.530363 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:37.530693 kubelet[1404]: E1002 19:39:37.530415 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:37.573023 kubelet[1404]: E1002 19:39:37.572806 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:37.673789 kubelet[1404]: E1002 19:39:37.673637 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:37.777834 kubelet[1404]: E1002 19:39:37.776816 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 
19:39:37.877459 kubelet[1404]: E1002 19:39:37.877249 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:37.978465 kubelet[1404]: E1002 19:39:37.978375 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:38.003145 kubelet[1404]: W1002 19:39:38.003083 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:38.003145 kubelet[1404]: E1002 19:39:38.003121 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:39:38.005984 kubelet[1404]: W1002 19:39:38.005944 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:38.005984 kubelet[1404]: E1002 19:39:38.005965 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:38.032641 kubelet[1404]: E1002 19:39:38.032574 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:38.079606 kubelet[1404]: E1002 19:39:38.079521 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:38.180631 kubelet[1404]: E1002 19:39:38.180452 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:38.281565 kubelet[1404]: E1002 19:39:38.281479 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:38.382680 kubelet[1404]: E1002 19:39:38.382598 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:38.483120 kubelet[1404]: E1002 19:39:38.482909 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:38.583522 kubelet[1404]: E1002 19:39:38.583470 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:38.684084 kubelet[1404]: E1002 19:39:38.683964 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:38.785228 kubelet[1404]: E1002 19:39:38.785093 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:38.886194 kubelet[1404]: E1002 19:39:38.886114 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:38.987129 kubelet[1404]: E1002 19:39:38.987071 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:39.032785 kubelet[1404]: E1002 19:39:39.032728 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:39.088055 kubelet[1404]: E1002 19:39:39.087910 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:39.188170 kubelet[1404]: E1002 19:39:39.188022 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:39.289099 kubelet[1404]: E1002 
19:39:39.289030 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:39.390243 kubelet[1404]: E1002 19:39:39.390069 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:39.491188 kubelet[1404]: E1002 19:39:39.491107 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:39.559551 kubelet[1404]: E1002 19:39:39.559513 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:39.592255 kubelet[1404]: E1002 19:39:39.592171 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:39.692924 kubelet[1404]: E1002 19:39:39.692699 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:39.793777 kubelet[1404]: E1002 19:39:39.793694 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:39.894710 kubelet[1404]: E1002 19:39:39.894615 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:39.995864 kubelet[1404]: E1002 19:39:39.995704 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:40.033349 kubelet[1404]: E1002 19:39:40.033298 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:40.096557 kubelet[1404]: E1002 19:39:40.096396 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:40.197492 kubelet[1404]: E1002 19:39:40.197421 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:40.253420 kubelet[1404]: E1002 19:39:40.253270 1404 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:39:40.297823 kubelet[1404]: E1002 19:39:40.297734 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:40.375150 kubelet[1404]: I1002 19:39:40.375117 1404 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:39:40.376322 kubelet[1404]: E1002 19:39:40.376288 1404 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:39:40.376411 kubelet[1404]: E1002 19:39:40.376291 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94757e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, 
FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64371687, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 40, 375057298, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b94757e7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:40.377082 kubelet[1404]: E1002 19:39:40.377026 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b94771e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64378341, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 40, 375069394, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b94771e5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:39:40.377937 kubelet[1404]: E1002 19:39:40.377877 1404 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a61a5b9477ea3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 39, 34, 64381603, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 39, 40, 375086437, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a61a5b9477ea3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
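
The lease controller's "failed to ensure lease exists, will retry in ..." interval doubles across this section: 800ms at 19:39:34.646, 1.6s at 19:39:35.448, 3.2s at 19:39:37.051 and 6.4s at 19:39:40.253, and the "Attempting to register node" attempts at 19:39:34.757, 19:39:35.568, 19:39:37.172 and 19:39:40.375 are spaced the same way. That is the shape of a capped exponential backoff; the schedule can be illustrated with the minimal sketch below, which is not the kubelet's actual implementation (its cap and any jitter are not visible in this log, so the cap value here is made up):

    def backoff_schedule(initial: float, factor: float, cap: float, attempts: int) -> list:
        """Successive retry delays of a capped exponential backoff, in seconds."""
        delays, delay = [], initial
        for _ in range(attempts):
            delays.append(min(delay, cap))
            delay *= factor
        return delays

    # Reproduces the intervals visible above: 0.8s, 1.6s, 3.2s, 6.4s, then an (assumed) cap.
    print(backoff_schedule(initial=0.8, factor=2.0, cap=7.0, attempts=6))
    # [0.8, 1.6, 3.2, 6.4, 7.0, 7.0]
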
Oct 2 19:39:40.398475 kubelet[1404]: E1002 19:39:40.398366 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:40.499514 kubelet[1404]: E1002 19:39:40.499433 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:40.599937 kubelet[1404]: E1002 19:39:40.599784 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:40.700233 kubelet[1404]: E1002 19:39:40.700154 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:40.800942 kubelet[1404]: E1002 19:39:40.800695 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:40.902686 kubelet[1404]: E1002 19:39:40.901831 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:41.002575 kubelet[1404]: E1002 19:39:41.002394 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:41.034317 kubelet[1404]: E1002 19:39:41.033849 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:41.103715 kubelet[1404]: E1002 19:39:41.103569 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:41.204450 kubelet[1404]: E1002 19:39:41.204266 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:41.303160 kubelet[1404]: W1002 19:39:41.303057 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:41.303160 kubelet[1404]: E1002 19:39:41.303099 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:39:41.305203 kubelet[1404]: E1002 19:39:41.305156 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:41.406219 kubelet[1404]: E1002 19:39:41.406152 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:41.507449 kubelet[1404]: E1002 19:39:41.507293 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:41.608126 kubelet[1404]: E1002 19:39:41.608057 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:41.709378 kubelet[1404]: E1002 19:39:41.709280 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:41.812035 kubelet[1404]: E1002 19:39:41.811793 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:41.913434 kubelet[1404]: E1002 19:39:41.913300 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:42.013745 kubelet[1404]: E1002 19:39:42.013503 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:42.034564 kubelet[1404]: E1002 19:39:42.034360 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:42.064096 kubelet[1404]: W1002 19:39:42.063877 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group 
"node.k8s.io" at the cluster scope Oct 2 19:39:42.064096 kubelet[1404]: E1002 19:39:42.063932 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:39:42.089763 kubelet[1404]: W1002 19:39:42.089691 1404 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:42.089763 kubelet[1404]: E1002 19:39:42.089742 1404 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:39:42.113903 kubelet[1404]: E1002 19:39:42.113664 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:42.214504 kubelet[1404]: E1002 19:39:42.214375 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:42.317444 kubelet[1404]: E1002 19:39:42.317237 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:42.417677 kubelet[1404]: E1002 19:39:42.417479 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:42.517935 kubelet[1404]: E1002 19:39:42.517758 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:42.619221 kubelet[1404]: E1002 19:39:42.618051 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:42.718669 kubelet[1404]: E1002 19:39:42.718595 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:42.819529 kubelet[1404]: E1002 19:39:42.819462 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:42.920656 kubelet[1404]: E1002 19:39:42.920474 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:43.020911 kubelet[1404]: E1002 19:39:43.020820 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:43.036017 kubelet[1404]: E1002 19:39:43.034957 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:43.121701 kubelet[1404]: E1002 19:39:43.121633 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:43.222818 kubelet[1404]: E1002 19:39:43.222663 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:43.323813 kubelet[1404]: E1002 19:39:43.323726 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:43.424853 kubelet[1404]: E1002 19:39:43.424791 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:43.525169 kubelet[1404]: E1002 19:39:43.524993 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:43.625859 kubelet[1404]: E1002 19:39:43.625803 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:43.726665 kubelet[1404]: E1002 19:39:43.726549 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 
19:39:43.827771 kubelet[1404]: E1002 19:39:43.827587 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:43.928416 kubelet[1404]: E1002 19:39:43.928338 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:43.982057 kubelet[1404]: I1002 19:39:43.981971 1404 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:39:44.028658 kubelet[1404]: E1002 19:39:44.028602 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:44.036015 kubelet[1404]: E1002 19:39:44.035950 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:44.129332 kubelet[1404]: E1002 19:39:44.129204 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:44.230071 kubelet[1404]: E1002 19:39:44.229968 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:44.331005 kubelet[1404]: E1002 19:39:44.330920 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:44.431504 kubelet[1404]: E1002 19:39:44.431355 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:44.455740 kubelet[1404]: E1002 19:39:44.455647 1404 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.12" not found Oct 2 19:39:44.532414 kubelet[1404]: E1002 19:39:44.532321 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:44.560677 kubelet[1404]: E1002 19:39:44.560641 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:44.561679 kubelet[1404]: E1002 19:39:44.561650 1404 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.12\" not found" Oct 2 19:39:44.632993 kubelet[1404]: E1002 19:39:44.632924 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:44.734637 kubelet[1404]: E1002 19:39:44.733813 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:44.834768 kubelet[1404]: E1002 19:39:44.834717 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:44.935177 kubelet[1404]: E1002 19:39:44.935118 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:45.035405 kubelet[1404]: E1002 19:39:45.035254 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:45.035405 kubelet[1404]: I1002 19:39:45.035308 1404 apiserver.go:52] "Watching apiserver" Oct 2 19:39:45.036429 kubelet[1404]: E1002 19:39:45.036400 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:45.136068 kubelet[1404]: E1002 19:39:45.136008 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:45.236982 kubelet[1404]: E1002 19:39:45.236894 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:45.337584 kubelet[1404]: E1002 19:39:45.337404 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:45.421119 kubelet[1404]: I1002 19:39:45.421033 
1404 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:39:45.438395 kubelet[1404]: E1002 19:39:45.438322 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:45.538559 kubelet[1404]: E1002 19:39:45.538483 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:45.640096 kubelet[1404]: E1002 19:39:45.639885 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:45.740476 kubelet[1404]: E1002 19:39:45.740370 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:45.840948 kubelet[1404]: E1002 19:39:45.840553 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:45.859122 kubelet[1404]: E1002 19:39:45.857202 1404 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.12" not found Oct 2 19:39:45.941466 kubelet[1404]: E1002 19:39:45.941168 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:46.037087 kubelet[1404]: E1002 19:39:46.037032 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:46.042246 kubelet[1404]: E1002 19:39:46.042208 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:46.143045 kubelet[1404]: E1002 19:39:46.142964 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:46.246711 kubelet[1404]: E1002 19:39:46.245861 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:46.347382 kubelet[1404]: E1002 19:39:46.347243 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:46.449943 kubelet[1404]: E1002 19:39:46.449554 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:46.550240 kubelet[1404]: E1002 19:39:46.549963 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:46.651260 kubelet[1404]: E1002 19:39:46.651141 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:46.660625 kubelet[1404]: E1002 19:39:46.660523 1404 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.12\" not found" node="10.0.0.12" Oct 2 19:39:46.752439 kubelet[1404]: E1002 19:39:46.752284 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:46.778012 kubelet[1404]: I1002 19:39:46.777948 1404 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:39:46.852563 kubelet[1404]: E1002 19:39:46.852394 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:46.953393 kubelet[1404]: E1002 19:39:46.953337 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:47.037310 kubelet[1404]: E1002 19:39:47.037236 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:47.053933 kubelet[1404]: E1002 19:39:47.053835 1404 kubelet.go:2448] "Error getting node" err="node \"10.0.0.12\" not found" Oct 2 19:39:47.076822 kubelet[1404]: I1002 19:39:47.076698 1404 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.12" Oct 2 19:39:47.155807 kubelet[1404]: I1002 19:39:47.154625 1404 kuberuntime_manager.go:1050] "Updating 
runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:39:47.155993 env[1096]: time="2023-10-02T19:39:47.155411031Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:39:47.157182 kubelet[1404]: I1002 19:39:47.157099 1404 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:39:47.157863 kubelet[1404]: E1002 19:39:47.157759 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:47.304920 kubelet[1404]: I1002 19:39:47.304726 1404 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:39:47.307105 kubelet[1404]: I1002 19:39:47.307007 1404 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:39:47.309770 sudo[1223]: pam_unix(sudo:session): session closed for user root Oct 2 19:39:47.308000 audit[1223]: USER_END pid=1223 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:47.312860 systemd[1]: Created slice kubepods-besteffort-pod567853f0_51fb_4f77_b562_177e029f117c.slice. Oct 2 19:39:47.344367 kernel: kauditd_printk_skb: 474 callbacks suppressed Oct 2 19:39:47.344599 kernel: audit: type=1106 audit(1696275587.308:561): pid=1223 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:47.308000 audit[1223]: CRED_DISP pid=1223 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:39:47.349000 audit[1219]: USER_END pid=1219 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:39:47.349613 sshd[1219]: pam_unix(sshd:session): session closed for user core Oct 2 19:39:47.355303 kernel: audit: type=1104 audit(1696275587.308:562): pid=1223 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:39:47.355438 kernel: audit: type=1106 audit(1696275587.349:563): pid=1219 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:39:47.355468 kernel: audit: type=1104 audit(1696275587.349:564): pid=1219 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:39:47.349000 audit[1219]: CRED_DISP pid=1219 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:39:47.365066 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:51716.service: Deactivated successfully. Oct 2 19:39:47.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:51716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:47.369463 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:39:47.371596 kernel: audit: type=1131 audit(1696275587.365:565): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:51716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:39:47.373316 systemd-logind[1082]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:39:47.375881 systemd[1]: Created slice kubepods-burstable-pod0c5b95ac_1a94_42c5_81dc_b0098e5e789c.slice. Oct 2 19:39:47.377202 systemd-logind[1082]: Removed session 7. 
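
Above, the long run of kubelet "Error getting node" entries ends once the node registers itself ("Attempting to register node" followed by "Successfully registered node"). A rough way to observe that same transition from outside the kubelet is to poll the API for the Node object until it exists; the sketch below does this with client-go, assuming a default kubeconfig location rather than anything read from this machine.

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodeName := "10.0.0.12" // the node name used throughout this log
	// Poll until the Node object exists, mirroring the retry loop behind the
	// repeated "node not found" entries before registration succeeds.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, getErr := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if apierrors.IsNotFound(getErr) {
			fmt.Println("node not registered yet, retrying")
			return false, nil
		}
		if getErr != nil {
			return false, getErr
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("node registered:", nodeName)
}
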
Oct 2 19:39:47.440474 kubelet[1404]: I1002 19:39:47.439586 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-hubble-tls\") pod \"cilium-5zv6z\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:47.440474 kubelet[1404]: I1002 19:39:47.439661 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-run\") pod \"cilium-5zv6z\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:47.440474 kubelet[1404]: I1002 19:39:47.439696 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-bpf-maps\") pod \"cilium-5zv6z\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:47.440474 kubelet[1404]: I1002 19:39:47.439728 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cni-path\") pod \"cilium-5zv6z\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:47.440474 kubelet[1404]: I1002 19:39:47.439783 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/567853f0-51fb-4f77-b562-177e029f117c-kube-proxy\") pod \"kube-proxy-g44z6\" (UID: \"567853f0-51fb-4f77-b562-177e029f117c\") " pod="kube-system/kube-proxy-g44z6" Oct 2 19:39:47.440474 kubelet[1404]: I1002 19:39:47.439820 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/567853f0-51fb-4f77-b562-177e029f117c-xtables-lock\") pod \"kube-proxy-g44z6\" (UID: \"567853f0-51fb-4f77-b562-177e029f117c\") " pod="kube-system/kube-proxy-g44z6" Oct 2 19:39:47.440821 kubelet[1404]: I1002 19:39:47.439919 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-xtables-lock\") pod \"cilium-5zv6z\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:47.440821 kubelet[1404]: I1002 19:39:47.439985 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-host-proc-sys-kernel\") pod \"cilium-5zv6z\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:47.440821 kubelet[1404]: I1002 19:39:47.440013 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/567853f0-51fb-4f77-b562-177e029f117c-lib-modules\") pod \"kube-proxy-g44z6\" (UID: \"567853f0-51fb-4f77-b562-177e029f117c\") " pod="kube-system/kube-proxy-g44z6" Oct 2 19:39:47.440821 kubelet[1404]: I1002 19:39:47.440088 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-hostproc\") pod \"cilium-5zv6z\" 
(UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:47.440821 kubelet[1404]: I1002 19:39:47.440119 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-etc-cni-netd\") pod \"cilium-5zv6z\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:47.440821 kubelet[1404]: I1002 19:39:47.440146 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-clustermesh-secrets\") pod \"cilium-5zv6z\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:47.441018 kubelet[1404]: I1002 19:39:47.440171 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-config-path\") pod \"cilium-5zv6z\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:47.441018 kubelet[1404]: I1002 19:39:47.440198 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-host-proc-sys-net\") pod \"cilium-5zv6z\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:47.441018 kubelet[1404]: I1002 19:39:47.440227 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljxbj\" (UniqueName: \"kubernetes.io/projected/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-kube-api-access-ljxbj\") pod \"cilium-5zv6z\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:47.441018 kubelet[1404]: I1002 19:39:47.440288 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sjv6\" (UniqueName: \"kubernetes.io/projected/567853f0-51fb-4f77-b562-177e029f117c-kube-api-access-6sjv6\") pod \"kube-proxy-g44z6\" (UID: \"567853f0-51fb-4f77-b562-177e029f117c\") " pod="kube-system/kube-proxy-g44z6" Oct 2 19:39:47.441018 kubelet[1404]: I1002 19:39:47.440405 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-cgroup\") pod \"cilium-5zv6z\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:47.441018 kubelet[1404]: I1002 19:39:47.440497 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-lib-modules\") pod \"cilium-5zv6z\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " pod="kube-system/cilium-5zv6z" Oct 2 19:39:48.038430 kubelet[1404]: E1002 19:39:48.038271 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:48.444243 kubelet[1404]: I1002 19:39:48.443886 1404 request.go:690] Waited for 1.136310855s due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.11:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dcilium-clustermesh&limit=500&resourceVersion=0 
Oct 2 19:39:48.543769 kubelet[1404]: E1002 19:39:48.542942 1404 configmap.go:197] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Oct 2 19:39:48.543769 kubelet[1404]: E1002 19:39:48.543117 1404 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-config-path podName:0c5b95ac-1a94-42c5-81dc-b0098e5e789c nodeName:}" failed. No retries permitted until 2023-10-02 19:39:49.043084344 +0000 UTC m=+15.467585020 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-config-path") pod "cilium-5zv6z" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c") : failed to sync configmap cache: timed out waiting for the condition Oct 2 19:39:49.039338 kubelet[1404]: E1002 19:39:49.039258 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:49.165613 kubelet[1404]: E1002 19:39:49.165561 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:39:49.166750 env[1096]: time="2023-10-02T19:39:49.166697708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g44z6,Uid:567853f0-51fb-4f77-b562-177e029f117c,Namespace:kube-system,Attempt:0,}" Oct 2 19:39:49.487712 kubelet[1404]: E1002 19:39:49.487575 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:39:49.488319 env[1096]: time="2023-10-02T19:39:49.488283572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5zv6z,Uid:0c5b95ac-1a94-42c5-81dc-b0098e5e789c,Namespace:kube-system,Attempt:0,}" Oct 2 19:39:49.561629 kubelet[1404]: E1002 19:39:49.561600 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:49.926902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1761353976.mount: Deactivated successfully. 
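
The nestedpendingoperations entry above defers the next MountVolume.SetUp attempt for cilium-config-path by 500ms after the configmap cache sync times out. The kubelet manages that schedule internally; purely as a standalone illustration of the same retry shape, the sketch below uses the apimachinery backoff helper, with every parameter except the 500ms initial delay chosen arbitrarily.

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // initial delay, as in "durationBeforeRetry 500ms"
		Factor:   2.0,                    // double the delay after each failed attempt (assumed)
		Steps:    5,                      // give up after five attempts (assumed)
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Println("mount attempt", attempt)
		if attempt < 3 {
			return false, nil // condition not met yet; wait and retry
		}
		return true, nil // succeeded
	})
	if errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println("gave up: timed out waiting for the condition")
	}
}
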
Oct 2 19:39:49.936678 env[1096]: time="2023-10-02T19:39:49.936583699Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:49.937729 env[1096]: time="2023-10-02T19:39:49.937674743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:49.939245 env[1096]: time="2023-10-02T19:39:49.939203105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:49.940600 env[1096]: time="2023-10-02T19:39:49.940557585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:49.941768 env[1096]: time="2023-10-02T19:39:49.941712577Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:49.943320 env[1096]: time="2023-10-02T19:39:49.943261430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:49.969807 env[1096]: time="2023-10-02T19:39:49.969738787Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:49.972218 env[1096]: time="2023-10-02T19:39:49.972179056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:39:50.002765 env[1096]: time="2023-10-02T19:39:50.002689746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:39:50.002765 env[1096]: time="2023-10-02T19:39:50.002736418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:39:50.002931 env[1096]: time="2023-10-02T19:39:50.002808841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:39:50.002989 env[1096]: time="2023-10-02T19:39:50.002847500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:39:50.002989 env[1096]: time="2023-10-02T19:39:50.002864327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:39:50.003070 env[1096]: time="2023-10-02T19:39:50.003027316Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e4cd68b70eca72368ffebd993d5c399a04de406702a26a49f7852c973bcf485 pid=1509 runtime=io.containerd.runc.v2 Oct 2 19:39:50.003180 env[1096]: time="2023-10-02T19:39:50.003058995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:39:50.003421 env[1096]: time="2023-10-02T19:39:50.003384229Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4 pid=1510 runtime=io.containerd.runc.v2 Oct 2 19:39:50.014271 systemd[1]: Started cri-containerd-1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4.scope. Oct 2 19:39:50.034671 systemd[1]: Started cri-containerd-9e4cd68b70eca72368ffebd993d5c399a04de406702a26a49f7852c973bcf485.scope. Oct 2 19:39:50.039734 kubelet[1404]: E1002 19:39:50.039680 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:50.047000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.047000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.065035 kernel: audit: type=1400 audit(1696275590.047:566): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.065090 kernel: audit: type=1400 audit(1696275590.047:567): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.065141 kernel: audit: type=1400 audit(1696275590.047:568): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.047000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066850 kernel: audit: type=1400 audit(1696275590.047:569): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.047000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.070564 kernel: audit: type=1400 audit(1696275590.047:570): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.047000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.047000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.047000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.047000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.047000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.062000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.062000 audit: BPF prog-id=61 op=LOAD Oct 2 19:39:50.062000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.062000 audit[1527]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1510 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:50.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161623634373162373761326564633739313537306161396634383161 Oct 2 19:39:50.062000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.062000 audit[1527]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1510 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:50.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161623634373162373761326564633739313537306161396634383161 Oct 2 19:39:50.062000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.062000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.062000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.062000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.062000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.062000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:39:50.062000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.062000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.062000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.063000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.063000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.063000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.063000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.063000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.063000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.063000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.063000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.063000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.062000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.062000 audit: BPF prog-id=62 op=LOAD Oct 2 19:39:50.062000 audit[1527]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000307a60 items=0 ppid=1510 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:50.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161623634373162373761326564633739313537306161396634383161 Oct 2 19:39:50.064000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.064000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.064000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.064000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.064000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.064000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.064000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.064000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066000 audit: BPF prog-id=63 op=LOAD Oct 2 19:39:50.066000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066000 audit[1531]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000149c48 a2=10 a3=1c items=0 ppid=1509 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:50.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965346364363862373065636137323336386666656264393933643563 Oct 2 19:39:50.066000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066000 audit[1531]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001496b0 a2=3c a3=c items=0 ppid=1509 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:50.066000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965346364363862373065636137323336386666656264393933643563 Oct 2 19:39:50.066000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.064000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.064000 audit[1527]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000307aa8 items=0 ppid=1510 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:50.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161623634373162373761326564633739313537306161396634383161 Oct 2 19:39:50.070000 audit: BPF prog-id=64 op=UNLOAD Oct 2 19:39:50.070000 audit: BPF prog-id=62 op=UNLOAD Oct 2 19:39:50.070000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.070000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:39:50.070000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.070000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.070000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.070000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.070000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.070000 audit[1527]: AVC avc: denied { perfmon } for pid=1527 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.070000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.070000 audit[1527]: AVC avc: denied { bpf } for pid=1527 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.070000 audit: BPF prog-id=65 op=LOAD Oct 2 19:39:50.070000 audit[1527]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000307eb8 items=0 ppid=1510 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:50.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161623634373162373761326564633739313537306161396634383161 Oct 2 19:39:50.066000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.066000 audit: BPF prog-id=66 op=LOAD Oct 2 19:39:50.066000 audit[1531]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001499d8 a2=78 a3=c000218e00 items=0 ppid=1509 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:50.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965346364363862373065636137323336386666656264393933643563 Oct 2 19:39:50.072000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.072000 audit[1531]: AVC 
avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.072000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.072000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.072000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.072000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.072000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.072000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.072000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.072000 audit: BPF prog-id=67 op=LOAD Oct 2 19:39:50.072000 audit[1531]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000149770 a2=78 a3=c000218e48 items=0 ppid=1509 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:50.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965346364363862373065636137323336386666656264393933643563 Oct 2 19:39:50.073000 audit: BPF prog-id=67 op=UNLOAD Oct 2 19:39:50.073000 audit: BPF prog-id=66 op=UNLOAD Oct 2 19:39:50.073000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.073000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.073000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.073000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.073000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.073000 audit[1531]: AVC avc: 
denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.073000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.073000 audit[1531]: AVC avc: denied { perfmon } for pid=1531 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.073000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.073000 audit[1531]: AVC avc: denied { bpf } for pid=1531 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:39:50.073000 audit: BPF prog-id=68 op=LOAD Oct 2 19:39:50.073000 audit[1531]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000149c30 a2=78 a3=c000219258 items=0 ppid=1509 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:39:50.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965346364363862373065636137323336386666656264393933643563 Oct 2 19:39:50.085959 env[1096]: time="2023-10-02T19:39:50.085906782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5zv6z,Uid:0c5b95ac-1a94-42c5-81dc-b0098e5e789c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\"" Oct 2 19:39:50.086818 kubelet[1404]: E1002 19:39:50.086800 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:39:50.088237 env[1096]: time="2023-10-02T19:39:50.088209518Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:39:50.089920 env[1096]: time="2023-10-02T19:39:50.089888108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g44z6,Uid:567853f0-51fb-4f77-b562-177e029f117c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e4cd68b70eca72368ffebd993d5c399a04de406702a26a49f7852c973bcf485\"" Oct 2 19:39:50.090688 kubelet[1404]: E1002 19:39:50.090545 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:39:51.040325 kubelet[1404]: E1002 19:39:51.040237 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:52.040445 kubelet[1404]: E1002 19:39:52.040379 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:53.041078 kubelet[1404]: E1002 19:39:53.040941 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 
2 19:39:54.032373 kubelet[1404]: E1002 19:39:54.032292 1404 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:54.041582 kubelet[1404]: E1002 19:39:54.041506 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:54.563568 kubelet[1404]: E1002 19:39:54.563486 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:55.041826 kubelet[1404]: E1002 19:39:55.041683 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:56.042677 kubelet[1404]: E1002 19:39:56.042611 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:57.043477 kubelet[1404]: E1002 19:39:57.043394 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:57.411398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount139255655.mount: Deactivated successfully. Oct 2 19:39:58.044164 kubelet[1404]: E1002 19:39:58.044085 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:59.044555 kubelet[1404]: E1002 19:39:59.044491 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:59.564574 kubelet[1404]: E1002 19:39:59.564520 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:00.047178 kubelet[1404]: E1002 19:40:00.046871 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:01.048073 kubelet[1404]: E1002 19:40:01.047929 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:02.050922 kubelet[1404]: E1002 19:40:02.050804 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:03.051981 kubelet[1404]: E1002 19:40:03.051906 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:03.495391 env[1096]: time="2023-10-02T19:40:03.495310621Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:40:03.510810 env[1096]: time="2023-10-02T19:40:03.510728876Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:40:03.526093 env[1096]: time="2023-10-02T19:40:03.526012403Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:40:03.526729 env[1096]: time="2023-10-02T19:40:03.526685301Z" level=info 
msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88\"" Oct 2 19:40:03.527842 env[1096]: time="2023-10-02T19:40:03.527797659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:40:03.529106 env[1096]: time="2023-10-02T19:40:03.529066490Z" level=info msg="CreateContainer within sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:40:03.555253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011269454.mount: Deactivated successfully. Oct 2 19:40:03.576855 env[1096]: time="2023-10-02T19:40:03.576775688Z" level=info msg="CreateContainer within sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024\"" Oct 2 19:40:03.577582 env[1096]: time="2023-10-02T19:40:03.577526012Z" level=info msg="StartContainer for \"7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024\"" Oct 2 19:40:03.637677 systemd[1]: Started cri-containerd-7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024.scope. Oct 2 19:40:03.666201 systemd[1]: cri-containerd-7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024.scope: Deactivated successfully. Oct 2 19:40:03.672023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024-rootfs.mount: Deactivated successfully. Oct 2 19:40:04.052643 kubelet[1404]: E1002 19:40:04.052558 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:04.565844 env[1096]: time="2023-10-02T19:40:04.565788536Z" level=info msg="shim disconnected" id=7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024 Oct 2 19:40:04.565844 env[1096]: time="2023-10-02T19:40:04.565841289Z" level=warning msg="cleaning up after shim disconnected" id=7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024 namespace=k8s.io Oct 2 19:40:04.565844 env[1096]: time="2023-10-02T19:40:04.565852548Z" level=info msg="cleaning up dead shim" Oct 2 19:40:04.566221 kubelet[1404]: E1002 19:40:04.566114 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:04.574341 env[1096]: time="2023-10-02T19:40:04.574295767Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:40:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1606 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:40:04Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:40:04.574734 env[1096]: time="2023-10-02T19:40:04.574617963Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed" Oct 2 19:40:04.576667 env[1096]: time="2023-10-02T19:40:04.576609112Z" level=error msg="Failed to pipe stdout of container \"7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024\"" error="reading from a closed fifo" Oct 2 19:40:04.583655 
env[1096]: time="2023-10-02T19:40:04.583615515Z" level=error msg="Failed to pipe stderr of container \"7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024\"" error="reading from a closed fifo" Oct 2 19:40:04.671189 env[1096]: time="2023-10-02T19:40:04.671119611Z" level=error msg="StartContainer for \"7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:40:04.671378 kubelet[1404]: E1002 19:40:04.671348 1404 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024" Oct 2 19:40:04.671584 kubelet[1404]: E1002 19:40:04.671560 1404 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:40:04.671584 kubelet[1404]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:40:04.671584 kubelet[1404]: rm /hostbin/cilium-mount Oct 2 19:40:04.671584 kubelet[1404]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ljxbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:40:04.671888 kubelet[1404]: E1002 19:40:04.671636 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: 
runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:40:04.720436 kubelet[1404]: E1002 19:40:04.720399 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:04.722481 env[1096]: time="2023-10-02T19:40:04.722424920Z" level=info msg="CreateContainer within sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:40:05.053033 kubelet[1404]: E1002 19:40:05.052986 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:05.756963 env[1096]: time="2023-10-02T19:40:05.756747966Z" level=info msg="CreateContainer within sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2\"" Oct 2 19:40:05.759566 env[1096]: time="2023-10-02T19:40:05.759031785Z" level=info msg="StartContainer for \"af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2\"" Oct 2 19:40:05.806193 systemd[1]: Started cri-containerd-af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2.scope. Oct 2 19:40:05.830102 systemd[1]: cri-containerd-af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2.scope: Deactivated successfully. Oct 2 19:40:05.835940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2-rootfs.mount: Deactivated successfully. 
Oct 2 19:40:05.864076 env[1096]: time="2023-10-02T19:40:05.863989475Z" level=info msg="shim disconnected" id=af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2 Oct 2 19:40:05.864076 env[1096]: time="2023-10-02T19:40:05.864070768Z" level=warning msg="cleaning up after shim disconnected" id=af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2 namespace=k8s.io Oct 2 19:40:05.864076 env[1096]: time="2023-10-02T19:40:05.864084572Z" level=info msg="cleaning up dead shim" Oct 2 19:40:05.879738 env[1096]: time="2023-10-02T19:40:05.879496044Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:40:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1641 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:40:05Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:40:05.882106 env[1096]: time="2023-10-02T19:40:05.882051541Z" level=error msg="Failed to pipe stderr of container \"af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2\"" error="reading from a closed fifo" Oct 2 19:40:05.884355 env[1096]: time="2023-10-02T19:40:05.880710326Z" level=error msg="copy shim log" error="read /proc/self/fd/47: file already closed" Oct 2 19:40:05.893141 env[1096]: time="2023-10-02T19:40:05.892055326Z" level=error msg="Failed to pipe stdout of container \"af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2\"" error="reading from a closed fifo" Oct 2 19:40:05.908573 env[1096]: time="2023-10-02T19:40:05.908456453Z" level=error msg="StartContainer for \"af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:40:05.913190 kubelet[1404]: E1002 19:40:05.912454 1404 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2" Oct 2 19:40:05.913190 kubelet[1404]: E1002 19:40:05.912625 1404 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:40:05.913190 kubelet[1404]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:40:05.913190 kubelet[1404]: rm /hostbin/cilium-mount Oct 2 19:40:05.913542 kubelet[1404]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ljxbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:40:05.913685 kubelet[1404]: E1002 19:40:05.912679 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:40:06.053876 kubelet[1404]: E1002 19:40:06.053670 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:06.736264 kubelet[1404]: I1002 19:40:06.735603 1404 scope.go:115] "RemoveContainer" containerID="7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024" Oct 2 19:40:06.736264 kubelet[1404]: I1002 19:40:06.736028 1404 scope.go:115] "RemoveContainer" containerID="7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024" Oct 2 19:40:06.738608 env[1096]: time="2023-10-02T19:40:06.738102003Z" level=info msg="RemoveContainer for \"7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024\"" Oct 2 19:40:06.739522 env[1096]: time="2023-10-02T19:40:06.739493966Z" level=info msg="RemoveContainer for \"7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024\"" Oct 2 19:40:06.739718 env[1096]: time="2023-10-02T19:40:06.739674144Z" level=error msg="RemoveContainer for \"7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024\" failed" error="failed to set removing state for container \"7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024\": container is already in removing state" Oct 2 19:40:06.742598 kubelet[1404]: E1002 19:40:06.739944 1404 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container 
\"7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024\": container is already in removing state" containerID="7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024" Oct 2 19:40:06.742598 kubelet[1404]: E1002 19:40:06.739995 1404 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024": container is already in removing state; Skipping pod "cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)" Oct 2 19:40:06.742598 kubelet[1404]: E1002 19:40:06.740057 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:06.742598 kubelet[1404]: E1002 19:40:06.740405 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:40:06.748863 env[1096]: time="2023-10-02T19:40:06.748804482Z" level=info msg="RemoveContainer for \"7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024\" returns successfully" Oct 2 19:40:07.054566 kubelet[1404]: E1002 19:40:07.054338 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:07.206660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1356792629.mount: Deactivated successfully. Oct 2 19:40:07.673948 kubelet[1404]: W1002 19:40:07.671129 1404 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c5b95ac_1a94_42c5_81dc_b0098e5e789c.slice/cri-containerd-7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024.scope WatchSource:0}: container "7c6ec385c020fb2f64dd5068385029ba17030a598635fe4fd1c700e5ccef4024" in namespace "k8s.io": not found Oct 2 19:40:07.738818 kubelet[1404]: E1002 19:40:07.738779 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:07.739189 kubelet[1404]: E1002 19:40:07.739040 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:40:08.056379 kubelet[1404]: E1002 19:40:08.056174 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:08.332360 env[1096]: time="2023-10-02T19:40:08.331348587Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:40:08.337471 env[1096]: time="2023-10-02T19:40:08.336893228Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:40:08.339548 env[1096]: 
time="2023-10-02T19:40:08.339453857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:40:08.352134 env[1096]: time="2023-10-02T19:40:08.352003931Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:40:08.353151 env[1096]: time="2023-10-02T19:40:08.353055593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2\"" Oct 2 19:40:08.356904 env[1096]: time="2023-10-02T19:40:08.356698479Z" level=info msg="CreateContainer within sandbox \"9e4cd68b70eca72368ffebd993d5c399a04de406702a26a49f7852c973bcf485\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:40:08.584164 env[1096]: time="2023-10-02T19:40:08.584003261Z" level=info msg="CreateContainer within sandbox \"9e4cd68b70eca72368ffebd993d5c399a04de406702a26a49f7852c973bcf485\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"73d2978983b77de9ca4376758c5272e9c2f7c5f48022589c94d874c9e0d3f7f7\"" Oct 2 19:40:08.584709 env[1096]: time="2023-10-02T19:40:08.584655302Z" level=info msg="StartContainer for \"73d2978983b77de9ca4376758c5272e9c2f7c5f48022589c94d874c9e0d3f7f7\"" Oct 2 19:40:08.664069 systemd[1]: Started cri-containerd-73d2978983b77de9ca4376758c5272e9c2f7c5f48022589c94d874c9e0d3f7f7.scope. Oct 2 19:40:08.702000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.704525 kernel: kauditd_printk_skb: 111 callbacks suppressed Oct 2 19:40:08.704677 kernel: audit: type=1400 audit(1696275608.702:602): avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.702000 audit[1660]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1509 pid=1660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.710315 kernel: audit: type=1300 audit(1696275608.702:602): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1509 pid=1660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.710370 kernel: audit: type=1327 audit(1696275608.702:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733643239373839383362373764653963613433373637353863353237 Oct 2 19:40:08.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733643239373839383362373764653963613433373637353863353237 Oct 2 19:40:08.702000 
audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.715831 kernel: audit: type=1400 audit(1696275608.702:603): avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.715934 kernel: audit: type=1400 audit(1696275608.702:603): avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.702000 audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.702000 audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.720506 kernel: audit: type=1400 audit(1696275608.702:603): avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.720570 kernel: audit: type=1400 audit(1696275608.702:603): avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.702000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.702000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.725137 kernel: audit: type=1400 audit(1696275608.702:603): avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.702000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.727902 kernel: audit: type=1400 audit(1696275608.702:603): avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.727987 kernel: audit: type=1400 audit(1696275608.702:603): avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.702000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.728860 env[1096]: time="2023-10-02T19:40:08.728806973Z" level=info msg="StartContainer for \"73d2978983b77de9ca4376758c5272e9c2f7c5f48022589c94d874c9e0d3f7f7\" returns successfully" Oct 2 19:40:08.702000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:40:08.702000 audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.702000 audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.702000 audit: BPF prog-id=69 op=LOAD Oct 2 19:40:08.702000 audit[1660]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0000a1110 items=0 ppid=1509 pid=1660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733643239373839383362373764653963613433373637353863353237 Oct 2 19:40:08.706000 audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.706000 audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.706000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.706000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.706000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.706000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.706000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.706000 audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.706000 audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.706000 audit: BPF prog-id=70 op=LOAD Oct 2 19:40:08.706000 audit[1660]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0000a1158 items=0 ppid=1509 pid=1660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.706000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733643239373839383362373764653963613433373637353863353237 Oct 2 19:40:08.709000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:40:08.709000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:40:08.709000 audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.709000 audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.709000 audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.709000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.709000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.709000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.709000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.709000 audit[1660]: AVC avc: denied { perfmon } for pid=1660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.709000 audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.709000 audit[1660]: AVC avc: denied { bpf } for pid=1660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:40:08.709000 audit: BPF prog-id=71 op=LOAD Oct 2 19:40:08.709000 audit[1660]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0000a11e8 items=0 ppid=1509 pid=1660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733643239373839383362373764653963613433373637353863353237 Oct 2 19:40:08.741680 kubelet[1404]: E1002 19:40:08.741180 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:08.767655 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:40:08.767805 kernel: IPVS: Connection hash 
table configured (size=4096, memory=32Kbytes) Oct 2 19:40:08.767843 kernel: IPVS: ipvs loaded. Oct 2 19:40:08.775569 kernel: IPVS: [rr] scheduler registered. Oct 2 19:40:08.780615 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:40:08.785562 kernel: IPVS: [sh] scheduler registered. Oct 2 19:40:08.819000 audit[1719]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1719 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.819000 audit[1719]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1d06fd50 a2=0 a3=7ffd1d06fd3c items=0 ppid=1670 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:40:08.820000 audit[1720]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:08.820000 audit[1720]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd80668ec0 a2=0 a3=7ffd80668eac items=0 ppid=1670 pid=1720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.820000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:40:08.823000 audit[1722]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_chain pid=1722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:08.823000 audit[1722]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeab14ed50 a2=0 a3=7ffeab14ed3c items=0 ppid=1670 pid=1722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.823000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:40:08.823000 audit[1721]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_chain pid=1721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.823000 audit[1721]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff29755430 a2=0 a3=7fff2975541c items=0 ppid=1670 pid=1721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.823000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:40:08.824000 audit[1723]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=1723 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.824000 audit[1723]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd061f4980 a2=0 a3=7ffd061f496c items=0 ppid=1670 pid=1723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.824000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:40:08.825000 audit[1724]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1724 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:08.825000 audit[1724]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff85ae91e0 a2=0 a3=7fff85ae91cc items=0 ppid=1670 pid=1724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.825000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:40:08.924000 audit[1725]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.924000 audit[1725]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc014e9130 a2=0 a3=7ffc014e911c items=0 ppid=1670 pid=1725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.924000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:40:08.927000 audit[1727]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1727 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.927000 audit[1727]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff6c817480 a2=0 a3=7fff6c81746c items=0 ppid=1670 pid=1727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.927000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:40:08.932000 audit[1730]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1730 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.932000 audit[1730]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe0e3b0270 a2=0 a3=7ffe0e3b025c items=0 ppid=1670 pid=1730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:40:08.933000 audit[1731]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.933000 audit[1731]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffab903610 a2=0 a3=7fffab9035fc items=0 ppid=1670 pid=1731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.933000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:40:08.936000 audit[1733]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1733 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.936000 audit[1733]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc3c9506d0 a2=0 a3=7ffc3c9506bc items=0 ppid=1670 pid=1733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:40:08.937000 audit[1734]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1734 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.937000 audit[1734]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff645252f0 a2=0 a3=7fff645252dc items=0 ppid=1670 pid=1734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.937000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:40:08.940000 audit[1736]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1736 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.940000 audit[1736]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd88d2a1f0 a2=0 a3=7ffd88d2a1dc items=0 ppid=1670 pid=1736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:40:08.945000 audit[1739]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1739 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.945000 audit[1739]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc279ce1f0 a2=0 a3=7ffc279ce1dc items=0 ppid=1670 pid=1739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.945000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:40:08.947000 audit[1740]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1740 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.947000 
audit[1740]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5e020c90 a2=0 a3=7ffd5e020c7c items=0 ppid=1670 pid=1740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.947000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:40:08.949000 audit[1742]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1742 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.949000 audit[1742]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff43872290 a2=0 a3=7fff4387227c items=0 ppid=1670 pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.949000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:40:08.951000 audit[1743]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1743 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.951000 audit[1743]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6b558ac0 a2=0 a3=7ffe6b558aac items=0 ppid=1670 pid=1743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.951000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:40:08.953000 audit[1745]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1745 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.953000 audit[1745]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe352091a0 a2=0 a3=7ffe3520918c items=0 ppid=1670 pid=1745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:40:08.956000 audit[1748]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1748 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.956000 audit[1748]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdbbf7abb0 a2=0 a3=7ffdbbf7ab9c items=0 ppid=1670 pid=1748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.956000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:40:08.960000 audit[1751]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1751 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.960000 audit[1751]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb28c0d40 a2=0 a3=7ffcb28c0d2c items=0 ppid=1670 pid=1751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.960000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:40:08.961000 audit[1752]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1752 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.961000 audit[1752]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc330ccbf0 a2=0 a3=7ffc330ccbdc items=0 ppid=1670 pid=1752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.961000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:40:08.963000 audit[1754]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1754 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.963000 audit[1754]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe802fa560 a2=0 a3=7ffe802fa54c items=0 ppid=1670 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.963000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:40:08.966000 audit[1757]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1757 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:40:08.966000 audit[1757]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe6bf39c90 a2=0 a3=7ffe6bf39c7c items=0 ppid=1670 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.966000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:40:08.978000 audit[1761]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:40:08.978000 audit[1761]: SYSCALL arch=c000003e syscall=46 
success=yes exit=4028 a0=3 a1=7ffc96c7f4d0 a2=0 a3=7ffc96c7f4bc items=0 ppid=1670 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.978000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:40:08.985000 audit[1761]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:40:08.985000 audit[1761]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffc96c7f4d0 a2=0 a3=7ffc96c7f4bc items=0 ppid=1670 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.985000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:40:08.990000 audit[1765]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1765 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:08.990000 audit[1765]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe01d7df50 a2=0 a3=7ffe01d7df3c items=0 ppid=1670 pid=1765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:40:08.993000 audit[1767]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1767 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:08.993000 audit[1767]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffef4f36f10 a2=0 a3=7ffef4f36efc items=0 ppid=1670 pid=1767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.993000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:40:08.997000 audit[1770]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1770 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:08.997000 audit[1770]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd294d75f0 a2=0 a3=7ffd294d75dc items=0 ppid=1670 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.997000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:40:08.999000 audit[1771]: NETFILTER_CFG table=filter:63 family=10 
entries=1 op=nft_register_chain pid=1771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:08.999000 audit[1771]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8462f500 a2=0 a3=7ffc8462f4ec items=0 ppid=1670 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:08.999000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:40:09.002000 audit[1773]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1773 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:09.002000 audit[1773]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa887b410 a2=0 a3=7fffa887b3fc items=0 ppid=1670 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.002000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:40:09.004000 audit[1774]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:09.004000 audit[1774]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb57ef690 a2=0 a3=7ffcb57ef67c items=0 ppid=1670 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:40:09.006000 audit[1776]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1776 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:09.006000 audit[1776]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffed66e47d0 a2=0 a3=7ffed66e47bc items=0 ppid=1670 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.006000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:40:09.010000 audit[1779]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1779 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:09.010000 audit[1779]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd079a2e10 a2=0 a3=7ffd079a2dfc items=0 ppid=1670 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.010000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:40:09.012000 audit[1780]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:09.012000 audit[1780]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe77c861c0 a2=0 a3=7ffe77c861ac items=0 ppid=1670 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.012000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:40:09.014000 audit[1782]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:09.014000 audit[1782]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff97c4c480 a2=0 a3=7fff97c4c46c items=0 ppid=1670 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.014000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:40:09.015000 audit[1783]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:09.015000 audit[1783]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf74d6fa0 a2=0 a3=7ffdf74d6f8c items=0 ppid=1670 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.015000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:40:09.018000 audit[1785]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1785 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:09.018000 audit[1785]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc155d4280 a2=0 a3=7ffc155d426c items=0 ppid=1670 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.018000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:40:09.022000 audit[1788]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:09.022000 audit[1788]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdc2adaed0 a2=0 a3=7ffdc2adaebc items=0 ppid=1670 
pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.022000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:40:09.025000 audit[1791]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:09.025000 audit[1791]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff39590b10 a2=0 a3=7fff39590afc items=0 ppid=1670 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.025000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:40:09.028000 audit[1792]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:09.028000 audit[1792]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd3b387e40 a2=0 a3=7ffd3b387e2c items=0 ppid=1670 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.028000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:40:09.030000 audit[1794]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:09.030000 audit[1794]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffcad859b50 a2=0 a3=7ffcad859b3c items=0 ppid=1670 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.030000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:40:09.034000 audit[1797]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1797 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:40:09.034000 audit[1797]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd7ac52b80 a2=0 a3=7ffd7ac52b6c items=0 ppid=1670 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.034000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:40:09.040000 audit[1801]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:40:09.040000 audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe3e6292d0 a2=0 a3=7ffe3e6292bc items=0 ppid=1670 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.040000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:40:09.041000 audit[1801]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:40:09.041000 audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=1860 a0=3 a1=7ffe3e6292d0 a2=0 a3=7ffe3e6292bc items=0 ppid=1670 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:40:09.041000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:40:09.056948 kubelet[1404]: E1002 19:40:09.056898 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:09.567686 kubelet[1404]: E1002 19:40:09.567639 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:09.742467 kubelet[1404]: E1002 19:40:09.742417 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:10.057846 kubelet[1404]: E1002 19:40:10.057758 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:10.788025 kubelet[1404]: W1002 19:40:10.787974 1404 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c5b95ac_1a94_42c5_81dc_b0098e5e789c.slice/cri-containerd-af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2.scope WatchSource:0}: task af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2 not found: not found Oct 2 19:40:11.058409 kubelet[1404]: E1002 19:40:11.058271 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:12.059048 kubelet[1404]: E1002 19:40:12.058977 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:12.108229 update_engine[1083]: I1002 19:40:12.108089 1083 update_attempter.cc:505] Updating boot flags... 
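The PROCTITLE fields in the audit records above carry the full command line of each iptables/ip6tables call as hex-encoded, NUL-separated argv, so decoding them is the quickest way to see exactly which KUBE-* chains and rules were being installed. A minimal sketch, assuming Python 3 is available wherever the journal is being inspected (the helper name is illustrative, not part of any tool shown in this log):

    # Decode an audit PROCTITLE payload (hex-encoded argv, NUL-separated)
    # into a readable command line.
    def decode_proctitle(hex_payload: str) -> str:
        raw = bytes.fromhex(hex_payload)
        return " ".join(a.decode("utf-8", "replace") for a in raw.split(b"\x00") if a)

    # Payload copied from one of the iptables-restore records above:
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

The ip6tables records decode the same way (for example "-N KUBE-SERVICES -t nat" and "-I PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES"), matching the chain names visible in the NETFILTER_CFG entries.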
Oct 2 19:40:13.059247 kubelet[1404]: E1002 19:40:13.059161 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:14.031674 kubelet[1404]: E1002 19:40:14.031606 1404 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:14.059849 kubelet[1404]: E1002 19:40:14.059778 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:14.569164 kubelet[1404]: E1002 19:40:14.569123 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:15.060326 kubelet[1404]: E1002 19:40:15.060245 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:16.060458 kubelet[1404]: E1002 19:40:16.060397 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:17.060859 kubelet[1404]: E1002 19:40:17.060785 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:18.061185 kubelet[1404]: E1002 19:40:18.061120 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:19.062151 kubelet[1404]: E1002 19:40:19.062079 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:19.570678 kubelet[1404]: E1002 19:40:19.570640 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:19.600523 kubelet[1404]: E1002 19:40:19.600492 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:19.602590 env[1096]: time="2023-10-02T19:40:19.602545251Z" level=info msg="CreateContainer within sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:40:19.617524 env[1096]: time="2023-10-02T19:40:19.617473769Z" level=info msg="CreateContainer within sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c\"" Oct 2 19:40:19.618077 env[1096]: time="2023-10-02T19:40:19.618052252Z" level=info msg="StartContainer for \"ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c\"" Oct 2 19:40:19.633023 systemd[1]: run-containerd-runc-k8s.io-ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c-runc.uk2Ycl.mount: Deactivated successfully. Oct 2 19:40:19.635079 systemd[1]: Started cri-containerd-ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c.scope. Oct 2 19:40:19.643817 systemd[1]: cri-containerd-ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c.scope: Deactivated successfully. Oct 2 19:40:19.644057 systemd[1]: Stopped cri-containerd-ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c.scope. 
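Three kubelet messages repeat through this stretch of the journal: the static-pod watcher reports that /etc/kubernetes/manifests does not exist, the runtime network stays NotReady because no CNI plugin has initialized yet (Cilium's init container keeps failing, as the following entries show), and the DNS code warns that the configured nameservers exceed the limit, so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied. When triaging a journal like this it helps to collapse the once-per-second noise into distinct signatures; a small sketch of such a tally, assuming the log text is piped in on stdin (the regex and field names are illustrative, not from any tool in this log):

    import re
    import sys
    from collections import Counter

    # Tally kubelet warning/error signatures (source location + quoted message)
    # so repeated entries collapse into a handful of distinct problems.
    sig = re.compile(r'kubelet\[\d+\]: [EW]\d+ \S+ \d+ (\S+?:\d+)\] "([^"]+)"')
    counts = Counter(m.group(1, 2) for m in sig.finditer(sys.stdin.read()))
    for (loc, msg), n in counts.most_common():
        print(f"{n:6d}  {loc}  {msg}")

Run against this section it would surface file_linux.go:61, kubelet.go:2373 and dns.go:157 as the dominant repeating signatures, leaving the containerd/runc failures below as the actual novelty.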
Oct 2 19:40:19.839358 env[1096]: time="2023-10-02T19:40:19.839164428Z" level=info msg="shim disconnected" id=ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c Oct 2 19:40:19.839358 env[1096]: time="2023-10-02T19:40:19.839305434Z" level=warning msg="cleaning up after shim disconnected" id=ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c namespace=k8s.io Oct 2 19:40:19.839358 env[1096]: time="2023-10-02T19:40:19.839319361Z" level=info msg="cleaning up dead shim" Oct 2 19:40:19.846583 env[1096]: time="2023-10-02T19:40:19.846528379Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:40:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1841 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:40:19Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:40:19.846857 env[1096]: time="2023-10-02T19:40:19.846800494Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:40:19.847075 env[1096]: time="2023-10-02T19:40:19.847002141Z" level=error msg="Failed to pipe stdout of container \"ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c\"" error="reading from a closed fifo" Oct 2 19:40:19.848650 env[1096]: time="2023-10-02T19:40:19.848600380Z" level=error msg="Failed to pipe stderr of container \"ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c\"" error="reading from a closed fifo" Oct 2 19:40:19.852168 env[1096]: time="2023-10-02T19:40:19.852115687Z" level=error msg="StartContainer for \"ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:40:19.852411 kubelet[1404]: E1002 19:40:19.852387 1404 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c" Oct 2 19:40:19.852581 kubelet[1404]: E1002 19:40:19.852520 1404 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:40:19.852581 kubelet[1404]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:40:19.852581 kubelet[1404]: rm /hostbin/cilium-mount Oct 2 19:40:19.852581 kubelet[1404]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ljxbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:40:19.852782 kubelet[1404]: E1002 19:40:19.852585 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:40:20.062292 kubelet[1404]: E1002 19:40:20.062212 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:20.613349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c-rootfs.mount: Deactivated successfully. 
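Every attempt at the cilium-5zv6z mount-cgroup init container dies the same way: runc fails during container init while writing the SELinux keyring label ("write /proc/self/attr/keycreate: invalid argument"), so the shell command in the spec (copy cilium-mount into /hostbin, nsenter into PID 1's cgroup and mount namespaces, mount cgroup2 at /run/cilium/cgroupv2, remove the binary) never runs. The failing write is an ordinary procfs write; a hedged probe of it is sketched below. The label string is an assumption pieced together from the SELinuxOptions{Type:spc_t,Level:s0} in the spec, not something the log prints, and the result is only a diagnostic hint about the host's SELinux policy, not proof of the root cause:

    # Hypothetical probe: attempt the same kind of write runc performs when a
    # container spec carries SELinuxOptions. On a host whose loaded policy
    # rejects the label (or with SELinux unavailable) this is expected to fail,
    # mirroring the StartContainer errors above.
    def probe_keycreate(label: str = "system_u:system_r:spc_t:s0") -> None:
        try:
            # Kernel interface for labelling newly created kernel keyrings.
            with open("/proc/self/attr/keycreate", "w") as f:
                f.write(label)
            print("keycreate accepted:", label)
        except OSError as exc:
            print("keycreate rejected:", exc)  # e.g. EINVAL, as in the runc error

    if __name__ == "__main__":
        probe_keycreate()

Everything after that point in these entries is fallout: the shim never gets a task, so its stdout/stderr FIFOs close immediately ("reading from a closed fifo"), the back-to-back RemoveContainer calls trip over each other ("container is already in removing state"), and kubelet backs the pod off for 20s, then 40s, then 1m20s as the log continues.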
Oct 2 19:40:20.763472 kubelet[1404]: I1002 19:40:20.763423 1404 scope.go:115] "RemoveContainer" containerID="af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2" Oct 2 19:40:20.763926 kubelet[1404]: I1002 19:40:20.763903 1404 scope.go:115] "RemoveContainer" containerID="af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2" Oct 2 19:40:20.766672 env[1096]: time="2023-10-02T19:40:20.766590863Z" level=info msg="RemoveContainer for \"af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2\"" Oct 2 19:40:20.767165 env[1096]: time="2023-10-02T19:40:20.766945610Z" level=info msg="RemoveContainer for \"af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2\"" Oct 2 19:40:20.767165 env[1096]: time="2023-10-02T19:40:20.767085605Z" level=error msg="RemoveContainer for \"af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2\" failed" error="failed to set removing state for container \"af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2\": container is already in removing state" Oct 2 19:40:20.767291 kubelet[1404]: E1002 19:40:20.767257 1404 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2\": container is already in removing state" containerID="af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2" Oct 2 19:40:20.767403 kubelet[1404]: E1002 19:40:20.767309 1404 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2": container is already in removing state; Skipping pod "cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)" Oct 2 19:40:20.767403 kubelet[1404]: E1002 19:40:20.767391 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:20.767732 kubelet[1404]: E1002 19:40:20.767681 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:40:20.867463 env[1096]: time="2023-10-02T19:40:20.867212245Z" level=info msg="RemoveContainer for \"af2d7582b909118e6b06037c6c551755318f1e2696472263ef1328eaf88db5e2\" returns successfully" Oct 2 19:40:21.063064 kubelet[1404]: E1002 19:40:21.063003 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:22.064164 kubelet[1404]: E1002 19:40:22.064091 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:22.945230 kubelet[1404]: W1002 19:40:22.945152 1404 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c5b95ac_1a94_42c5_81dc_b0098e5e789c.slice/cri-containerd-ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c.scope WatchSource:0}: task ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c not found: not found Oct 2 19:40:23.065274 kubelet[1404]: E1002 19:40:23.065170 1404 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:24.065913 kubelet[1404]: E1002 19:40:24.065849 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:24.571255 kubelet[1404]: E1002 19:40:24.571227 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:25.066436 kubelet[1404]: E1002 19:40:25.066335 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:26.066794 kubelet[1404]: E1002 19:40:26.066718 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:27.067131 kubelet[1404]: E1002 19:40:27.067077 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:28.067849 kubelet[1404]: E1002 19:40:28.067787 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:29.068443 kubelet[1404]: E1002 19:40:29.068359 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:29.572345 kubelet[1404]: E1002 19:40:29.572310 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:30.068915 kubelet[1404]: E1002 19:40:30.068819 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:31.069953 kubelet[1404]: E1002 19:40:31.069892 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:31.599825 kubelet[1404]: E1002 19:40:31.599794 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:31.600054 kubelet[1404]: E1002 19:40:31.599991 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:40:32.070359 kubelet[1404]: E1002 19:40:32.070301 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:33.071445 kubelet[1404]: E1002 19:40:33.071388 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:34.031503 kubelet[1404]: E1002 19:40:34.031448 1404 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:34.071553 kubelet[1404]: E1002 19:40:34.071474 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:34.573816 kubelet[1404]: E1002 19:40:34.573767 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:35.071700 kubelet[1404]: E1002 19:40:35.071635 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:36.072129 kubelet[1404]: E1002 19:40:36.072061 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:37.072946 kubelet[1404]: E1002 19:40:37.072878 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:38.073844 kubelet[1404]: E1002 19:40:38.073787 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:39.074868 kubelet[1404]: E1002 19:40:39.074761 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:39.574991 kubelet[1404]: E1002 19:40:39.574950 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:40.074967 kubelet[1404]: E1002 19:40:40.074915 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:41.076053 kubelet[1404]: E1002 19:40:41.075971 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:42.077076 kubelet[1404]: E1002 19:40:42.077012 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:43.077498 kubelet[1404]: E1002 19:40:43.077455 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:44.078555 kubelet[1404]: E1002 19:40:44.078468 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:44.576079 kubelet[1404]: E1002 19:40:44.576045 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:45.079003 kubelet[1404]: E1002 19:40:45.078946 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:45.600061 kubelet[1404]: E1002 19:40:45.600003 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:45.601825 env[1096]: time="2023-10-02T19:40:45.601772676Z" level=info msg="CreateContainer within sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:40:45.613680 env[1096]: time="2023-10-02T19:40:45.613634911Z" level=info msg="CreateContainer within sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9\"" Oct 2 19:40:45.614111 env[1096]: time="2023-10-02T19:40:45.614066540Z" level=info msg="StartContainer for 
\"a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9\"" Oct 2 19:40:45.631041 systemd[1]: Started cri-containerd-a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9.scope. Oct 2 19:40:45.639065 systemd[1]: cri-containerd-a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9.scope: Deactivated successfully. Oct 2 19:40:45.639410 systemd[1]: Stopped cri-containerd-a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9.scope. Oct 2 19:40:45.642488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9-rootfs.mount: Deactivated successfully. Oct 2 19:40:45.651293 env[1096]: time="2023-10-02T19:40:45.651227589Z" level=info msg="shim disconnected" id=a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9 Oct 2 19:40:45.651452 env[1096]: time="2023-10-02T19:40:45.651297879Z" level=warning msg="cleaning up after shim disconnected" id=a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9 namespace=k8s.io Oct 2 19:40:45.651452 env[1096]: time="2023-10-02T19:40:45.651307387Z" level=info msg="cleaning up dead shim" Oct 2 19:40:45.658326 env[1096]: time="2023-10-02T19:40:45.658286004Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:40:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1882 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:40:45Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:40:45.658559 env[1096]: time="2023-10-02T19:40:45.658502575Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:40:45.658760 env[1096]: time="2023-10-02T19:40:45.658722531Z" level=error msg="Failed to pipe stdout of container \"a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9\"" error="reading from a closed fifo" Oct 2 19:40:45.659010 env[1096]: time="2023-10-02T19:40:45.658918834Z" level=error msg="Failed to pipe stderr of container \"a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9\"" error="reading from a closed fifo" Oct 2 19:40:45.661951 env[1096]: time="2023-10-02T19:40:45.661902196Z" level=error msg="StartContainer for \"a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:40:45.662163 kubelet[1404]: E1002 19:40:45.662135 1404 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9" Oct 2 19:40:45.662281 kubelet[1404]: E1002 19:40:45.662258 1404 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:40:45.662281 kubelet[1404]: nsenter 
--cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:40:45.662281 kubelet[1404]: rm /hostbin/cilium-mount Oct 2 19:40:45.662281 kubelet[1404]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ljxbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:40:45.662458 kubelet[1404]: E1002 19:40:45.662300 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:40:45.803802 kubelet[1404]: I1002 19:40:45.803772 1404 scope.go:115] "RemoveContainer" containerID="ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c" Oct 2 19:40:45.804185 kubelet[1404]: I1002 19:40:45.804160 1404 scope.go:115] "RemoveContainer" containerID="ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c" Oct 2 19:40:45.804706 env[1096]: time="2023-10-02T19:40:45.804675563Z" level=info msg="RemoveContainer for \"ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c\"" Oct 2 19:40:45.804919 env[1096]: time="2023-10-02T19:40:45.804899477Z" level=info msg="RemoveContainer for \"ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c\"" Oct 2 19:40:45.804984 env[1096]: time="2023-10-02T19:40:45.804961492Z" level=error msg="RemoveContainer for \"ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c\" failed" error="failed to set removing state for container \"ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c\": container is already in removing state" Oct 2 19:40:45.805146 kubelet[1404]: E1002 19:40:45.805107 1404 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing 
state for container \"ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c\": container is already in removing state" containerID="ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c" Oct 2 19:40:45.805222 kubelet[1404]: E1002 19:40:45.805151 1404 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c": container is already in removing state; Skipping pod "cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)" Oct 2 19:40:45.805252 kubelet[1404]: E1002 19:40:45.805227 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:45.805457 kubelet[1404]: E1002 19:40:45.805438 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:40:45.807722 env[1096]: time="2023-10-02T19:40:45.807687227Z" level=info msg="RemoveContainer for \"ded4289eee470f339ae8fbf253dc918a9b620d87fb2bd00b93aad8545f5ac24c\" returns successfully" Oct 2 19:40:46.079999 kubelet[1404]: E1002 19:40:46.079944 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:47.080836 kubelet[1404]: E1002 19:40:47.080767 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:48.081468 kubelet[1404]: E1002 19:40:48.081418 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:48.756786 kubelet[1404]: W1002 19:40:48.756711 1404 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c5b95ac_1a94_42c5_81dc_b0098e5e789c.slice/cri-containerd-a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9.scope WatchSource:0}: task a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9 not found: not found Oct 2 19:40:49.082426 kubelet[1404]: E1002 19:40:49.082279 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:49.577596 kubelet[1404]: E1002 19:40:49.577560 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:50.082783 kubelet[1404]: E1002 19:40:50.082711 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:51.083722 kubelet[1404]: E1002 19:40:51.083648 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:52.084558 kubelet[1404]: E1002 19:40:52.084487 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:53.085153 kubelet[1404]: E1002 19:40:53.085085 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:40:54.031013 kubelet[1404]: E1002 19:40:54.030870 1404 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:54.086353 kubelet[1404]: E1002 19:40:54.086250 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:54.578273 kubelet[1404]: E1002 19:40:54.578234 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:55.087089 kubelet[1404]: E1002 19:40:55.087027 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:56.087509 kubelet[1404]: E1002 19:40:56.087445 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:57.087982 kubelet[1404]: E1002 19:40:57.087912 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:58.088631 kubelet[1404]: E1002 19:40:58.088550 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:58.599736 kubelet[1404]: E1002 19:40:58.599678 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:58.599935 kubelet[1404]: E1002 19:40:58.599886 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:40:59.089020 kubelet[1404]: E1002 19:40:59.088954 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:59.579505 kubelet[1404]: E1002 19:40:59.579469 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:00.089701 kubelet[1404]: E1002 19:41:00.089619 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:01.090708 kubelet[1404]: E1002 19:41:01.090645 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:02.091152 kubelet[1404]: E1002 19:41:02.091096 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:03.091480 kubelet[1404]: E1002 19:41:03.091415 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:04.092313 kubelet[1404]: E1002 19:41:04.092238 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:04.580393 kubelet[1404]: E1002 19:41:04.580353 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" Oct 2 19:41:05.092446 kubelet[1404]: E1002 19:41:05.092364 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:06.093322 kubelet[1404]: E1002 19:41:06.093252 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:07.093893 kubelet[1404]: E1002 19:41:07.093841 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:08.094438 kubelet[1404]: E1002 19:41:08.094369 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:09.095170 kubelet[1404]: E1002 19:41:09.095114 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:09.581627 kubelet[1404]: E1002 19:41:09.581594 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:10.096275 kubelet[1404]: E1002 19:41:10.096203 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:11.097362 kubelet[1404]: E1002 19:41:11.097290 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:11.600184 kubelet[1404]: E1002 19:41:11.600135 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:12.097908 kubelet[1404]: E1002 19:41:12.097841 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:13.098422 kubelet[1404]: E1002 19:41:13.098358 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:13.600460 kubelet[1404]: E1002 19:41:13.600415 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:13.600696 kubelet[1404]: E1002 19:41:13.600638 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:41:14.031028 kubelet[1404]: E1002 19:41:14.030951 1404 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:14.099515 kubelet[1404]: E1002 19:41:14.099412 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:14.582312 kubelet[1404]: E1002 19:41:14.582277 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:15.099689 kubelet[1404]: E1002 19:41:15.099612 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:16.100037 
kubelet[1404]: E1002 19:41:16.099966 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:17.100244 kubelet[1404]: E1002 19:41:17.100157 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:18.100896 kubelet[1404]: E1002 19:41:18.100824 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:19.101701 kubelet[1404]: E1002 19:41:19.101636 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:19.583027 kubelet[1404]: E1002 19:41:19.582995 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:20.101917 kubelet[1404]: E1002 19:41:20.101828 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:21.102569 kubelet[1404]: E1002 19:41:21.102490 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:22.103182 kubelet[1404]: E1002 19:41:22.103115 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:23.104071 kubelet[1404]: E1002 19:41:23.104005 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:24.104732 kubelet[1404]: E1002 19:41:24.104666 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:24.583449 kubelet[1404]: E1002 19:41:24.583411 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:25.105028 kubelet[1404]: E1002 19:41:25.104955 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:26.105384 kubelet[1404]: E1002 19:41:26.105309 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:26.600270 kubelet[1404]: E1002 19:41:26.600225 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:26.602014 env[1096]: time="2023-10-02T19:41:26.601972131Z" level=info msg="CreateContainer within sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:41:26.615975 env[1096]: time="2023-10-02T19:41:26.615936807Z" level=info msg="CreateContainer within sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f\"" Oct 2 19:41:26.616307 env[1096]: time="2023-10-02T19:41:26.616275734Z" level=info msg="StartContainer for \"a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f\"" Oct 2 19:41:26.631326 systemd[1]: Started 
cri-containerd-a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f.scope. Oct 2 19:41:26.639929 systemd[1]: cri-containerd-a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f.scope: Deactivated successfully. Oct 2 19:41:26.640191 systemd[1]: Stopped cri-containerd-a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f.scope. Oct 2 19:41:26.642641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f-rootfs.mount: Deactivated successfully. Oct 2 19:41:26.651515 env[1096]: time="2023-10-02T19:41:26.651448425Z" level=info msg="shim disconnected" id=a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f Oct 2 19:41:26.651515 env[1096]: time="2023-10-02T19:41:26.651507768Z" level=warning msg="cleaning up after shim disconnected" id=a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f namespace=k8s.io Oct 2 19:41:26.651765 env[1096]: time="2023-10-02T19:41:26.651527036Z" level=info msg="cleaning up dead shim" Oct 2 19:41:26.658346 env[1096]: time="2023-10-02T19:41:26.658291032Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:41:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1920 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:41:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:41:26.658634 env[1096]: time="2023-10-02T19:41:26.658576506Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:41:26.658849 env[1096]: time="2023-10-02T19:41:26.658781338Z" level=error msg="Failed to pipe stdout of container \"a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f\"" error="reading from a closed fifo" Oct 2 19:41:26.658849 env[1096]: time="2023-10-02T19:41:26.658815503Z" level=error msg="Failed to pipe stderr of container \"a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f\"" error="reading from a closed fifo" Oct 2 19:41:26.661377 env[1096]: time="2023-10-02T19:41:26.661337484Z" level=error msg="StartContainer for \"a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:41:26.661635 kubelet[1404]: E1002 19:41:26.661609 1404 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f" Oct 2 19:41:26.661775 kubelet[1404]: E1002 19:41:26.661743 1404 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:41:26.661775 kubelet[1404]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 
19:41:26.661775 kubelet[1404]: rm /hostbin/cilium-mount Oct 2 19:41:26.661775 kubelet[1404]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ljxbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:41:26.661976 kubelet[1404]: E1002 19:41:26.661788 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:41:26.874332 kubelet[1404]: I1002 19:41:26.873396 1404 scope.go:115] "RemoveContainer" containerID="a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9" Oct 2 19:41:26.874332 kubelet[1404]: I1002 19:41:26.873765 1404 scope.go:115] "RemoveContainer" containerID="a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9" Oct 2 19:41:26.874525 env[1096]: time="2023-10-02T19:41:26.874490644Z" level=info msg="RemoveContainer for \"a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9\"" Oct 2 19:41:26.874833 env[1096]: time="2023-10-02T19:41:26.874796528Z" level=info msg="RemoveContainer for \"a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9\"" Oct 2 19:41:26.874931 env[1096]: time="2023-10-02T19:41:26.874886309Z" level=error msg="RemoveContainer for \"a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9\" failed" error="failed to set removing state for container \"a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9\": container is already in removing state" Oct 2 19:41:26.875086 kubelet[1404]: E1002 19:41:26.875057 1404 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9\": container is 
already in removing state" containerID="a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9" Oct 2 19:41:26.875086 kubelet[1404]: E1002 19:41:26.875084 1404 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9": container is already in removing state; Skipping pod "cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)" Oct 2 19:41:26.875199 kubelet[1404]: E1002 19:41:26.875136 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:26.875403 kubelet[1404]: E1002 19:41:26.875383 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:41:26.877364 env[1096]: time="2023-10-02T19:41:26.877339209Z" level=info msg="RemoveContainer for \"a5787d4ba3acc3132da81fddad51b6e97d5d21c291906f04af4978a75e8773f9\" returns successfully" Oct 2 19:41:27.105937 kubelet[1404]: E1002 19:41:27.105764 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:28.106436 kubelet[1404]: E1002 19:41:28.106378 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:29.107430 kubelet[1404]: E1002 19:41:29.107377 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:29.584688 kubelet[1404]: E1002 19:41:29.584654 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:29.757632 kubelet[1404]: W1002 19:41:29.757588 1404 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c5b95ac_1a94_42c5_81dc_b0098e5e789c.slice/cri-containerd-a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f.scope WatchSource:0}: task a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f not found: not found Oct 2 19:41:30.108172 kubelet[1404]: E1002 19:41:30.108106 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:31.108386 kubelet[1404]: E1002 19:41:31.108307 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:32.108690 kubelet[1404]: E1002 19:41:32.108625 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:33.109456 kubelet[1404]: E1002 19:41:33.109392 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:34.030968 kubelet[1404]: E1002 19:41:34.030915 1404 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:34.110508 kubelet[1404]: E1002 19:41:34.110418 1404 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:34.586109 kubelet[1404]: E1002 19:41:34.586052 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:35.110640 kubelet[1404]: E1002 19:41:35.110554 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:36.111457 kubelet[1404]: E1002 19:41:36.111398 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:37.111622 kubelet[1404]: E1002 19:41:37.111560 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:38.111941 kubelet[1404]: E1002 19:41:38.111880 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:39.112803 kubelet[1404]: E1002 19:41:39.112720 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:39.587456 kubelet[1404]: E1002 19:41:39.587435 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:39.600184 kubelet[1404]: E1002 19:41:39.600158 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:39.600408 kubelet[1404]: E1002 19:41:39.600393 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:41:40.113364 kubelet[1404]: E1002 19:41:40.113302 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:41.113791 kubelet[1404]: E1002 19:41:41.113724 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:42.114351 kubelet[1404]: E1002 19:41:42.114290 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:43.114846 kubelet[1404]: E1002 19:41:43.114794 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:44.115801 kubelet[1404]: E1002 19:41:44.115748 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:44.588690 kubelet[1404]: E1002 19:41:44.588665 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:45.116291 kubelet[1404]: E1002 19:41:45.116219 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:46.116418 kubelet[1404]: E1002 19:41:46.116364 1404 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:47.117010 kubelet[1404]: E1002 19:41:47.116941 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:48.118093 kubelet[1404]: E1002 19:41:48.118026 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:49.118788 kubelet[1404]: E1002 19:41:49.118728 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:49.590082 kubelet[1404]: E1002 19:41:49.589873 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:50.119523 kubelet[1404]: E1002 19:41:50.119432 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:51.119707 kubelet[1404]: E1002 19:41:51.119606 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:52.119990 kubelet[1404]: E1002 19:41:52.119909 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:53.120143 kubelet[1404]: E1002 19:41:53.120088 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:54.031666 kubelet[1404]: E1002 19:41:54.031607 1404 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:54.121178 kubelet[1404]: E1002 19:41:54.121140 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:54.591161 kubelet[1404]: E1002 19:41:54.591134 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:54.599876 kubelet[1404]: E1002 19:41:54.599857 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:54.600268 kubelet[1404]: E1002 19:41:54.600219 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:41:55.121643 kubelet[1404]: E1002 19:41:55.121569 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:56.124800 kubelet[1404]: E1002 19:41:56.124715 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:57.125858 kubelet[1404]: E1002 19:41:57.125739 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:58.128934 kubelet[1404]: E1002 19:41:58.128791 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:59.129996 
kubelet[1404]: E1002 19:41:59.129865 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:59.596710 kubelet[1404]: E1002 19:41:59.595731 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:00.130781 kubelet[1404]: E1002 19:42:00.130693 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:01.131714 kubelet[1404]: E1002 19:42:01.131598 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:02.132023 kubelet[1404]: E1002 19:42:02.131827 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:03.133783 kubelet[1404]: E1002 19:42:03.132588 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:04.133070 kubelet[1404]: E1002 19:42:04.132923 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:04.602659 kubelet[1404]: E1002 19:42:04.601724 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:05.133844 kubelet[1404]: E1002 19:42:05.133646 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:06.134885 kubelet[1404]: E1002 19:42:06.134768 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:07.135895 kubelet[1404]: E1002 19:42:07.135609 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:08.136350 kubelet[1404]: E1002 19:42:08.135977 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:09.136505 kubelet[1404]: E1002 19:42:09.136413 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:09.600149 kubelet[1404]: E1002 19:42:09.600044 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:09.600418 kubelet[1404]: E1002 19:42:09.600334 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:42:09.612970 kubelet[1404]: E1002 19:42:09.612874 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:10.139738 kubelet[1404]: E1002 19:42:10.136921 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:11.143237 kubelet[1404]: E1002 
19:42:11.142377 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:12.144246 kubelet[1404]: E1002 19:42:12.143354 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:13.144832 kubelet[1404]: E1002 19:42:13.144469 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:14.034184 kubelet[1404]: E1002 19:42:14.031636 1404 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:14.147456 kubelet[1404]: E1002 19:42:14.147321 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:14.621155 kubelet[1404]: E1002 19:42:14.619478 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:15.148408 kubelet[1404]: E1002 19:42:15.148278 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:16.149654 kubelet[1404]: E1002 19:42:16.149268 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:17.150376 kubelet[1404]: E1002 19:42:17.150226 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:18.151568 kubelet[1404]: E1002 19:42:18.151341 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:19.152695 kubelet[1404]: E1002 19:42:19.151692 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:19.625129 kubelet[1404]: E1002 19:42:19.621929 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:20.151963 kubelet[1404]: E1002 19:42:20.151863 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:21.152171 kubelet[1404]: E1002 19:42:21.152121 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:22.153218 kubelet[1404]: E1002 19:42:22.153084 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:23.154955 kubelet[1404]: E1002 19:42:23.154076 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:24.154662 kubelet[1404]: E1002 19:42:24.154475 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:24.602075 kubelet[1404]: E1002 19:42:24.601626 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:24.602571 kubelet[1404]: E1002 19:42:24.601842 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:24.602908 kubelet[1404]: E1002 19:42:24.602886 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:42:24.630611 kubelet[1404]: E1002 19:42:24.629162 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:25.160899 kubelet[1404]: E1002 19:42:25.157958 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:26.158878 kubelet[1404]: E1002 19:42:26.158413 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:27.166460 kubelet[1404]: E1002 19:42:27.161002 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:28.167192 kubelet[1404]: E1002 19:42:28.166637 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:29.167790 kubelet[1404]: E1002 19:42:29.166866 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:29.630966 kubelet[1404]: E1002 19:42:29.630626 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:30.168748 kubelet[1404]: E1002 19:42:30.167843 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:31.168718 kubelet[1404]: E1002 19:42:31.168601 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:32.169267 kubelet[1404]: E1002 19:42:32.168810 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:33.169102 kubelet[1404]: E1002 19:42:33.168964 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:34.032597 kubelet[1404]: E1002 19:42:34.030721 1404 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:34.169917 kubelet[1404]: E1002 19:42:34.169797 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:34.636726 kubelet[1404]: E1002 19:42:34.634580 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:35.181469 kubelet[1404]: E1002 19:42:35.170698 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:36.182531 kubelet[1404]: E1002 19:42:36.182162 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:42:36.603279 kubelet[1404]: E1002 19:42:36.601414 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:36.603279 kubelet[1404]: E1002 19:42:36.601732 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:42:37.182788 kubelet[1404]: E1002 19:42:37.182656 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:38.183103 kubelet[1404]: E1002 19:42:38.183033 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:39.183830 kubelet[1404]: E1002 19:42:39.183747 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:39.636065 kubelet[1404]: E1002 19:42:39.635937 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:40.184061 kubelet[1404]: E1002 19:42:40.183953 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:41.185124 kubelet[1404]: E1002 19:42:41.185049 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:42.185611 kubelet[1404]: E1002 19:42:42.185555 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:43.186407 kubelet[1404]: E1002 19:42:43.186374 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:44.186790 kubelet[1404]: E1002 19:42:44.186726 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:44.637272 kubelet[1404]: E1002 19:42:44.637148 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:45.187559 kubelet[1404]: E1002 19:42:45.187470 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:46.188207 kubelet[1404]: E1002 19:42:46.188113 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:47.188749 kubelet[1404]: E1002 19:42:47.188693 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:48.189619 kubelet[1404]: E1002 19:42:48.189527 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:49.189724 kubelet[1404]: E1002 19:42:49.189654 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:49.637887 kubelet[1404]: E1002 19:42:49.637780 1404 kubelet.go:2373] "Container 
runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:50.190804 kubelet[1404]: E1002 19:42:50.190756 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:51.190928 kubelet[1404]: E1002 19:42:51.190883 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:51.600129 kubelet[1404]: E1002 19:42:51.600030 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:51.601728 env[1096]: time="2023-10-02T19:42:51.601688810Z" level=info msg="CreateContainer within sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 19:42:51.611151 env[1096]: time="2023-10-02T19:42:51.611117590Z" level=info msg="CreateContainer within sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8\"" Oct 2 19:42:51.611417 env[1096]: time="2023-10-02T19:42:51.611394859Z" level=info msg="StartContainer for \"2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8\"" Oct 2 19:42:51.625034 systemd[1]: Started cri-containerd-2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8.scope. Oct 2 19:42:51.632864 systemd[1]: cri-containerd-2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8.scope: Deactivated successfully. Oct 2 19:42:51.633114 systemd[1]: Stopped cri-containerd-2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8.scope. Oct 2 19:42:51.635733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8-rootfs.mount: Deactivated successfully. 
Oct 2 19:42:51.639703 env[1096]: time="2023-10-02T19:42:51.639658601Z" level=info msg="shim disconnected" id=2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8 Oct 2 19:42:51.639796 env[1096]: time="2023-10-02T19:42:51.639705578Z" level=warning msg="cleaning up after shim disconnected" id=2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8 namespace=k8s.io Oct 2 19:42:51.639796 env[1096]: time="2023-10-02T19:42:51.639714405Z" level=info msg="cleaning up dead shim" Oct 2 19:42:51.645734 env[1096]: time="2023-10-02T19:42:51.645702639Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1966 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:42:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:42:51.645957 env[1096]: time="2023-10-02T19:42:51.645912592Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:42:51.646147 env[1096]: time="2023-10-02T19:42:51.646099271Z" level=error msg="Failed to pipe stderr of container \"2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8\"" error="reading from a closed fifo" Oct 2 19:42:51.646612 env[1096]: time="2023-10-02T19:42:51.646574750Z" level=error msg="Failed to pipe stdout of container \"2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8\"" error="reading from a closed fifo" Oct 2 19:42:51.648725 env[1096]: time="2023-10-02T19:42:51.648685129Z" level=error msg="StartContainer for \"2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:42:51.648913 kubelet[1404]: E1002 19:42:51.648882 1404 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8" Oct 2 19:42:51.649019 kubelet[1404]: E1002 19:42:51.649011 1404 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:42:51.649019 kubelet[1404]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:42:51.649019 kubelet[1404]: rm /hostbin/cilium-mount Oct 2 19:42:51.649019 kubelet[1404]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ljxbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:42:51.649151 kubelet[1404]: E1002 19:42:51.649047 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:42:52.191160 kubelet[1404]: E1002 19:42:52.191114 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:52.306486 kubelet[1404]: I1002 19:42:52.306469 1404 scope.go:115] "RemoveContainer" containerID="a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f" Oct 2 19:42:52.306726 kubelet[1404]: I1002 19:42:52.306713 1404 scope.go:115] "RemoveContainer" containerID="a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f" Oct 2 19:42:52.307391 env[1096]: time="2023-10-02T19:42:52.307361645Z" level=info msg="RemoveContainer for \"a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f\"" Oct 2 19:42:52.307614 env[1096]: time="2023-10-02T19:42:52.307572129Z" level=info msg="RemoveContainer for \"a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f\"" Oct 2 19:42:52.307687 env[1096]: time="2023-10-02T19:42:52.307648261Z" level=error msg="RemoveContainer for \"a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f\" failed" error="failed to set removing state for container \"a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f\": container is already in removing state" Oct 2 19:42:52.307790 kubelet[1404]: E1002 19:42:52.307774 1404 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container 
\"a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f\": container is already in removing state" containerID="a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f" Oct 2 19:42:52.307873 kubelet[1404]: E1002 19:42:52.307805 1404 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f": container is already in removing state; Skipping pod "cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)" Oct 2 19:42:52.307873 kubelet[1404]: E1002 19:42:52.307853 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:52.308035 kubelet[1404]: E1002 19:42:52.308024 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-5zv6z_kube-system(0c5b95ac-1a94-42c5-81dc-b0098e5e789c)\"" pod="kube-system/cilium-5zv6z" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c Oct 2 19:42:52.309674 env[1096]: time="2023-10-02T19:42:52.309653112Z" level=info msg="RemoveContainer for \"a5328c5071873c0488f11bbabe5bc7f239981769ad21d2d4064d56061f91ed7f\" returns successfully" Oct 2 19:42:53.191495 kubelet[1404]: E1002 19:42:53.191432 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:53.839821 env[1096]: time="2023-10-02T19:42:53.839771120Z" level=info msg="StopPodSandbox for \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\"" Oct 2 19:42:53.840245 env[1096]: time="2023-10-02T19:42:53.839852361Z" level=info msg="Container to stop \"2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:42:53.841137 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4-shm.mount: Deactivated successfully. Oct 2 19:42:53.849429 systemd[1]: cri-containerd-1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4.scope: Deactivated successfully. Oct 2 19:42:53.848000 audit: BPF prog-id=61 op=UNLOAD Oct 2 19:42:53.851328 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 19:42:53.851414 kernel: audit: type=1334 audit(1696275773.848:652): prog-id=61 op=UNLOAD Oct 2 19:42:53.854000 audit: BPF prog-id=65 op=UNLOAD Oct 2 19:42:53.857562 kernel: audit: type=1334 audit(1696275773.854:653): prog-id=65 op=UNLOAD Oct 2 19:42:53.864623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4-rootfs.mount: Deactivated successfully. 
Oct 2 19:42:53.872447 env[1096]: time="2023-10-02T19:42:53.872369947Z" level=info msg="shim disconnected" id=1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4 Oct 2 19:42:53.872447 env[1096]: time="2023-10-02T19:42:53.872420642Z" level=warning msg="cleaning up after shim disconnected" id=1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4 namespace=k8s.io Oct 2 19:42:53.872447 env[1096]: time="2023-10-02T19:42:53.872429759Z" level=info msg="cleaning up dead shim" Oct 2 19:42:53.878798 env[1096]: time="2023-10-02T19:42:53.878746336Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1996 runtime=io.containerd.runc.v2\n" Oct 2 19:42:53.879078 env[1096]: time="2023-10-02T19:42:53.879053621Z" level=info msg="TearDown network for sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" successfully" Oct 2 19:42:53.879078 env[1096]: time="2023-10-02T19:42:53.879076193Z" level=info msg="StopPodSandbox for \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" returns successfully" Oct 2 19:42:54.021978 kubelet[1404]: I1002 19:42:54.021927 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljxbj\" (UniqueName: \"kubernetes.io/projected/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-kube-api-access-ljxbj\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.021978 kubelet[1404]: I1002 19:42:54.021978 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-host-proc-sys-kernel\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.022170 kubelet[1404]: I1002 19:42:54.022002 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-hubble-tls\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.022170 kubelet[1404]: I1002 19:42:54.022018 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cni-path\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.022170 kubelet[1404]: I1002 19:42:54.022033 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-run\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.022170 kubelet[1404]: I1002 19:42:54.022050 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-hostproc\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.022170 kubelet[1404]: I1002 19:42:54.022068 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-config-path\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.022170 
kubelet[1404]: I1002 19:42:54.022083 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-bpf-maps\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.022323 kubelet[1404]: I1002 19:42:54.022101 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-clustermesh-secrets\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.022323 kubelet[1404]: I1002 19:42:54.022116 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-host-proc-sys-net\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.022323 kubelet[1404]: I1002 19:42:54.022131 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-lib-modules\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.022323 kubelet[1404]: I1002 19:42:54.022151 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-xtables-lock\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.022323 kubelet[1404]: I1002 19:42:54.022168 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-cgroup\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.022323 kubelet[1404]: I1002 19:42:54.022184 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-etc-cni-netd\") pod \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\" (UID: \"0c5b95ac-1a94-42c5-81dc-b0098e5e789c\") " Oct 2 19:42:54.022469 kubelet[1404]: I1002 19:42:54.022219 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:54.022469 kubelet[1404]: I1002 19:42:54.022252 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:54.022469 kubelet[1404]: W1002 19:42:54.022267 1404 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/0c5b95ac-1a94-42c5-81dc-b0098e5e789c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:42:54.022469 kubelet[1404]: I1002 19:42:54.022283 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:54.022469 kubelet[1404]: I1002 19:42:54.022321 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:54.022619 kubelet[1404]: I1002 19:42:54.022338 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cni-path" (OuterVolumeSpecName: "cni-path") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:54.022619 kubelet[1404]: I1002 19:42:54.022353 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:54.022619 kubelet[1404]: I1002 19:42:54.022366 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:54.022619 kubelet[1404]: I1002 19:42:54.022379 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:54.022619 kubelet[1404]: I1002 19:42:54.022392 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:54.022738 kubelet[1404]: I1002 19:42:54.022406 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-hostproc" (OuterVolumeSpecName: "hostproc") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:54.025353 kubelet[1404]: I1002 19:42:54.025315 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:42:54.025754 kubelet[1404]: I1002 19:42:54.024414 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:42:54.025896 kubelet[1404]: I1002 19:42:54.025858 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-kube-api-access-ljxbj" (OuterVolumeSpecName: "kube-api-access-ljxbj") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "kube-api-access-ljxbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:42:54.026149 systemd[1]: var-lib-kubelet-pods-0c5b95ac\x2d1a94\x2d42c5\x2d81dc\x2db0098e5e789c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dljxbj.mount: Deactivated successfully. Oct 2 19:42:54.026295 kubelet[1404]: I1002 19:42:54.026162 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0c5b95ac-1a94-42c5-81dc-b0098e5e789c" (UID: "0c5b95ac-1a94-42c5-81dc-b0098e5e789c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:42:54.027833 systemd[1]: var-lib-kubelet-pods-0c5b95ac\x2d1a94\x2d42c5\x2d81dc\x2db0098e5e789c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:42:54.027908 systemd[1]: var-lib-kubelet-pods-0c5b95ac\x2d1a94\x2d42c5\x2d81dc\x2db0098e5e789c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 2 19:42:54.031279 kubelet[1404]: E1002 19:42:54.031258 1404 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:54.122810 kubelet[1404]: I1002 19:42:54.122732 1404 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-bpf-maps\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:54.122810 kubelet[1404]: I1002 19:42:54.122755 1404 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-clustermesh-secrets\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:54.122810 kubelet[1404]: I1002 19:42:54.122765 1404 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-host-proc-sys-net\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:54.122810 kubelet[1404]: I1002 19:42:54.122774 1404 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-lib-modules\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:54.122810 kubelet[1404]: I1002 19:42:54.122783 1404 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-xtables-lock\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:54.122810 kubelet[1404]: I1002 19:42:54.122791 1404 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-cgroup\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:54.122810 kubelet[1404]: I1002 19:42:54.122800 1404 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-etc-cni-netd\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:54.122810 kubelet[1404]: I1002 19:42:54.122810 1404 reconciler.go:399] "Volume detached for volume \"kube-api-access-ljxbj\" (UniqueName: \"kubernetes.io/projected/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-kube-api-access-ljxbj\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:54.123054 kubelet[1404]: I1002 19:42:54.122818 1404 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-host-proc-sys-kernel\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:54.123054 kubelet[1404]: I1002 19:42:54.122827 1404 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-hubble-tls\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:54.123054 kubelet[1404]: I1002 19:42:54.122836 1404 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cni-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:54.123054 kubelet[1404]: I1002 19:42:54.122844 1404 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-run\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:54.123054 kubelet[1404]: I1002 19:42:54.122852 1404 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-hostproc\") on node \"10.0.0.12\" 
DevicePath \"\"" Oct 2 19:42:54.123054 kubelet[1404]: I1002 19:42:54.122861 1404 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c5b95ac-1a94-42c5-81dc-b0098e5e789c-cilium-config-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:54.192153 kubelet[1404]: E1002 19:42:54.192106 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:54.310964 kubelet[1404]: I1002 19:42:54.310938 1404 scope.go:115] "RemoveContainer" containerID="2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8" Oct 2 19:42:54.311861 env[1096]: time="2023-10-02T19:42:54.311823099Z" level=info msg="RemoveContainer for \"2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8\"" Oct 2 19:42:54.314458 env[1096]: time="2023-10-02T19:42:54.314435386Z" level=info msg="RemoveContainer for \"2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8\" returns successfully" Oct 2 19:42:54.314662 systemd[1]: Removed slice kubepods-burstable-pod0c5b95ac_1a94_42c5_81dc_b0098e5e789c.slice. Oct 2 19:42:54.333059 kubelet[1404]: I1002 19:42:54.333035 1404 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:42:54.333122 kubelet[1404]: E1002 19:42:54.333093 1404 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="0c5b95ac-1a94-42c5-81dc-b0098e5e789c" containerName="mount-cgroup" Oct 2 19:42:54.333122 kubelet[1404]: E1002 19:42:54.333102 1404 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="0c5b95ac-1a94-42c5-81dc-b0098e5e789c" containerName="mount-cgroup" Oct 2 19:42:54.333122 kubelet[1404]: E1002 19:42:54.333108 1404 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="0c5b95ac-1a94-42c5-81dc-b0098e5e789c" containerName="mount-cgroup" Oct 2 19:42:54.333122 kubelet[1404]: E1002 19:42:54.333113 1404 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="0c5b95ac-1a94-42c5-81dc-b0098e5e789c" containerName="mount-cgroup" Oct 2 19:42:54.333220 kubelet[1404]: I1002 19:42:54.333129 1404 memory_manager.go:345] "RemoveStaleState removing state" podUID="0c5b95ac-1a94-42c5-81dc-b0098e5e789c" containerName="mount-cgroup" Oct 2 19:42:54.333220 kubelet[1404]: I1002 19:42:54.333136 1404 memory_manager.go:345] "RemoveStaleState removing state" podUID="0c5b95ac-1a94-42c5-81dc-b0098e5e789c" containerName="mount-cgroup" Oct 2 19:42:54.333220 kubelet[1404]: I1002 19:42:54.333142 1404 memory_manager.go:345] "RemoveStaleState removing state" podUID="0c5b95ac-1a94-42c5-81dc-b0098e5e789c" containerName="mount-cgroup" Oct 2 19:42:54.333220 kubelet[1404]: E1002 19:42:54.333157 1404 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="0c5b95ac-1a94-42c5-81dc-b0098e5e789c" containerName="mount-cgroup" Oct 2 19:42:54.333220 kubelet[1404]: I1002 19:42:54.333169 1404 memory_manager.go:345] "RemoveStaleState removing state" podUID="0c5b95ac-1a94-42c5-81dc-b0098e5e789c" containerName="mount-cgroup" Oct 2 19:42:54.333220 kubelet[1404]: I1002 19:42:54.333174 1404 memory_manager.go:345] "RemoveStaleState removing state" podUID="0c5b95ac-1a94-42c5-81dc-b0098e5e789c" containerName="mount-cgroup" Oct 2 19:42:54.333220 kubelet[1404]: I1002 19:42:54.333180 1404 memory_manager.go:345] "RemoveStaleState removing state" podUID="0c5b95ac-1a94-42c5-81dc-b0098e5e789c" containerName="mount-cgroup" Oct 2 19:42:54.333220 kubelet[1404]: E1002 19:42:54.333190 1404 cpu_manager.go:394] "RemoveStaleState: removing container" 
podUID="0c5b95ac-1a94-42c5-81dc-b0098e5e789c" containerName="mount-cgroup" Oct 2 19:42:54.339047 systemd[1]: Created slice kubepods-burstable-pod452606a7_7588_4baa_80f3_f0679d8cb994.slice. Oct 2 19:42:54.423832 kubelet[1404]: I1002 19:42:54.423769 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-bpf-maps\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.423832 kubelet[1404]: I1002 19:42:54.423832 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-host-proc-sys-kernel\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.423993 kubelet[1404]: I1002 19:42:54.423920 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/452606a7-7588-4baa-80f3-f0679d8cb994-clustermesh-secrets\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.423993 kubelet[1404]: I1002 19:42:54.423964 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79p5c\" (UniqueName: \"kubernetes.io/projected/452606a7-7588-4baa-80f3-f0679d8cb994-kube-api-access-79p5c\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.424049 kubelet[1404]: I1002 19:42:54.424011 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-hostproc\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.424077 kubelet[1404]: I1002 19:42:54.424051 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-etc-cni-netd\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.424077 kubelet[1404]: I1002 19:42:54.424074 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-lib-modules\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.424124 kubelet[1404]: I1002 19:42:54.424091 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-xtables-lock\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.424124 kubelet[1404]: I1002 19:42:54.424124 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-host-proc-sys-net\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.424178 kubelet[1404]: I1002 19:42:54.424151 
1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/452606a7-7588-4baa-80f3-f0679d8cb994-hubble-tls\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.424178 kubelet[1404]: I1002 19:42:54.424175 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-cilium-run\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.424224 kubelet[1404]: I1002 19:42:54.424196 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-cilium-cgroup\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.424251 kubelet[1404]: I1002 19:42:54.424224 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-cni-path\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.424251 kubelet[1404]: I1002 19:42:54.424248 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/452606a7-7588-4baa-80f3-f0679d8cb994-cilium-config-path\") pod \"cilium-2w6fp\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " pod="kube-system/cilium-2w6fp" Oct 2 19:42:54.602573 kubelet[1404]: I1002 19:42:54.602522 1404 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0c5b95ac-1a94-42c5-81dc-b0098e5e789c path="/var/lib/kubelet/pods/0c5b95ac-1a94-42c5-81dc-b0098e5e789c/volumes" Oct 2 19:42:54.639067 kubelet[1404]: E1002 19:42:54.639042 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:54.651208 kubelet[1404]: E1002 19:42:54.651185 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:54.651673 env[1096]: time="2023-10-02T19:42:54.651616477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2w6fp,Uid:452606a7-7588-4baa-80f3-f0679d8cb994,Namespace:kube-system,Attempt:0,}" Oct 2 19:42:54.664913 env[1096]: time="2023-10-02T19:42:54.664837981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:42:54.664913 env[1096]: time="2023-10-02T19:42:54.664875811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:42:54.664913 env[1096]: time="2023-10-02T19:42:54.664886922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:42:54.665156 env[1096]: time="2023-10-02T19:42:54.665055959Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6 pid=2022 runtime=io.containerd.runc.v2 Oct 2 19:42:54.675777 systemd[1]: Started cri-containerd-93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6.scope. Oct 2 19:42:54.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.688371 kernel: audit: type=1400 audit(1696275774.683:654): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.688442 kernel: audit: type=1400 audit(1696275774.683:655): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.688588 kernel: audit: type=1400 audit(1696275774.683:656): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.692108 kernel: audit: type=1400 audit(1696275774.683:657): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.694179 kernel: audit: type=1400 audit(1696275774.683:658): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.694228 kernel: audit: type=1400 audit(1696275774.683:659): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.697884 kernel: audit: type=1400 audit(1696275774.683:660): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.697942 kernel: audit: type=1400 audit(1696275774.683:661): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.686000 audit: BPF prog-id=72 op=LOAD Oct 2 19:42:54.687000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.687000 audit[2032]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2022 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:54.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323131613063613935356361346336633437366262643666663661 Oct 2 19:42:54.687000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.687000 audit[2032]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2022 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:54.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323131613063613935356361346336633437366262643666663661 Oct 2 19:42:54.687000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.687000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.687000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.687000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:42:54.687000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.687000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.687000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.687000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.687000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.687000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.687000 audit: BPF prog-id=73 op=LOAD Oct 2 19:42:54.687000 audit[2032]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00020ec90 items=0 ppid=2022 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:54.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323131613063613935356361346336633437366262643666663661 Oct 2 19:42:54.688000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.688000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.688000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.688000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.688000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.688000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.688000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.688000 audit[2032]: AVC avc: denied { bpf } for pid=2032 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.688000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.688000 audit: BPF prog-id=74 op=LOAD Oct 2 19:42:54.688000 audit[2032]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00020ecd8 items=0 ppid=2022 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:54.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323131613063613935356361346336633437366262643666663661 Oct 2 19:42:54.690000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:42:54.690000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:42:54.690000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.690000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.690000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.690000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.690000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.690000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.690000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.690000 audit[2032]: AVC avc: denied { perfmon } for pid=2032 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.690000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.690000 audit[2032]: AVC avc: denied { bpf } for pid=2032 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:54.690000 audit: BPF prog-id=75 op=LOAD Oct 2 19:42:54.690000 audit[2032]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00020f0e8 items=0 ppid=2022 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:54.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933323131613063613935356361346336633437366262643666663661 Oct 2 19:42:54.709080 env[1096]: time="2023-10-02T19:42:54.709009214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2w6fp,Uid:452606a7-7588-4baa-80f3-f0679d8cb994,Namespace:kube-system,Attempt:0,} returns sandbox id \"93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6\"" Oct 2 19:42:54.709926 kubelet[1404]: E1002 19:42:54.709906 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:54.711610 env[1096]: time="2023-10-02T19:42:54.711567870Z" level=info msg="CreateContainer within sandbox \"93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:42:54.722406 env[1096]: time="2023-10-02T19:42:54.722349760Z" level=info msg="CreateContainer within sandbox \"93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc\"" Oct 2 19:42:54.722846 env[1096]: time="2023-10-02T19:42:54.722805944Z" level=info msg="StartContainer for \"babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc\"" Oct 2 19:42:54.736409 systemd[1]: Started cri-containerd-babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc.scope. Oct 2 19:42:54.746976 kubelet[1404]: W1002 19:42:54.744729 1404 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c5b95ac_1a94_42c5_81dc_b0098e5e789c.slice/cri-containerd-2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8.scope WatchSource:0}: container "2627514c1f9fb43f2f464f47901fdea929cccd6558a890ec2e6787d0405338d8" in namespace "k8s.io": not found Oct 2 19:42:54.745359 systemd[1]: cri-containerd-babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc.scope: Deactivated successfully. Oct 2 19:42:54.745639 systemd[1]: Stopped cri-containerd-babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc.scope. 
Oct 2 19:42:54.758984 env[1096]: time="2023-10-02T19:42:54.758875481Z" level=info msg="shim disconnected" id=babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc Oct 2 19:42:54.758984 env[1096]: time="2023-10-02T19:42:54.758939240Z" level=warning msg="cleaning up after shim disconnected" id=babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc namespace=k8s.io Oct 2 19:42:54.758984 env[1096]: time="2023-10-02T19:42:54.758947496Z" level=info msg="cleaning up dead shim" Oct 2 19:42:54.766623 env[1096]: time="2023-10-02T19:42:54.766565086Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2077 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:42:54Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:42:54.766844 env[1096]: time="2023-10-02T19:42:54.766789787Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Oct 2 19:42:54.767061 env[1096]: time="2023-10-02T19:42:54.766998838Z" level=error msg="Failed to pipe stderr of container \"babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc\"" error="reading from a closed fifo" Oct 2 19:42:54.767209 env[1096]: time="2023-10-02T19:42:54.767126327Z" level=error msg="Failed to pipe stdout of container \"babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc\"" error="reading from a closed fifo" Oct 2 19:42:54.769189 env[1096]: time="2023-10-02T19:42:54.769149371Z" level=error msg="StartContainer for \"babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:42:54.769324 kubelet[1404]: E1002 19:42:54.769298 1404 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc" Oct 2 19:42:54.769438 kubelet[1404]: E1002 19:42:54.769404 1404 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:42:54.769438 kubelet[1404]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:42:54.769438 kubelet[1404]: rm /hostbin/cilium-mount Oct 2 19:42:54.769438 kubelet[1404]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-79p5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-2w6fp_kube-system(452606a7-7588-4baa-80f3-f0679d8cb994): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:42:54.769636 kubelet[1404]: E1002 19:42:54.769445 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-2w6fp" podUID=452606a7-7588-4baa-80f3-f0679d8cb994 Oct 2 19:42:55.192881 kubelet[1404]: E1002 19:42:55.192815 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:55.314631 env[1096]: time="2023-10-02T19:42:55.314565035Z" level=info msg="StopPodSandbox for \"93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6\"" Oct 2 19:42:55.314631 env[1096]: time="2023-10-02T19:42:55.314619537Z" level=info msg="Container to stop \"babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:42:55.315919 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6-shm.mount: Deactivated successfully. Oct 2 19:42:55.321337 systemd[1]: cri-containerd-93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6.scope: Deactivated successfully. Oct 2 19:42:55.320000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:42:55.325000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:42:55.337072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6-rootfs.mount: Deactivated successfully. 
Oct 2 19:42:55.340846 env[1096]: time="2023-10-02T19:42:55.340795314Z" level=info msg="shim disconnected" id=93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6 Oct 2 19:42:55.340846 env[1096]: time="2023-10-02T19:42:55.340841460Z" level=warning msg="cleaning up after shim disconnected" id=93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6 namespace=k8s.io Oct 2 19:42:55.340846 env[1096]: time="2023-10-02T19:42:55.340850647Z" level=info msg="cleaning up dead shim" Oct 2 19:42:55.346816 env[1096]: time="2023-10-02T19:42:55.346789978Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2109 runtime=io.containerd.runc.v2\n" Oct 2 19:42:55.347098 env[1096]: time="2023-10-02T19:42:55.347068328Z" level=info msg="TearDown network for sandbox \"93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6\" successfully" Oct 2 19:42:55.347098 env[1096]: time="2023-10-02T19:42:55.347090610Z" level=info msg="StopPodSandbox for \"93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6\" returns successfully" Oct 2 19:42:55.431152 kubelet[1404]: I1002 19:42:55.431087 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-bpf-maps\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431152 kubelet[1404]: I1002 19:42:55.431160 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/452606a7-7588-4baa-80f3-f0679d8cb994-cilium-config-path\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431412 kubelet[1404]: I1002 19:42:55.431178 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-cilium-cgroup\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431412 kubelet[1404]: I1002 19:42:55.431194 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-hostproc\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431412 kubelet[1404]: I1002 19:42:55.431199 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:55.431412 kubelet[1404]: I1002 19:42:55.431218 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79p5c\" (UniqueName: \"kubernetes.io/projected/452606a7-7588-4baa-80f3-f0679d8cb994-kube-api-access-79p5c\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431412 kubelet[1404]: I1002 19:42:55.431269 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-lib-modules\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431412 kubelet[1404]: I1002 19:42:55.431288 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-xtables-lock\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431626 kubelet[1404]: I1002 19:42:55.431309 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/452606a7-7588-4baa-80f3-f0679d8cb994-hubble-tls\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431626 kubelet[1404]: I1002 19:42:55.431324 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-cni-path\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431626 kubelet[1404]: I1002 19:42:55.431344 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-host-proc-sys-kernel\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431626 kubelet[1404]: I1002 19:42:55.431366 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/452606a7-7588-4baa-80f3-f0679d8cb994-clustermesh-secrets\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431626 kubelet[1404]: I1002 19:42:55.431380 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-etc-cni-netd\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431626 kubelet[1404]: I1002 19:42:55.431395 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-host-proc-sys-net\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431782 kubelet[1404]: I1002 19:42:55.431410 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-cilium-run\") pod \"452606a7-7588-4baa-80f3-f0679d8cb994\" (UID: \"452606a7-7588-4baa-80f3-f0679d8cb994\") " Oct 2 19:42:55.431782 kubelet[1404]: I1002 19:42:55.431432 1404 
reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-bpf-maps\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:55.431782 kubelet[1404]: I1002 19:42:55.431447 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:55.431782 kubelet[1404]: I1002 19:42:55.431460 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:55.431782 kubelet[1404]: I1002 19:42:55.431475 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:55.431912 kubelet[1404]: I1002 19:42:55.431556 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:55.431912 kubelet[1404]: W1002 19:42:55.431768 1404 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/452606a7-7588-4baa-80f3-f0679d8cb994/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:42:55.433443 kubelet[1404]: I1002 19:42:55.431993 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-hostproc" (OuterVolumeSpecName: "hostproc") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:55.433443 kubelet[1404]: I1002 19:42:55.433266 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/452606a7-7588-4baa-80f3-f0679d8cb994-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:42:55.433443 kubelet[1404]: I1002 19:42:55.433293 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:55.433443 kubelet[1404]: I1002 19:42:55.433310 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-cni-path" (OuterVolumeSpecName: "cni-path") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:55.433443 kubelet[1404]: I1002 19:42:55.433327 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:55.433620 kubelet[1404]: I1002 19:42:55.433341 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:55.434037 kubelet[1404]: I1002 19:42:55.434017 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452606a7-7588-4baa-80f3-f0679d8cb994-kube-api-access-79p5c" (OuterVolumeSpecName: "kube-api-access-79p5c") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "kube-api-access-79p5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:42:55.434129 kubelet[1404]: I1002 19:42:55.434075 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452606a7-7588-4baa-80f3-f0679d8cb994-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:42:55.434887 kubelet[1404]: I1002 19:42:55.434851 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/452606a7-7588-4baa-80f3-f0679d8cb994-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "452606a7-7588-4baa-80f3-f0679d8cb994" (UID: "452606a7-7588-4baa-80f3-f0679d8cb994"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:42:55.435417 systemd[1]: var-lib-kubelet-pods-452606a7\x2d7588\x2d4baa\x2d80f3\x2df0679d8cb994-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d79p5c.mount: Deactivated successfully. Oct 2 19:42:55.435555 systemd[1]: var-lib-kubelet-pods-452606a7\x2d7588\x2d4baa\x2d80f3\x2df0679d8cb994-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:42:55.436939 systemd[1]: var-lib-kubelet-pods-452606a7\x2d7588\x2d4baa\x2d80f3\x2df0679d8cb994-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 19:42:55.532127 kubelet[1404]: I1002 19:42:55.532030 1404 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-cni-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:55.532127 kubelet[1404]: I1002 19:42:55.532060 1404 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-host-proc-sys-net\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:55.532127 kubelet[1404]: I1002 19:42:55.532070 1404 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-cilium-run\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:55.532127 kubelet[1404]: I1002 19:42:55.532080 1404 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-host-proc-sys-kernel\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:55.532127 kubelet[1404]: I1002 19:42:55.532089 1404 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/452606a7-7588-4baa-80f3-f0679d8cb994-clustermesh-secrets\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:55.532127 kubelet[1404]: I1002 19:42:55.532097 1404 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-etc-cni-netd\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:55.532127 kubelet[1404]: I1002 19:42:55.532106 1404 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-cilium-cgroup\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:55.532127 kubelet[1404]: I1002 19:42:55.532113 1404 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-hostproc\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:55.532383 kubelet[1404]: I1002 19:42:55.532122 1404 reconciler.go:399] "Volume detached for volume \"kube-api-access-79p5c\" (UniqueName: \"kubernetes.io/projected/452606a7-7588-4baa-80f3-f0679d8cb994-kube-api-access-79p5c\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:55.532383 kubelet[1404]: I1002 19:42:55.532131 1404 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/452606a7-7588-4baa-80f3-f0679d8cb994-cilium-config-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:55.532383 kubelet[1404]: I1002 19:42:55.532139 1404 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-lib-modules\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:55.532383 kubelet[1404]: I1002 19:42:55.532148 1404 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/452606a7-7588-4baa-80f3-f0679d8cb994-xtables-lock\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:55.532383 kubelet[1404]: I1002 19:42:55.532156 1404 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/452606a7-7588-4baa-80f3-f0679d8cb994-hubble-tls\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:56.193378 kubelet[1404]: E1002 19:42:56.193316 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:56.319897 kubelet[1404]: I1002 19:42:56.319860 1404 scope.go:115] "RemoveContainer" containerID="babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc" Oct 2 19:42:56.321169 env[1096]: time="2023-10-02T19:42:56.321121748Z" level=info msg="RemoveContainer for \"babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc\"" Oct 2 19:42:56.322970 systemd[1]: Removed slice kubepods-burstable-pod452606a7_7588_4baa_80f3_f0679d8cb994.slice. Oct 2 19:42:56.323956 env[1096]: time="2023-10-02T19:42:56.323904312Z" level=info msg="RemoveContainer for \"babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc\" returns successfully" Oct 2 19:42:56.601826 kubelet[1404]: I1002 19:42:56.601699 1404 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=452606a7-7588-4baa-80f3-f0679d8cb994 path="/var/lib/kubelet/pods/452606a7-7588-4baa-80f3-f0679d8cb994/volumes" Oct 2 19:42:57.194414 kubelet[1404]: E1002 19:42:57.194336 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:57.737215 kubelet[1404]: I1002 19:42:57.737162 1404 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:42:57.737215 kubelet[1404]: E1002 19:42:57.737211 1404 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="452606a7-7588-4baa-80f3-f0679d8cb994" containerName="mount-cgroup" Oct 2 19:42:57.737215 kubelet[1404]: I1002 19:42:57.737229 1404 memory_manager.go:345] "RemoveStaleState removing state" podUID="452606a7-7588-4baa-80f3-f0679d8cb994" containerName="mount-cgroup" Oct 2 19:42:57.738885 kubelet[1404]: I1002 19:42:57.738850 1404 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:42:57.742189 systemd[1]: Created slice kubepods-burstable-pod1c271122_35d0_4734_a6c5_c140a89edb1d.slice. Oct 2 19:42:57.760691 systemd[1]: Created slice kubepods-besteffort-pod0084bc52_46e5_4587_baad_5ba806c2c570.slice. 
Oct 2 19:42:57.843632 kubelet[1404]: I1002 19:42:57.843567 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-host-proc-sys-net\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.843632 kubelet[1404]: I1002 19:42:57.843621 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-host-proc-sys-kernel\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.843862 kubelet[1404]: I1002 19:42:57.843685 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c271122-35d0-4734-a6c5-c140a89edb1d-hubble-tls\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.843862 kubelet[1404]: I1002 19:42:57.843722 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-cgroup\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.843862 kubelet[1404]: I1002 19:42:57.843743 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-cni-path\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.843862 kubelet[1404]: I1002 19:42:57.843775 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c271122-35d0-4734-a6c5-c140a89edb1d-clustermesh-secrets\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.843862 kubelet[1404]: I1002 19:42:57.843821 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-config-path\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.843980 kubelet[1404]: I1002 19:42:57.843874 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-bpf-maps\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.843980 kubelet[1404]: I1002 19:42:57.843920 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-lib-modules\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.843980 kubelet[1404]: I1002 19:42:57.843950 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxbw7\" (UniqueName: 
\"kubernetes.io/projected/1c271122-35d0-4734-a6c5-c140a89edb1d-kube-api-access-dxbw7\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.843980 kubelet[1404]: I1002 19:42:57.843971 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-hostproc\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.844073 kubelet[1404]: I1002 19:42:57.844015 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-xtables-lock\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.844073 kubelet[1404]: I1002 19:42:57.844047 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwcs4\" (UniqueName: \"kubernetes.io/projected/0084bc52-46e5-4587-baad-5ba806c2c570-kube-api-access-bwcs4\") pod \"cilium-operator-69b677f97c-kc6pw\" (UID: \"0084bc52-46e5-4587-baad-5ba806c2c570\") " pod="kube-system/cilium-operator-69b677f97c-kc6pw" Oct 2 19:42:57.844073 kubelet[1404]: I1002 19:42:57.844067 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0084bc52-46e5-4587-baad-5ba806c2c570-cilium-config-path\") pod \"cilium-operator-69b677f97c-kc6pw\" (UID: \"0084bc52-46e5-4587-baad-5ba806c2c570\") " pod="kube-system/cilium-operator-69b677f97c-kc6pw" Oct 2 19:42:57.844147 kubelet[1404]: I1002 19:42:57.844085 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-run\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.844147 kubelet[1404]: I1002 19:42:57.844102 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-etc-cni-netd\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.844147 kubelet[1404]: I1002 19:42:57.844137 1404 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-ipsec-secrets\") pod \"cilium-rmgqt\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " pod="kube-system/cilium-rmgqt" Oct 2 19:42:57.868155 kubelet[1404]: W1002 19:42:57.868124 1404 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod452606a7_7588_4baa_80f3_f0679d8cb994.slice/cri-containerd-babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc.scope WatchSource:0}: container "babca577c1c954457e1f9601a33344673b07ca6f00dc4fb3fd81381001d89dbc" in namespace "k8s.io": not found Oct 2 19:42:57.870419 kubelet[1404]: E1002 19:42:57.870378 1404 cadvisor_stats_provider.go:457] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod452606a7_7588_4baa_80f3_f0679d8cb994.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod452606a7_7588_4baa_80f3_f0679d8cb994.slice/cri-containerd-93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6.scope\": RecentStats: unable to find data in memory cache]" Oct 2 19:42:58.060158 kubelet[1404]: E1002 19:42:58.059987 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:58.060811 env[1096]: time="2023-10-02T19:42:58.060529678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rmgqt,Uid:1c271122-35d0-4734-a6c5-c140a89edb1d,Namespace:kube-system,Attempt:0,}" Oct 2 19:42:58.062760 kubelet[1404]: E1002 19:42:58.062735 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:58.063341 env[1096]: time="2023-10-02T19:42:58.063284801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-kc6pw,Uid:0084bc52-46e5-4587-baad-5ba806c2c570,Namespace:kube-system,Attempt:0,}" Oct 2 19:42:58.075188 env[1096]: time="2023-10-02T19:42:58.075105100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:42:58.075188 env[1096]: time="2023-10-02T19:42:58.075142150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:42:58.075188 env[1096]: time="2023-10-02T19:42:58.075152269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:42:58.075463 env[1096]: time="2023-10-02T19:42:58.075281230Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462 pid=2135 runtime=io.containerd.runc.v2 Oct 2 19:42:58.081227 env[1096]: time="2023-10-02T19:42:58.081138125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:42:58.081351 env[1096]: time="2023-10-02T19:42:58.081218986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:42:58.081351 env[1096]: time="2023-10-02T19:42:58.081254142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:42:58.081570 env[1096]: time="2023-10-02T19:42:58.081506855Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ead56da25dbc1137a2aa6ec9d8354174dd6db49fd1cce6ead0d54fdf8fdae532 pid=2155 runtime=io.containerd.runc.v2 Oct 2 19:42:58.087447 systemd[1]: Started cri-containerd-d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462.scope. Oct 2 19:42:58.093106 systemd[1]: Started cri-containerd-ead56da25dbc1137a2aa6ec9d8354174dd6db49fd1cce6ead0d54fdf8fdae532.scope. 
Oct 2 19:42:58.095000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit: BPF prog-id=76 op=LOAD Oct 2 19:42:58.095000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[2150]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000117c48 a2=10 a3=1c items=0 ppid=2135 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:58.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433616539623834326263653564373561653163353365633532363432 Oct 2 19:42:58.095000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[2150]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001176b0 a2=3c a3=c items=0 ppid=2135 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:58.095000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433616539623834326263653564373561653163353365633532363432 Oct 2 19:42:58.095000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.095000 audit: BPF prog-id=77 op=LOAD Oct 2 19:42:58.095000 audit[2150]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001179d8 a2=78 a3=c000234d20 items=0 ppid=2135 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:58.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433616539623834326263653564373561653163353365633532363432 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: 
denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit: BPF prog-id=78 op=LOAD Oct 2 19:42:58.096000 audit[2150]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000117770 a2=78 a3=c000234d68 items=0 ppid=2135 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:58.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433616539623834326263653564373561653163353365633532363432 Oct 2 19:42:58.096000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:42:58.096000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: 
denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { perfmon } for pid=2150 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit[2150]: AVC avc: denied { bpf } for pid=2150 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.096000 audit: BPF prog-id=79 op=LOAD Oct 2 19:42:58.096000 audit[2150]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000117c30 a2=78 a3=c000235178 items=0 ppid=2135 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:58.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433616539623834326263653564373561653163353365633532363432 Oct 2 19:42:58.105000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.105000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.105000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.105000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.105000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.105000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.105000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.105000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.105000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit: BPF prog-id=80 op=LOAD Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=2155 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:58.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561643536646132356462633131333761326161366563396438333534 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001476b0 a2=3c a3=c items=0 ppid=2155 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:58.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561643536646132356462633131333761326161366563396438333534 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for 
pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit: BPF prog-id=81 op=LOAD Oct 2 19:42:58.106000 audit[2174]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c0003c6560 items=0 ppid=2155 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:58.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561643536646132356462633131333761326161366563396438333534 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit: BPF prog-id=82 op=LOAD Oct 2 19:42:58.106000 audit[2174]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000147770 a2=78 a3=c0003c65a8 items=0 ppid=2155 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:58.106000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561643536646132356462633131333761326161366563396438333534 Oct 2 19:42:58.106000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:42:58.106000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:58.106000 audit: BPF prog-id=83 op=LOAD Oct 2 19:42:58.106000 audit[2174]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c0003c69b8 items=0 ppid=2155 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:58.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561643536646132356462633131333761326161366563396438333534 Oct 2 19:42:58.111590 env[1096]: time="2023-10-02T19:42:58.111528039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rmgqt,Uid:1c271122-35d0-4734-a6c5-c140a89edb1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462\"" Oct 2 19:42:58.112354 kubelet[1404]: E1002 19:42:58.112178 1404 
dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:58.113826 env[1096]: time="2023-10-02T19:42:58.113799116Z" level=info msg="CreateContainer within sandbox \"d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:42:58.128105 env[1096]: time="2023-10-02T19:42:58.128064910Z" level=info msg="CreateContainer within sandbox \"d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18\"" Oct 2 19:42:58.128551 env[1096]: time="2023-10-02T19:42:58.128501155Z" level=info msg="StartContainer for \"7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18\"" Oct 2 19:42:58.134835 env[1096]: time="2023-10-02T19:42:58.134776374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-kc6pw,Uid:0084bc52-46e5-4587-baad-5ba806c2c570,Namespace:kube-system,Attempt:0,} returns sandbox id \"ead56da25dbc1137a2aa6ec9d8354174dd6db49fd1cce6ead0d54fdf8fdae532\"" Oct 2 19:42:58.135576 kubelet[1404]: E1002 19:42:58.135406 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:58.136438 env[1096]: time="2023-10-02T19:42:58.136409509Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 19:42:58.144583 systemd[1]: Started cri-containerd-7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18.scope. Oct 2 19:42:58.155179 systemd[1]: cri-containerd-7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18.scope: Deactivated successfully. Oct 2 19:42:58.155450 systemd[1]: Stopped cri-containerd-7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18.scope. 
Oct 2 19:42:58.169380 env[1096]: time="2023-10-02T19:42:58.169313514Z" level=info msg="shim disconnected" id=7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18 Oct 2 19:42:58.169380 env[1096]: time="2023-10-02T19:42:58.169373767Z" level=warning msg="cleaning up after shim disconnected" id=7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18 namespace=k8s.io Oct 2 19:42:58.169380 env[1096]: time="2023-10-02T19:42:58.169384697Z" level=info msg="cleaning up dead shim" Oct 2 19:42:58.176478 env[1096]: time="2023-10-02T19:42:58.176426668Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2235 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:42:58Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:42:58.176742 env[1096]: time="2023-10-02T19:42:58.176686805Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Oct 2 19:42:58.180689 env[1096]: time="2023-10-02T19:42:58.180631544Z" level=error msg="Failed to pipe stderr of container \"7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18\"" error="reading from a closed fifo" Oct 2 19:42:58.180689 env[1096]: time="2023-10-02T19:42:58.180627066Z" level=error msg="Failed to pipe stdout of container \"7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18\"" error="reading from a closed fifo" Oct 2 19:42:58.182806 env[1096]: time="2023-10-02T19:42:58.182738013Z" level=error msg="StartContainer for \"7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:42:58.182991 kubelet[1404]: E1002 19:42:58.182966 1404 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18" Oct 2 19:42:58.183103 kubelet[1404]: E1002 19:42:58.183090 1404 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:42:58.183103 kubelet[1404]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:42:58.183103 kubelet[1404]: rm /hostbin/cilium-mount Oct 2 19:42:58.183103 kubelet[1404]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dxbw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rmgqt_kube-system(1c271122-35d0-4734-a6c5-c140a89edb1d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:42:58.183261 kubelet[1404]: E1002 19:42:58.183131 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rmgqt" podUID=1c271122-35d0-4734-a6c5-c140a89edb1d Oct 2 19:42:58.195314 kubelet[1404]: E1002 19:42:58.195281 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:58.326606 kubelet[1404]: E1002 19:42:58.326206 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:58.329130 env[1096]: time="2023-10-02T19:42:58.329055265Z" level=info msg="CreateContainer within sandbox \"d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:42:58.346073 env[1096]: time="2023-10-02T19:42:58.346007904Z" level=info msg="CreateContainer within sandbox \"d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5\"" Oct 2 19:42:58.346811 env[1096]: time="2023-10-02T19:42:58.346762185Z" level=info msg="StartContainer for \"070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5\"" Oct 2 19:42:58.359930 systemd[1]: Started cri-containerd-070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5.scope. 
Oct 2 19:42:58.367790 systemd[1]: cri-containerd-070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5.scope: Deactivated successfully. Oct 2 19:42:58.368056 systemd[1]: Stopped cri-containerd-070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5.scope. Oct 2 19:42:58.375792 env[1096]: time="2023-10-02T19:42:58.375729357Z" level=info msg="shim disconnected" id=070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5 Oct 2 19:42:58.375915 env[1096]: time="2023-10-02T19:42:58.375800109Z" level=warning msg="cleaning up after shim disconnected" id=070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5 namespace=k8s.io Oct 2 19:42:58.375915 env[1096]: time="2023-10-02T19:42:58.375810568Z" level=info msg="cleaning up dead shim" Oct 2 19:42:58.381750 env[1096]: time="2023-10-02T19:42:58.381712778Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2271 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:42:58Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:42:58.382015 env[1096]: time="2023-10-02T19:42:58.381961093Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Oct 2 19:42:58.382210 env[1096]: time="2023-10-02T19:42:58.382163000Z" level=error msg="Failed to pipe stderr of container \"070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5\"" error="reading from a closed fifo" Oct 2 19:42:58.382304 env[1096]: time="2023-10-02T19:42:58.382178970Z" level=error msg="Failed to pipe stdout of container \"070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5\"" error="reading from a closed fifo" Oct 2 19:42:58.384586 env[1096]: time="2023-10-02T19:42:58.384525971Z" level=error msg="StartContainer for \"070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:42:58.384803 kubelet[1404]: E1002 19:42:58.384768 1404 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5" Oct 2 19:42:58.384906 kubelet[1404]: E1002 19:42:58.384893 1404 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:42:58.384906 kubelet[1404]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:42:58.384906 kubelet[1404]: rm /hostbin/cilium-mount Oct 2 19:42:58.384906 kubelet[1404]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dxbw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rmgqt_kube-system(1c271122-35d0-4734-a6c5-c140a89edb1d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:42:58.385039 kubelet[1404]: E1002 19:42:58.384939 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rmgqt" podUID=1c271122-35d0-4734-a6c5-c140a89edb1d Oct 2 19:42:59.195625 kubelet[1404]: E1002 19:42:59.195576 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:59.277316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851275256.mount: Deactivated successfully. 
Oct 2 19:42:59.329842 kubelet[1404]: I1002 19:42:59.329810 1404 scope.go:115] "RemoveContainer" containerID="7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18" Oct 2 19:42:59.330208 kubelet[1404]: I1002 19:42:59.330172 1404 scope.go:115] "RemoveContainer" containerID="7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18" Oct 2 19:42:59.330851 env[1096]: time="2023-10-02T19:42:59.330817863Z" level=info msg="RemoveContainer for \"7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18\"" Oct 2 19:42:59.331087 env[1096]: time="2023-10-02T19:42:59.331048234Z" level=info msg="RemoveContainer for \"7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18\"" Oct 2 19:42:59.331194 env[1096]: time="2023-10-02T19:42:59.331164611Z" level=error msg="RemoveContainer for \"7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18\" failed" error="failed to set removing state for container \"7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18\": container is already in removing state" Oct 2 19:42:59.331314 kubelet[1404]: E1002 19:42:59.331295 1404 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18\": container is already in removing state" containerID="7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18" Oct 2 19:42:59.331367 kubelet[1404]: E1002 19:42:59.331330 1404 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18": container is already in removing state; Skipping pod "cilium-rmgqt_kube-system(1c271122-35d0-4734-a6c5-c140a89edb1d)" Oct 2 19:42:59.331400 kubelet[1404]: E1002 19:42:59.331391 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:59.331627 kubelet[1404]: E1002 19:42:59.331610 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-rmgqt_kube-system(1c271122-35d0-4734-a6c5-c140a89edb1d)\"" pod="kube-system/cilium-rmgqt" podUID=1c271122-35d0-4734-a6c5-c140a89edb1d Oct 2 19:42:59.353825 env[1096]: time="2023-10-02T19:42:59.353779810Z" level=info msg="RemoveContainer for \"7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18\" returns successfully" Oct 2 19:42:59.640319 kubelet[1404]: E1002 19:42:59.640210 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:59.939173 env[1096]: time="2023-10-02T19:42:59.939099218Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:42:59.940875 env[1096]: time="2023-10-02T19:42:59.940809347Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:42:59.942113 env[1096]: 
time="2023-10-02T19:42:59.942087918Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:42:59.942672 env[1096]: time="2023-10-02T19:42:59.942640631Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e\"" Oct 2 19:42:59.944080 env[1096]: time="2023-10-02T19:42:59.944035852Z" level=info msg="CreateContainer within sandbox \"ead56da25dbc1137a2aa6ec9d8354174dd6db49fd1cce6ead0d54fdf8fdae532\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:42:59.956827 env[1096]: time="2023-10-02T19:42:59.956765310Z" level=info msg="CreateContainer within sandbox \"ead56da25dbc1137a2aa6ec9d8354174dd6db49fd1cce6ead0d54fdf8fdae532\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\"" Oct 2 19:42:59.957242 env[1096]: time="2023-10-02T19:42:59.957212868Z" level=info msg="StartContainer for \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\"" Oct 2 19:42:59.974846 systemd[1]: Started cri-containerd-9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba.scope. Oct 2 19:42:59.984000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.989574 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 19:42:59.989707 kernel: audit: type=1400 audit(1696275779.984:710): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.989744 kernel: audit: type=1400 audit(1696275779.984:711): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.984000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.984000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.992378 kernel: audit: type=1400 audit(1696275779.984:712): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.992419 kernel: audit: type=1400 audit(1696275779.984:713): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.984000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.984000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:42:59.996097 kernel: audit: type=1400 audit(1696275779.984:714): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.996137 kernel: audit: type=1400 audit(1696275779.984:715): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.984000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.984000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.999797 kernel: audit: type=1400 audit(1696275779.984:716): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.999886 kernel: audit: type=1400 audit(1696275779.984:717): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.984000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.984000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:00.003531 kernel: audit: type=1400 audit(1696275779.984:718): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:00.003592 kernel: audit: type=1400 audit(1696275779.984:719): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.984000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.984000 audit: BPF prog-id=84 op=LOAD Oct 2 19:42:59.985000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.985000 audit[2290]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=2155 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:59.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965326565663564356235303737353437336465613335393562626239 Oct 2 19:42:59.985000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.985000 audit[2290]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=8 items=0 ppid=2155 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:59.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965326565663564356235303737353437336465613335393562626239 Oct 2 19:42:59.985000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.985000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.985000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.985000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.985000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.985000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.985000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.985000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.985000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.985000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.985000 audit: BPF prog-id=85 op=LOAD Oct 2 19:42:59.985000 audit[2290]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c000024be0 items=0 ppid=2155 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:59.985000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965326565663564356235303737353437336465613335393562626239 Oct 2 19:42:59.989000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.989000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.989000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.989000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.989000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.989000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.989000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.989000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.989000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.989000 audit: BPF prog-id=86 op=LOAD Oct 2 19:42:59.989000 audit[2290]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c000024c28 items=0 ppid=2155 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:59.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965326565663564356235303737353437336465613335393562626239 Oct 2 19:42:59.990000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:42:59.990000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:42:59.990000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.990000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.990000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.990000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.990000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.990000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.990000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.990000 audit[2290]: AVC avc: denied { perfmon } for pid=2290 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.990000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.990000 audit[2290]: AVC avc: denied { bpf } for pid=2290 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:42:59.990000 audit: BPF prog-id=87 op=LOAD Oct 2 19:42:59.990000 audit[2290]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c000025038 items=0 ppid=2155 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:42:59.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965326565663564356235303737353437336465613335393562626239 Oct 2 19:43:00.016432 env[1096]: time="2023-10-02T19:43:00.016367439Z" level=info msg="StartContainer for \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\" returns successfully" Oct 2 19:43:00.032000 audit[2301]: AVC avc: denied { map_create } for pid=2301 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c202,c974 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c202,c974 tclass=bpf permissive=0 Oct 2 19:43:00.032000 audit[2301]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0007077d0 a2=48 a3=0 items=0 ppid=2155 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c202,c974 key=(null) Oct 2 19:43:00.032000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:43:00.196078 kubelet[1404]: E1002 19:43:00.195926 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:00.333322 kubelet[1404]: E1002 19:43:00.333292 1404 dns.go:157] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:00.333634 kubelet[1404]: E1002 19:43:00.333525 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-rmgqt_kube-system(1c271122-35d0-4734-a6c5-c140a89edb1d)\"" pod="kube-system/cilium-rmgqt" podUID=1c271122-35d0-4734-a6c5-c140a89edb1d Oct 2 19:43:00.334359 kubelet[1404]: E1002 19:43:00.334314 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:00.951901 systemd[1]: run-containerd-runc-k8s.io-9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba-runc.Gq9AGS.mount: Deactivated successfully. Oct 2 19:43:01.196169 kubelet[1404]: E1002 19:43:01.196098 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:01.274378 kubelet[1404]: W1002 19:43:01.274197 1404 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c271122_35d0_4734_a6c5_c140a89edb1d.slice/cri-containerd-7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18.scope WatchSource:0}: container "7e2fc073f192a89fcf99bb6a832e53a70f7182cea72940a134f8287bf2173f18" in namespace "k8s.io": not found Oct 2 19:43:01.335611 kubelet[1404]: E1002 19:43:01.335559 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:02.196935 kubelet[1404]: E1002 19:43:02.196867 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:03.197107 kubelet[1404]: E1002 19:43:03.197015 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:04.198120 kubelet[1404]: E1002 19:43:04.198037 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:04.381676 kubelet[1404]: W1002 19:43:04.381623 1404 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c271122_35d0_4734_a6c5_c140a89edb1d.slice/cri-containerd-070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5.scope WatchSource:0}: task 070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5 not found: not found Oct 2 19:43:04.641211 kubelet[1404]: E1002 19:43:04.641101 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:05.199014 kubelet[1404]: E1002 19:43:05.198942 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:06.200008 kubelet[1404]: E1002 19:43:06.199953 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:07.200504 kubelet[1404]: E1002 19:43:07.200435 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:43:08.201600 kubelet[1404]: E1002 19:43:08.201556 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:09.202256 kubelet[1404]: E1002 19:43:09.202182 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:09.642409 kubelet[1404]: E1002 19:43:09.642280 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:10.203332 kubelet[1404]: E1002 19:43:10.203272 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:11.204017 kubelet[1404]: E1002 19:43:11.203959 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:12.204985 kubelet[1404]: E1002 19:43:12.204910 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:13.209797 kubelet[1404]: E1002 19:43:13.209661 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:14.031278 kubelet[1404]: E1002 19:43:14.031206 1404 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:14.210722 kubelet[1404]: E1002 19:43:14.210647 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:14.600454 kubelet[1404]: E1002 19:43:14.600409 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:14.602359 env[1096]: time="2023-10-02T19:43:14.602318789Z" level=info msg="CreateContainer within sandbox \"d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:43:14.618627 env[1096]: time="2023-10-02T19:43:14.618497163Z" level=info msg="CreateContainer within sandbox \"d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461\"" Oct 2 19:43:14.619110 env[1096]: time="2023-10-02T19:43:14.619080347Z" level=info msg="StartContainer for \"0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461\"" Oct 2 19:43:14.633350 systemd[1]: Started cri-containerd-0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461.scope. Oct 2 19:43:14.641946 systemd[1]: cri-containerd-0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461.scope: Deactivated successfully. Oct 2 19:43:14.642226 systemd[1]: Stopped cri-containerd-0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461.scope. 
Oct 2 19:43:14.643082 kubelet[1404]: E1002 19:43:14.643054 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:14.860138 env[1096]: time="2023-10-02T19:43:14.859983964Z" level=info msg="shim disconnected" id=0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461 Oct 2 19:43:14.860138 env[1096]: time="2023-10-02T19:43:14.860045333Z" level=warning msg="cleaning up after shim disconnected" id=0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461 namespace=k8s.io Oct 2 19:43:14.860138 env[1096]: time="2023-10-02T19:43:14.860054261Z" level=info msg="cleaning up dead shim" Oct 2 19:43:14.866400 env[1096]: time="2023-10-02T19:43:14.866343341Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2346 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:14Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:14.866699 env[1096]: time="2023-10-02T19:43:14.866635360Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:43:14.866889 env[1096]: time="2023-10-02T19:43:14.866837012Z" level=error msg="Failed to pipe stderr of container \"0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461\"" error="reading from a closed fifo" Oct 2 19:43:14.868652 env[1096]: time="2023-10-02T19:43:14.868596113Z" level=error msg="Failed to pipe stdout of container \"0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461\"" error="reading from a closed fifo" Oct 2 19:43:14.871072 env[1096]: time="2023-10-02T19:43:14.871024225Z" level=error msg="StartContainer for \"0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:14.871275 kubelet[1404]: E1002 19:43:14.871245 1404 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461" Oct 2 19:43:14.871377 kubelet[1404]: E1002 19:43:14.871368 1404 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:14.871377 kubelet[1404]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:14.871377 kubelet[1404]: rm /hostbin/cilium-mount Oct 2 19:43:14.871377 kubelet[1404]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dxbw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rmgqt_kube-system(1c271122-35d0-4734-a6c5-c140a89edb1d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:14.871516 kubelet[1404]: E1002 19:43:14.871406 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rmgqt" podUID=1c271122-35d0-4734-a6c5-c140a89edb1d Oct 2 19:43:15.210924 kubelet[1404]: E1002 19:43:15.210869 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:15.373257 kubelet[1404]: I1002 19:43:15.373222 1404 scope.go:115] "RemoveContainer" containerID="070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5" Oct 2 19:43:15.373579 kubelet[1404]: I1002 19:43:15.373564 1404 scope.go:115] "RemoveContainer" containerID="070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5" Oct 2 19:43:15.374722 env[1096]: time="2023-10-02T19:43:15.374678970Z" level=info msg="RemoveContainer for \"070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5\"" Oct 2 19:43:15.374855 env[1096]: time="2023-10-02T19:43:15.374811808Z" level=info msg="RemoveContainer for \"070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5\"" Oct 2 19:43:15.375053 env[1096]: time="2023-10-02T19:43:15.374991838Z" level=error msg="RemoveContainer for \"070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5\" failed" error="failed to set removing state for container \"070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5\": container is already in removing state" Oct 2 19:43:15.375246 kubelet[1404]: E1002 19:43:15.375197 1404 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container 
\"070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5\": container is already in removing state" containerID="070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5" Oct 2 19:43:15.375246 kubelet[1404]: I1002 19:43:15.375246 1404 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5} err="rpc error: code = Unknown desc = failed to set removing state for container \"070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5\": container is already in removing state" Oct 2 19:43:15.381061 env[1096]: time="2023-10-02T19:43:15.381033091Z" level=info msg="RemoveContainer for \"070a6953091589acecf1157813b1e4371b8964411689ffd0f30b4d773a7303d5\" returns successfully" Oct 2 19:43:15.381247 kubelet[1404]: E1002 19:43:15.381228 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:15.381431 kubelet[1404]: E1002 19:43:15.381418 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-rmgqt_kube-system(1c271122-35d0-4734-a6c5-c140a89edb1d)\"" pod="kube-system/cilium-rmgqt" podUID=1c271122-35d0-4734-a6c5-c140a89edb1d Oct 2 19:43:15.614133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461-rootfs.mount: Deactivated successfully. Oct 2 19:43:16.211558 kubelet[1404]: E1002 19:43:16.211482 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:17.212305 kubelet[1404]: E1002 19:43:17.212246 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:17.965714 kubelet[1404]: W1002 19:43:17.965660 1404 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c271122_35d0_4734_a6c5_c140a89edb1d.slice/cri-containerd-0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461.scope WatchSource:0}: task 0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461 not found: not found Oct 2 19:43:18.212741 kubelet[1404]: E1002 19:43:18.212689 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:19.213039 kubelet[1404]: E1002 19:43:19.212988 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:19.643972 kubelet[1404]: E1002 19:43:19.643848 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:20.213508 kubelet[1404]: E1002 19:43:20.213430 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:21.214064 kubelet[1404]: E1002 19:43:21.213990 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:22.214619 kubelet[1404]: E1002 19:43:22.214532 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:43:23.215231 kubelet[1404]: E1002 19:43:23.215170 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:24.217720 kubelet[1404]: E1002 19:43:24.217405 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:24.645317 kubelet[1404]: E1002 19:43:24.645207 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:25.218258 kubelet[1404]: E1002 19:43:25.218190 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:26.218627 kubelet[1404]: E1002 19:43:26.218555 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:27.219415 kubelet[1404]: E1002 19:43:27.219355 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:28.220182 kubelet[1404]: E1002 19:43:28.220127 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:29.220384 kubelet[1404]: E1002 19:43:29.220307 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:29.599511 kubelet[1404]: E1002 19:43:29.599383 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:29.599688 kubelet[1404]: E1002 19:43:29.599591 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-rmgqt_kube-system(1c271122-35d0-4734-a6c5-c140a89edb1d)\"" pod="kube-system/cilium-rmgqt" podUID=1c271122-35d0-4734-a6c5-c140a89edb1d Oct 2 19:43:29.646102 kubelet[1404]: E1002 19:43:29.646073 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:30.220641 kubelet[1404]: E1002 19:43:30.220586 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:31.221433 kubelet[1404]: E1002 19:43:31.221366 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:32.221560 kubelet[1404]: E1002 19:43:32.221501 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:33.222417 kubelet[1404]: E1002 19:43:33.222354 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:34.030907 kubelet[1404]: E1002 19:43:34.030851 1404 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:34.050811 env[1096]: time="2023-10-02T19:43:34.050767449Z" level=info msg="StopPodSandbox for \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\"" Oct 2 19:43:34.051157 env[1096]: 
time="2023-10-02T19:43:34.050864977Z" level=info msg="TearDown network for sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" successfully" Oct 2 19:43:34.051157 env[1096]: time="2023-10-02T19:43:34.050914332Z" level=info msg="StopPodSandbox for \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" returns successfully" Oct 2 19:43:34.051300 env[1096]: time="2023-10-02T19:43:34.051267453Z" level=info msg="RemovePodSandbox for \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\"" Oct 2 19:43:34.051358 env[1096]: time="2023-10-02T19:43:34.051305766Z" level=info msg="Forcibly stopping sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\"" Oct 2 19:43:34.051413 env[1096]: time="2023-10-02T19:43:34.051389107Z" level=info msg="TearDown network for sandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" successfully" Oct 2 19:43:34.054000 env[1096]: time="2023-10-02T19:43:34.053957936Z" level=info msg="RemovePodSandbox \"1ab6471b77a2edc791570aa9f481af71b1c29f507a74b800fab956e8daacd3f4\" returns successfully" Oct 2 19:43:34.054295 env[1096]: time="2023-10-02T19:43:34.054266120Z" level=info msg="StopPodSandbox for \"93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6\"" Oct 2 19:43:34.054369 env[1096]: time="2023-10-02T19:43:34.054336575Z" level=info msg="TearDown network for sandbox \"93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6\" successfully" Oct 2 19:43:34.054399 env[1096]: time="2023-10-02T19:43:34.054367605Z" level=info msg="StopPodSandbox for \"93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6\" returns successfully" Oct 2 19:43:34.054645 env[1096]: time="2023-10-02T19:43:34.054615492Z" level=info msg="RemovePodSandbox for \"93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6\"" Oct 2 19:43:34.054717 env[1096]: time="2023-10-02T19:43:34.054647394Z" level=info msg="Forcibly stopping sandbox \"93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6\"" Oct 2 19:43:34.054756 env[1096]: time="2023-10-02T19:43:34.054711587Z" level=info msg="TearDown network for sandbox \"93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6\" successfully" Oct 2 19:43:34.057285 env[1096]: time="2023-10-02T19:43:34.057259707Z" level=info msg="RemovePodSandbox \"93211a0ca955ca4c6c476bbd6ff6ac472f66d56f62a5856cb5f2b101099a11b6\" returns successfully" Oct 2 19:43:34.223445 kubelet[1404]: E1002 19:43:34.223388 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:34.646633 kubelet[1404]: E1002 19:43:34.646603 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:35.224052 kubelet[1404]: E1002 19:43:35.223995 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:36.224892 kubelet[1404]: E1002 19:43:36.224827 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:37.225094 kubelet[1404]: E1002 19:43:37.225041 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:38.225941 kubelet[1404]: E1002 19:43:38.225873 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:39.227093 kubelet[1404]: E1002 19:43:39.227015 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:39.647946 kubelet[1404]: E1002 19:43:39.647906 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:40.228176 kubelet[1404]: E1002 19:43:40.228138 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:41.229095 kubelet[1404]: E1002 19:43:41.229042 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:42.229713 kubelet[1404]: E1002 19:43:42.229653 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:43.229956 kubelet[1404]: E1002 19:43:43.229889 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:43.600713 kubelet[1404]: E1002 19:43:43.600424 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:43.602278 env[1096]: time="2023-10-02T19:43:43.602240797Z" level=info msg="CreateContainer within sandbox \"d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:43:43.613404 env[1096]: time="2023-10-02T19:43:43.613343575Z" level=info msg="CreateContainer within sandbox \"d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406\"" Oct 2 19:43:43.613750 env[1096]: time="2023-10-02T19:43:43.613722773Z" level=info msg="StartContainer for \"fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406\"" Oct 2 19:43:43.630183 systemd[1]: Started cri-containerd-fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406.scope. Oct 2 19:43:43.637573 systemd[1]: cri-containerd-fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406.scope: Deactivated successfully. Oct 2 19:43:43.637908 systemd[1]: Stopped cri-containerd-fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406.scope. 
Oct 2 19:43:43.647110 env[1096]: time="2023-10-02T19:43:43.647050985Z" level=info msg="shim disconnected" id=fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406 Oct 2 19:43:43.647110 env[1096]: time="2023-10-02T19:43:43.647110009Z" level=warning msg="cleaning up after shim disconnected" id=fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406 namespace=k8s.io Oct 2 19:43:43.647297 env[1096]: time="2023-10-02T19:43:43.647121069Z" level=info msg="cleaning up dead shim" Oct 2 19:43:43.653718 env[1096]: time="2023-10-02T19:43:43.653679282Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2388 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:43.653948 env[1096]: time="2023-10-02T19:43:43.653895496Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:43:43.654341 env[1096]: time="2023-10-02T19:43:43.654265227Z" level=error msg="Failed to pipe stderr of container \"fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406\"" error="reading from a closed fifo" Oct 2 19:43:43.654632 env[1096]: time="2023-10-02T19:43:43.654585802Z" level=error msg="Failed to pipe stdout of container \"fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406\"" error="reading from a closed fifo" Oct 2 19:43:43.656828 env[1096]: time="2023-10-02T19:43:43.656783591Z" level=error msg="StartContainer for \"fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:43.656999 kubelet[1404]: E1002 19:43:43.656977 1404 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406" Oct 2 19:43:43.657091 kubelet[1404]: E1002 19:43:43.657074 1404 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:43.657091 kubelet[1404]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:43.657091 kubelet[1404]: rm /hostbin/cilium-mount Oct 2 19:43:43.657091 kubelet[1404]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dxbw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rmgqt_kube-system(1c271122-35d0-4734-a6c5-c140a89edb1d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:43.657223 kubelet[1404]: E1002 19:43:43.657108 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rmgqt" podUID=1c271122-35d0-4734-a6c5-c140a89edb1d Oct 2 19:43:44.231064 kubelet[1404]: E1002 19:43:44.231012 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:44.422501 kubelet[1404]: I1002 19:43:44.422470 1404 scope.go:115] "RemoveContainer" containerID="0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461" Oct 2 19:43:44.422745 kubelet[1404]: I1002 19:43:44.422731 1404 scope.go:115] "RemoveContainer" containerID="0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461" Oct 2 19:43:44.423518 env[1096]: time="2023-10-02T19:43:44.423490039Z" level=info msg="RemoveContainer for \"0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461\"" Oct 2 19:43:44.423688 env[1096]: time="2023-10-02T19:43:44.423666017Z" level=info msg="RemoveContainer for \"0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461\"" Oct 2 19:43:44.423770 env[1096]: time="2023-10-02T19:43:44.423727454Z" level=error msg="RemoveContainer for \"0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461\" failed" error="failed to set removing state for container \"0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461\": container is already in removing state" Oct 2 19:43:44.423874 kubelet[1404]: E1002 19:43:44.423851 1404 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container 
\"0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461\": container is already in removing state" containerID="0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461" Oct 2 19:43:44.423940 kubelet[1404]: E1002 19:43:44.423892 1404 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461": container is already in removing state; Skipping pod "cilium-rmgqt_kube-system(1c271122-35d0-4734-a6c5-c140a89edb1d)" Oct 2 19:43:44.423992 kubelet[1404]: E1002 19:43:44.423962 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:44.424201 kubelet[1404]: E1002 19:43:44.424185 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-rmgqt_kube-system(1c271122-35d0-4734-a6c5-c140a89edb1d)\"" pod="kube-system/cilium-rmgqt" podUID=1c271122-35d0-4734-a6c5-c140a89edb1d Oct 2 19:43:44.426111 env[1096]: time="2023-10-02T19:43:44.426078847Z" level=info msg="RemoveContainer for \"0fc2d32db3fbd36f84fbb65a3088038ea47b4fd9ffa9ea567afcd3172faed461\" returns successfully" Oct 2 19:43:44.610246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406-rootfs.mount: Deactivated successfully. Oct 2 19:43:44.648336 kubelet[1404]: E1002 19:43:44.648313 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:45.231507 kubelet[1404]: E1002 19:43:45.231456 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:46.232399 kubelet[1404]: E1002 19:43:46.232335 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:46.751912 kubelet[1404]: W1002 19:43:46.751866 1404 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c271122_35d0_4734_a6c5_c140a89edb1d.slice/cri-containerd-fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406.scope WatchSource:0}: task fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406 not found: not found Oct 2 19:43:47.232671 kubelet[1404]: E1002 19:43:47.232618 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:48.233747 kubelet[1404]: E1002 19:43:48.233672 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:49.234480 kubelet[1404]: E1002 19:43:49.234376 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:49.649605 kubelet[1404]: E1002 19:43:49.649551 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:50.235351 kubelet[1404]: E1002 19:43:50.235304 1404 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:50.599871 kubelet[1404]: E1002 19:43:50.599739 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:51.235703 kubelet[1404]: E1002 19:43:51.235642 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:52.236159 kubelet[1404]: E1002 19:43:52.236104 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:53.237334 kubelet[1404]: E1002 19:43:53.237259 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:54.030894 kubelet[1404]: E1002 19:43:54.030842 1404 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:54.238049 kubelet[1404]: E1002 19:43:54.237993 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:54.650730 kubelet[1404]: E1002 19:43:54.650695 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:55.239189 kubelet[1404]: E1002 19:43:55.239120 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:56.240034 kubelet[1404]: E1002 19:43:56.239976 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:56.600406 kubelet[1404]: E1002 19:43:56.600259 1404 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:56.600628 kubelet[1404]: E1002 19:43:56.600485 1404 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-rmgqt_kube-system(1c271122-35d0-4734-a6c5-c140a89edb1d)\"" pod="kube-system/cilium-rmgqt" podUID=1c271122-35d0-4734-a6c5-c140a89edb1d Oct 2 19:43:57.241119 kubelet[1404]: E1002 19:43:57.241062 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:58.241563 kubelet[1404]: E1002 19:43:58.241471 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:58.829075 env[1096]: time="2023-10-02T19:43:58.829023147Z" level=info msg="StopPodSandbox for \"d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462\"" Oct 2 19:43:58.829474 env[1096]: time="2023-10-02T19:43:58.829105324Z" level=info msg="Container to stop \"fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:43:58.830782 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462-shm.mount: Deactivated successfully. Oct 2 19:43:58.834844 systemd[1]: cri-containerd-d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462.scope: Deactivated successfully. 
Oct 2 19:43:58.834000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:43:58.835746 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 2 19:43:58.835833 kernel: audit: type=1334 audit(1696275838.834:729): prog-id=76 op=UNLOAD Oct 2 19:43:58.839825 env[1096]: time="2023-10-02T19:43:58.839780695Z" level=info msg="StopContainer for \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\" with timeout 30 (s)" Oct 2 19:43:58.840206 env[1096]: time="2023-10-02T19:43:58.840168746Z" level=info msg="Stop container \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\" with signal terminated" Oct 2 19:43:58.840000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:43:58.841564 kernel: audit: type=1334 audit(1696275838.840:730): prog-id=79 op=UNLOAD Oct 2 19:43:58.853047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462-rootfs.mount: Deactivated successfully. Oct 2 19:43:58.858655 env[1096]: time="2023-10-02T19:43:58.858616009Z" level=info msg="shim disconnected" id=d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462 Oct 2 19:43:58.858890 env[1096]: time="2023-10-02T19:43:58.858860497Z" level=warning msg="cleaning up after shim disconnected" id=d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462 namespace=k8s.io Oct 2 19:43:58.858890 env[1096]: time="2023-10-02T19:43:58.858880646Z" level=info msg="cleaning up dead shim" Oct 2 19:43:58.860000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:43:58.860944 systemd[1]: cri-containerd-9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba.scope: Deactivated successfully. Oct 2 19:43:58.862570 kernel: audit: type=1334 audit(1696275838.860:731): prog-id=84 op=UNLOAD Oct 2 19:43:58.867166 env[1096]: time="2023-10-02T19:43:58.867119189Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2424 runtime=io.containerd.runc.v2\n" Oct 2 19:43:58.867000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:43:58.868131 env[1096]: time="2023-10-02T19:43:58.868101246Z" level=info msg="TearDown network for sandbox \"d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462\" successfully" Oct 2 19:43:58.868226 env[1096]: time="2023-10-02T19:43:58.868203972Z" level=info msg="StopPodSandbox for \"d3ae9b842bce5d75ae1c53ec52642b2921b607caec041763d18e2384eed2c462\" returns successfully" Oct 2 19:43:58.869587 kernel: audit: type=1334 audit(1696275838.867:732): prog-id=87 op=UNLOAD Oct 2 19:43:58.875588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba-rootfs.mount: Deactivated successfully. 
Oct 2 19:43:58.879025 env[1096]: time="2023-10-02T19:43:58.878967732Z" level=info msg="shim disconnected" id=9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba Oct 2 19:43:58.879118 env[1096]: time="2023-10-02T19:43:58.879021124Z" level=warning msg="cleaning up after shim disconnected" id=9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba namespace=k8s.io Oct 2 19:43:58.879118 env[1096]: time="2023-10-02T19:43:58.879033789Z" level=info msg="cleaning up dead shim" Oct 2 19:43:58.885452 env[1096]: time="2023-10-02T19:43:58.885395826Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2448 runtime=io.containerd.runc.v2\n" Oct 2 19:43:58.888070 env[1096]: time="2023-10-02T19:43:58.888029158Z" level=info msg="StopContainer for \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\" returns successfully" Oct 2 19:43:58.890328 env[1096]: time="2023-10-02T19:43:58.890284829Z" level=info msg="StopPodSandbox for \"ead56da25dbc1137a2aa6ec9d8354174dd6db49fd1cce6ead0d54fdf8fdae532\"" Oct 2 19:43:58.890410 env[1096]: time="2023-10-02T19:43:58.890358229Z" level=info msg="Container to stop \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:43:58.891824 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ead56da25dbc1137a2aa6ec9d8354174dd6db49fd1cce6ead0d54fdf8fdae532-shm.mount: Deactivated successfully. Oct 2 19:43:58.897603 systemd[1]: cri-containerd-ead56da25dbc1137a2aa6ec9d8354174dd6db49fd1cce6ead0d54fdf8fdae532.scope: Deactivated successfully. Oct 2 19:43:58.897000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:43:58.898569 kernel: audit: type=1334 audit(1696275838.897:733): prog-id=80 op=UNLOAD Oct 2 19:43:58.903000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:43:58.904553 kernel: audit: type=1334 audit(1696275838.903:734): prog-id=83 op=UNLOAD Oct 2 19:43:58.918161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ead56da25dbc1137a2aa6ec9d8354174dd6db49fd1cce6ead0d54fdf8fdae532-rootfs.mount: Deactivated successfully. 
Oct 2 19:43:58.920423 env[1096]: time="2023-10-02T19:43:58.920367116Z" level=info msg="shim disconnected" id=ead56da25dbc1137a2aa6ec9d8354174dd6db49fd1cce6ead0d54fdf8fdae532 Oct 2 19:43:58.920423 env[1096]: time="2023-10-02T19:43:58.920417373Z" level=warning msg="cleaning up after shim disconnected" id=ead56da25dbc1137a2aa6ec9d8354174dd6db49fd1cce6ead0d54fdf8fdae532 namespace=k8s.io Oct 2 19:43:58.920423 env[1096]: time="2023-10-02T19:43:58.920426912Z" level=info msg="cleaning up dead shim" Oct 2 19:43:58.926566 env[1096]: time="2023-10-02T19:43:58.926499797Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2478 runtime=io.containerd.runc.v2\n" Oct 2 19:43:58.926867 env[1096]: time="2023-10-02T19:43:58.926833233Z" level=info msg="TearDown network for sandbox \"ead56da25dbc1137a2aa6ec9d8354174dd6db49fd1cce6ead0d54fdf8fdae532\" successfully" Oct 2 19:43:58.926867 env[1096]: time="2023-10-02T19:43:58.926859754Z" level=info msg="StopPodSandbox for \"ead56da25dbc1137a2aa6ec9d8354174dd6db49fd1cce6ead0d54fdf8fdae532\" returns successfully" Oct 2 19:43:58.982463 kubelet[1404]: I1002 19:43:58.982397 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-etc-cni-netd\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.982463 kubelet[1404]: I1002 19:43:58.982452 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-host-proc-sys-kernel\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.982664 kubelet[1404]: I1002 19:43:58.982473 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-bpf-maps\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.982664 kubelet[1404]: I1002 19:43:58.982492 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-cgroup\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.982664 kubelet[1404]: I1002 19:43:58.982529 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-config-path\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.982664 kubelet[1404]: I1002 19:43:58.982559 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-run\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.982664 kubelet[1404]: I1002 19:43:58.982577 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0084bc52-46e5-4587-baad-5ba806c2c570-cilium-config-path\") pod \"0084bc52-46e5-4587-baad-5ba806c2c570\" (UID: \"0084bc52-46e5-4587-baad-5ba806c2c570\") " Oct 2 19:43:58.982664 
kubelet[1404]: I1002 19:43:58.982595 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-ipsec-secrets\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.982819 kubelet[1404]: I1002 19:43:58.982569 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:58.982819 kubelet[1404]: I1002 19:43:58.982566 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:58.982819 kubelet[1404]: I1002 19:43:58.982613 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-host-proc-sys-net\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.982819 kubelet[1404]: I1002 19:43:58.982625 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:58.982819 kubelet[1404]: I1002 19:43:58.982632 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxbw7\" (UniqueName: \"kubernetes.io/projected/1c271122-35d0-4734-a6c5-c140a89edb1d-kube-api-access-dxbw7\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.982938 kubelet[1404]: I1002 19:43:58.982638 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:58.982938 kubelet[1404]: I1002 19:43:58.982648 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-hostproc\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.982938 kubelet[1404]: I1002 19:43:58.982667 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-hostproc" (OuterVolumeSpecName: "hostproc") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:58.982938 kubelet[1404]: I1002 19:43:58.982686 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:58.982938 kubelet[1404]: I1002 19:43:58.982688 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-xtables-lock\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.983079 kubelet[1404]: I1002 19:43:58.982702 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:58.983079 kubelet[1404]: I1002 19:43:58.982711 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c271122-35d0-4734-a6c5-c140a89edb1d-hubble-tls\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.983079 kubelet[1404]: I1002 19:43:58.982728 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c271122-35d0-4734-a6c5-c140a89edb1d-clustermesh-secrets\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.983079 kubelet[1404]: I1002 19:43:58.982744 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-lib-modules\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.983079 kubelet[1404]: I1002 19:43:58.982762 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwcs4\" (UniqueName: \"kubernetes.io/projected/0084bc52-46e5-4587-baad-5ba806c2c570-kube-api-access-bwcs4\") pod \"0084bc52-46e5-4587-baad-5ba806c2c570\" (UID: \"0084bc52-46e5-4587-baad-5ba806c2c570\") " Oct 2 19:43:58.983079 kubelet[1404]: I1002 19:43:58.982778 1404 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-cni-path\") pod \"1c271122-35d0-4734-a6c5-c140a89edb1d\" (UID: \"1c271122-35d0-4734-a6c5-c140a89edb1d\") " Oct 2 19:43:58.983222 kubelet[1404]: I1002 19:43:58.982800 1404 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-hostproc\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:58.983222 kubelet[1404]: I1002 19:43:58.982809 1404 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-xtables-lock\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:58.983222 kubelet[1404]: I1002 19:43:58.982820 
1404 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-host-proc-sys-kernel\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:58.983222 kubelet[1404]: I1002 19:43:58.982828 1404 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-bpf-maps\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:58.983222 kubelet[1404]: I1002 19:43:58.982837 1404 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-etc-cni-netd\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:58.983222 kubelet[1404]: I1002 19:43:58.982845 1404 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-host-proc-sys-net\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:58.983222 kubelet[1404]: I1002 19:43:58.982853 1404 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-run\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:58.983222 kubelet[1404]: I1002 19:43:58.982868 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-cni-path" (OuterVolumeSpecName: "cni-path") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:58.983400 kubelet[1404]: W1002 19:43:58.982873 1404 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/1c271122-35d0-4734-a6c5-c140a89edb1d/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:43:58.983884 kubelet[1404]: I1002 19:43:58.982724 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:58.985190 kubelet[1404]: I1002 19:43:58.985162 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:43:58.985296 kubelet[1404]: W1002 19:43:58.985260 1404 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/0084bc52-46e5-4587-baad-5ba806c2c570/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:43:58.985479 kubelet[1404]: I1002 19:43:58.985442 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:43:58.985479 kubelet[1404]: I1002 19:43:58.985451 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c271122-35d0-4734-a6c5-c140a89edb1d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:43:58.985587 kubelet[1404]: I1002 19:43:58.985491 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:43:58.986891 kubelet[1404]: I1002 19:43:58.986864 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c271122-35d0-4734-a6c5-c140a89edb1d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:43:58.987019 kubelet[1404]: I1002 19:43:58.986987 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0084bc52-46e5-4587-baad-5ba806c2c570-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0084bc52-46e5-4587-baad-5ba806c2c570" (UID: "0084bc52-46e5-4587-baad-5ba806c2c570"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:43:58.987844 kubelet[1404]: I1002 19:43:58.987808 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c271122-35d0-4734-a6c5-c140a89edb1d-kube-api-access-dxbw7" (OuterVolumeSpecName: "kube-api-access-dxbw7") pod "1c271122-35d0-4734-a6c5-c140a89edb1d" (UID: "1c271122-35d0-4734-a6c5-c140a89edb1d"). InnerVolumeSpecName "kube-api-access-dxbw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:43:58.988012 kubelet[1404]: I1002 19:43:58.987882 1404 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0084bc52-46e5-4587-baad-5ba806c2c570-kube-api-access-bwcs4" (OuterVolumeSpecName: "kube-api-access-bwcs4") pod "0084bc52-46e5-4587-baad-5ba806c2c570" (UID: "0084bc52-46e5-4587-baad-5ba806c2c570"). InnerVolumeSpecName "kube-api-access-bwcs4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:43:59.083600 kubelet[1404]: I1002 19:43:59.083385 1404 reconciler.go:399] "Volume detached for volume \"kube-api-access-bwcs4\" (UniqueName: \"kubernetes.io/projected/0084bc52-46e5-4587-baad-5ba806c2c570-kube-api-access-bwcs4\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:59.083600 kubelet[1404]: I1002 19:43:59.083431 1404 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-cni-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:59.083600 kubelet[1404]: I1002 19:43:59.083440 1404 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-lib-modules\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:59.083600 kubelet[1404]: I1002 19:43:59.083448 1404 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-cgroup\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:59.083600 kubelet[1404]: I1002 19:43:59.083457 1404 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-config-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:59.083600 kubelet[1404]: I1002 19:43:59.083465 1404 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0084bc52-46e5-4587-baad-5ba806c2c570-cilium-config-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:59.083600 kubelet[1404]: I1002 19:43:59.083474 1404 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c271122-35d0-4734-a6c5-c140a89edb1d-cilium-ipsec-secrets\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:59.083600 kubelet[1404]: I1002 19:43:59.083482 1404 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c271122-35d0-4734-a6c5-c140a89edb1d-hubble-tls\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:59.083924 kubelet[1404]: I1002 19:43:59.083491 1404 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c271122-35d0-4734-a6c5-c140a89edb1d-clustermesh-secrets\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:59.083924 kubelet[1404]: I1002 19:43:59.083499 1404 reconciler.go:399] "Volume detached for volume \"kube-api-access-dxbw7\" (UniqueName: \"kubernetes.io/projected/1c271122-35d0-4734-a6c5-c140a89edb1d-kube-api-access-dxbw7\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:43:59.242672 kubelet[1404]: E1002 19:43:59.242623 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:59.447800 kubelet[1404]: I1002 19:43:59.447767 1404 scope.go:115] "RemoveContainer" containerID="fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406" Oct 2 19:43:59.449093 env[1096]: time="2023-10-02T19:43:59.448844429Z" level=info msg="RemoveContainer for \"fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406\"" Oct 2 19:43:59.451380 env[1096]: time="2023-10-02T19:43:59.451346710Z" level=info msg="RemoveContainer for \"fb4542abd17eb716226de24235ad640adf33330d5b29509d683d52841c9fd406\" returns successfully" Oct 2 19:43:59.451507 kubelet[1404]: I1002 19:43:59.451493 1404 scope.go:115] "RemoveContainer" 
containerID="9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba" Oct 2 19:43:59.452503 env[1096]: time="2023-10-02T19:43:59.452482049Z" level=info msg="RemoveContainer for \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\"" Oct 2 19:43:59.453119 systemd[1]: Removed slice kubepods-burstable-pod1c271122_35d0_4734_a6c5_c140a89edb1d.slice. Oct 2 19:43:59.454897 env[1096]: time="2023-10-02T19:43:59.454861465Z" level=info msg="RemoveContainer for \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\" returns successfully" Oct 2 19:43:59.455133 kubelet[1404]: I1002 19:43:59.455112 1404 scope.go:115] "RemoveContainer" containerID="9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba" Oct 2 19:43:59.455190 systemd[1]: Removed slice kubepods-besteffort-pod0084bc52_46e5_4587_baad_5ba806c2c570.slice. Oct 2 19:43:59.455409 env[1096]: time="2023-10-02T19:43:59.455313979Z" level=error msg="ContainerStatus for \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\": not found" Oct 2 19:43:59.455566 kubelet[1404]: E1002 19:43:59.455513 1404 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\": not found" containerID="9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba" Oct 2 19:43:59.455566 kubelet[1404]: I1002 19:43:59.455564 1404 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba} err="failed to get container status \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e2eef5d5b50775473dea3595bbb95b469b744878709f4ab4e25ae6ca7a9f7ba\": not found" Oct 2 19:43:59.651484 kubelet[1404]: E1002 19:43:59.651450 1404 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:59.831060 systemd[1]: var-lib-kubelet-pods-1c271122\x2d35d0\x2d4734\x2da6c5\x2dc140a89edb1d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddxbw7.mount: Deactivated successfully. Oct 2 19:43:59.831221 systemd[1]: var-lib-kubelet-pods-0084bc52\x2d46e5\x2d4587\x2dbaad\x2d5ba806c2c570-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbwcs4.mount: Deactivated successfully. Oct 2 19:43:59.831342 systemd[1]: var-lib-kubelet-pods-1c271122\x2d35d0\x2d4734\x2da6c5\x2dc140a89edb1d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:43:59.831431 systemd[1]: var-lib-kubelet-pods-1c271122\x2d35d0\x2d4734\x2da6c5\x2dc140a89edb1d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:43:59.831518 systemd[1]: var-lib-kubelet-pods-1c271122\x2d35d0\x2d4734\x2da6c5\x2dc140a89edb1d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 2 19:44:00.242957 kubelet[1404]: E1002 19:44:00.242881 1404 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:00.603416 kubelet[1404]: I1002 19:44:00.603255 1404 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0084bc52-46e5-4587-baad-5ba806c2c570 path="/var/lib/kubelet/pods/0084bc52-46e5-4587-baad-5ba806c2c570/volumes" Oct 2 19:44:00.604095 kubelet[1404]: I1002 19:44:00.604066 1404 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1c271122-35d0-4734-a6c5-c140a89edb1d path="/var/lib/kubelet/pods/1c271122-35d0-4734-a6c5-c140a89edb1d/volumes"